One IP, 23,307 unique passwords, eight hours: anatomy of a 'rise' that wasn't

May 2, 2026 · By IntrusionLabs · cowrie, ssh, brute-force, hassh, melbikomas, honeypot

The attack-volume chart on our dashboard jumped roughly 3x over the last 48 hours. My first guess was the usual suspects: a botnet rotating through a fresh IP pool, a worm picking up speed, a scanner project running a wider survey. Something distributed, in other words.

It wasn't. The whole spike is one IP, hitting one of our three sensors, running a Go-based SSH brute-forcer for about eight hours. And the part that's actually interesting is what the operator apparently never noticed.

This is a live writeup. The actor is still in our data and the URLs below resolve. I'll come back and update this post when the activity subsides or escalates.

The spike

Total events across all three sensors, last five days, from honeypot_events_daily:

Day Events Per-sensor
2026-04-26 32,244 10,748
2026-04-27 40,137 13,379
2026-04-28 21,796 7,265
2026-04-29 100,181 33,394
2026-04-30 219,609 73,203

The dashboard normalizes per active sensor: SUM(event_count) / COUNT(DISTINCT agent_id) per day, computed off the daily continuous aggregate. Three sensors, with a baseline around 10–25k events/sensor/day. April 30 came in at 73k.
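For readers who want to reproduce the normalization outside the database, here's a minimal Python sketch of the same SUM / COUNT(DISTINCT) logic. The function name and row layout are illustrative, not our actual schema:

```python
from collections import defaultdict

def normalize_per_sensor(events):
    """events: iterable of (day, agent_id, event_count) rows from the
    daily aggregate. Returns {day: events per active sensor}."""
    totals = defaultdict(int)
    sensors = defaultdict(set)
    for day, agent_id, count in events:
        totals[day] += count
        sensors[day].add(agent_id)
    # SUM(event_count) / COUNT(DISTINCT agent_id), per day
    return {day: totals[day] // len(sensors[day]) for day in totals}

# April 30 per-sensor rows from the table below
rows = [
    ("2026-04-30", "newark", 22_074),
    ("2026-04-30", "singapore", 18_425),
    ("2026-04-30", "seattle", 179_110),
]
print(normalize_per_sensor(rows))  # {'2026-04-30': 73203}
```

Integer division is fine here; the dashboard rounds the same way.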

But the work wasn't spread evenly. Per-sensor for those last two days:

Day Newark Singapore Seattle
2026-04-29 37,261 18,093 44,827
2026-04-30 22,074 18,425 179,110

Singapore was flat, Newark slightly up, Seattle did about 60x its baseline on April 30 alone. So the question is really just: what was hitting Seattle?

The IP

One source IP accounts for 186,456 of the events on Seattle: 185.246.152.229. Live actor record: /threats/actor/185.246.152.229/.

  • Country: NL (hosted via Lithuanian VPS provider Melbikomas UAB, AS56630)
  • First seen: 2026-04-29 22:50 UTC
  • Last seen: 2026-04-30 07:01 UTC
  • Total observed events: 186,456
  • External corroboration at the time of writing: zero. No hits on AbuseIPDB, Spamhaus, DShield, or BlocklistDE
  • Our intent verdict: suspicious / reconnaissance (confidence 0.41)

That last point is worth pausing on. Brand-new attacker, eight hours of activity, more events than most of our top-100 actors ever produce, and not yet on any of the eight blocklists we corroborate against. If your CTI feed only flags things once two or three sources agree, you wouldn't have seen this one yet.

Cadence by hour, UTC:

2026-04-29 22:00   3,558    (warm-up)
2026-04-29 23:00  22,776
2026-04-30 00:00  21,389
2026-04-30 01:00  21,224
2026-04-30 02:00  22,800
2026-04-30 03:00  23,629
2026-04-30 04:00  23,342
2026-04-30 05:00  23,446
2026-04-30 06:00  23,710
2026-04-30 07:00     582    (stop)

Eight near-flat hours at ~23k events/hour, then a clean cutoff. Looks like a scheduled job to me, not someone driving it interactively.
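One cheap way to encode that "flat hours, clean cutoff" intuition is a coefficient-of-variation check over the hourly buckets. This is a sketch with an arbitrary threshold, not our production detector:

```python
from statistics import mean, pstdev

def looks_scheduled(hourly_counts, cv_threshold=0.05):
    """Heuristic: near-constant hourly volume suggests an unattended job.
    The first and last buckets are partial hours (warm-up / stop), so
    only the full hours in between are scored."""
    full_hours = hourly_counts[1:-1]
    cv = pstdev(full_hours) / mean(full_hours)  # coefficient of variation
    return cv < cv_threshold, cv

# The actor's hourly cadence from the table above
counts = [3_558, 22_776, 21_389, 21_224, 22_800,
          23_629, 23_342, 23_446, 23_710, 582]
flagged, cv = looks_scheduled(counts)
print(flagged)  # True: the full hours vary by only a few percent
```

A human driving a tool interactively produces much lumpier buckets; this run's full hours sit within roughly 4% of their mean.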

The behavior

Cowrie event distribution from this IP:

Event type Count
cowrie.session.connect 23,307
cowrie.client.version 23,306
cowrie.client.kex 23,306
cowrie.login.success 23,307
cowrie.session.params 23,307
cowrie.command.input 23,307
cowrie.session.closed 23,308
cowrie.log.closed 23,307

23,307 sessions, every one with the same eight-event lifecycle. Open TCP, KEX, log in, run one command, close. No retries.
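If you want to filter your own Cowrie logs for this pattern, the lifecycle check is nearly a one-liner. The event ordering here is inferred from the distribution above, not guaranteed by Cowrie:

```python
# The eight event types from the distribution above, in inferred emit order
LIFECYCLE = [
    "cowrie.session.connect", "cowrie.client.version", "cowrie.client.kex",
    "cowrie.login.success", "cowrie.session.params", "cowrie.command.input",
    "cowrie.session.closed", "cowrie.log.closed",
]

def matches_lifecycle(events):
    """True when a session's ordered eventids are exactly the
    connect -> kex -> login -> single command -> close sequence."""
    return [e["eventid"] for e in events] == LIFECYCLE

session = [{"eventid": name} for name in LIFECYCLE]
print(matches_lifecycle(session))       # True
print(matches_lifecycle(session[:-1]))  # False: truncated session
```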

The client version banner across every session is SSH-2.0-Go. That's the default identifier from golang.org/x/crypto/ssh when nobody bothers to override it. Custom Go SSH client, default banner, no obfuscation.

Single HASSH fingerprint across all 23,309 SessionProfile records: 01ca35584ad5a1b66cf6a9846b5b2821. One library, one config, one binary.
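HASSH itself is just an MD5 over the algorithm lists the client offers in SSH_MSG_KEXINIT, which is why one binary with one config always produces one fingerprint. A minimal sketch; the example lists are plausible Go-client defaults, not the actor's actual KEXINIT:

```python
from hashlib import md5

def hassh(kex_algorithms, encryption, macs, compression):
    """Client HASSH: MD5 over the semicolon-joined, comma-separated
    algorithm lists from the client's SSH_MSG_KEXINIT."""
    hassh_str = ";".join(",".join(group) for group in
                         (kex_algorithms, encryption, macs, compression))
    return md5(hassh_str.encode()).hexdigest()

# Illustrative only: NOT the actor's real algorithm lists
example = hassh(
    ["curve25519-sha256", "ecdh-sha2-nistp256"],
    ["aes128-gcm@openssh.com", "aes128-ctr"],
    ["hmac-sha2-256"],
    ["none"],
)
print(example)  # a stable 32-hex-char fingerprint for this config
```

Change any one algorithm in any list and the fingerprint changes, which is what makes HASSH a tooling identifier rather than a host identifier.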

Login attempts:

username: root
distinct passwords: 23,307
distinct attempts:  23,307

Every session is root against a different password. Zero repeats over eight hours. So the brute-forcer is consuming a wordlist top to bottom and marking each entry as used, which is correct behavior for a serious attempt. A small sample of what they tried:

root / 19830317     (date format YYYYMMDD)
root / 52101314
root / 1401
root / karibou
root / 030189       (date format DDMMYY)
root / hotass
root / boricua
root / starfire
root / queens
root / lxsz60652
root / montgom2409
root / 18273645

This isn't Mirai's 60-entry hardcoded list and it isn't the standard credential wordlists shipped with most botnet kits. The mix of birthdays, names, and what look like personal-tag passwords reads more like a real-world credential dump filtered down to plausible Linux root passwords. rockyou, hashes.org, one of the breached.to-era lists, something in that family.

After each successful login, the attacker runs exactly one command:

echo -e "\x6F\x6B"

That decodes to literal ok. It's a smell test for the shell: a real shell processes the escape sequences and prints ok, while a fake shell that just echoes its input prints the raw \x6F\x6B. A reasonable probe. The odd part is that the operator doesn't appear to read the result: Cowrie captures the response, but the session closes too quickly for the client to have plausibly read it back over the wire.
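A toy simulation of why the probe works. Neither function is Cowrie's actual implementation; they just model the two behaviors being distinguished:

```python
import codecs

PROBE = r'echo -e "\x6F\x6B"'

def real_shell(cmd):
    """A real shell's `echo -e` interprets \\xHH escapes before printing."""
    arg = cmd.split(None, 2)[2].strip('"')
    return codecs.decode(arg, "unicode_escape")

def naive_fake_shell(cmd):
    """A fake shell that parrots its argument leaks the raw escapes."""
    return cmd.split(None, 2)[2].strip('"')

print(real_shell(PROBE))        # ok
print(naive_fake_shell(PROBE))  # \x6F\x6B
```

Cowrie's emulated shell passes this particular test, which is presumably why the operator kept going.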

What the operator should have noticed

This is the part I find genuinely interesting.

In Cowrie's default configuration, every login attempt succeeds. So this operator just generated 23,307 consecutive Authentication succeeded events with 23,307 different passwords against the same root account. No real-world server behaves that way. By attempt fifty you should be suspicious; by attempt five hundred you should be certain.

A competent SSH brute-forcer watches its own success rate. Hitting a real host you'll see thousands of Permission denied followed by the occasional Accepted password, and that low hit rate is the signal worth acting on. A 100% success rate is either a credential-stuffing miracle or a honeypot. There isn't really a third option.
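The sanity check the operator skipped fits in a few lines. Thresholds here are illustrative, not empirically tuned:

```python
def honeypot_suspect(successes, attempts, min_attempts=50, max_rate=0.05):
    """Abort heuristic for a brute-forcer: real SSH fleets yield a tiny
    success rate, so a run that keeps succeeding is almost certainly
    talking to a honeypot."""
    if attempts < min_attempts:
        return False  # not enough signal yet
    return successes / attempts > max_rate

# Fifty root logins, fifty different passwords, fifty successes:
print(honeypot_suspect(successes=50, attempts=50))   # True: stop, it's a trap
print(honeypot_suspect(successes=1, attempts=500))   # False: plausible real hit
```

By this rule the Melbikomas run should have aborted within the first minute. It ran for eight hours.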

So either:

  1. They don't read their own logs in real time. They fire the brute-forcer, dump success/fail to a file, and only check it after the run completes. That's consistent with the scheduled-job cadence we see.
  2. They read the logs but only act on a downstream signal, like whether their echo probe produces the expected output. But they don't appear to read the probe response either, so this falls apart.
  3. They know it's a honeypot and don't care, because what they're actually doing is wordlist validation or product testing.

Case 1 is the simplest explanation and the least flattering. A high-volume credential-scanning operation that doesn't sanity-check response rate against ground truth is sloppy. For comparison: the libssh botnet I wrote about previously generated 109k sessions across 4,154 IPs, roughly one session per IP per day. This single Melbikomas IP ran 23k sessions in eight hours. Much more aggressive per-IP, and apparently much less careful.

It's not a singleton

The HASSH 01ca35584ad5a1b66cf6a9846b5b2821 doesn't belong to this IP alone. Live cluster: /tools/hassh/01ca35584ad5a1b66cf6a9846b5b2821/.

Nine distinct source IPs in our dataset carry this fingerprint:

IP Country ASN Org Sessions Active window
217.24.173.77 UA 21497 PrJSC VF UKRAINE 32,162 Apr 1–2
185.246.152.229 NL 56630 Melbikomas UAB 23,298 Apr 29–30
45.167.20.121 AR 267721 FIORANI ALEJANDRO 11,665 Mar 30
141.94.207.126 FR 16276 OVH SAS 8,204 Mar 6–7
103.105.67.170 ID 17995 PT iForte Indonesia 4,083 Apr 18
218.202.186.98 CN 9808 China Mobile 151 Mar 28
117.50.208.104 CN 4808 China Unicom 8 Mar 29 – Apr 2
94.159.59.30 RU 49531 NetCom-R LLC 4 Mar 4
119.62.86.238 CN 4837 China Unicom 4 Apr 8

Same Go binary running on rented infrastructure across nine ASNs in seven countries. Each IP does one big burst and then goes quiet: the OVH IP did 65k events in two days, the PrJSC VF UKRAINE IP did 257k. The Melbikomas burst is the latest installment from an operator who has been rotating IPs since at least early March, and once an IP is burned they don't seem to come back to it.

Looking at any one of those IPs on its own, you'd call it a lone brute-forcer. The shared HASSH undoes that.

This cluster sits below our default campaign-detection threshold, which promotes a HASSH at ≥50 actors with ≥3 distinct /16 subnets. Nine IPs is too small. But it has the right shape: distinct ASNs, distinct countries, sustained behavior across two months, a single tooling fingerprint. Worth a threshold review on our end.
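The promotion rule is easy to check against the cluster. `campaign_qualifies` is a sketch of the rule as described, not our actual pipeline:

```python
from ipaddress import ip_address, ip_network

def campaign_qualifies(ips, min_actors=50, min_slash16=3):
    """Mirrors the promotion rule above: a HASSH becomes a campaign
    at >=50 actors spanning >=3 distinct /16 subnets."""
    slash16s = {ip_network(f"{ip_address(ip)}/16", strict=False) for ip in ips}
    return len(ips) >= min_actors and len(slash16s) >= min_slash16

cluster = ["217.24.173.77", "185.246.152.229", "45.167.20.121",
           "141.94.207.126", "103.105.67.170", "218.202.186.98",
           "117.50.208.104", "94.159.59.30", "119.62.86.238"]
print(campaign_qualifies(cluster))  # False: nine distinct /16s, but only nine actors
```

The subnet spread already clears the bar by a wide margin; it's purely the actor count that keeps this cluster below the threshold.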

What a defender does with this

Per-IP blocking is whack-a-mole here. By the time 185.246.152.229 lands on a feed, the operator has moved on, and based on the table above they've been switching IPs every couple of weeks since at least early March.

What's stable across the cluster:

  1. The HASSH 01ca35584ad5a1b66cf6a9846b5b2821. If your edge SSH proxy or honeypot logs HASSH (Cowrie does, Zeek does, OpenCanary doesn't natively), match on it. Block or rate-limit by fingerprint, not address.
  2. The client banner SSH-2.0-Go with no version field. Coarser than HASSH because plenty of legitimate Go services run SSH clients, but it's a useful pre-filter at the IDS layer if you can't capture HASSH directly.
  3. The behavioral signature: a session that opens, immediately tries root with a never-before-seen password, and runs echo -e "\x6f\x6b" is this exact actor. Any one of those is suggestive; all three together is diagnostic.
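If you're scoring sessions yourself, the three indicators combine naturally into a small matcher. The field names here are hypothetical, not a real Cowrie schema:

```python
ACTOR_HASSH = "01ca35584ad5a1b66cf6a9846b5b2821"

def matches_actor(session):
    """Score a session dict against the three stable indicators above."""
    indicators = [
        session.get("hassh") == ACTOR_HASSH,
        session.get("client_banner") == "SSH-2.0-Go",
        (session.get("username") == "root"
         and session.get("command", "").startswith('echo -e "\\x6')),
    ]
    return sum(indicators)  # 3/3 is diagnostic, 1/3 merely suggestive

hit = {"hassh": ACTOR_HASSH, "client_banner": "SSH-2.0-Go",
       "username": "root", "command": 'echo -e "\\x6F\\x6B"'}
print(matches_actor(hit))  # 3
```

Treat the score as a triage signal: alert at 3, review at 2, and use 1 only to enrich other detections.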

If you want to track the cluster over time:

curl https://intrusionlabs.com/api/v1/fingerprints/hassh/01ca35584ad5a1b66cf6a9846b5b2821

Returns the live actor list. New IPs joining the cluster will show up there.

I'll come back and update this post if a tenth IP shows up on the same HASSH, or if 185.246.152.229 reappears. If you've seen this fingerprint hit your own honeypot infrastructure, get in touch; I'd like to compare notes.

References (live data)