It’s been a hectic and rewarding few months at Elemendar, full of firsts: releasing the first beta of our product to our first customers, signing on our first cyber threat intelligence (CTI) vendor customer, hiring our first non-founding core team member, exhibiting at our first trade show, and bringing on our first investors, not to mention growing the top line at 4x year-on-year (OK, that wasn’t a first, but it sure added to the ambiance). It’s also been a few months since Elemendar’s second birthday, and altogether that’s prompted me to go back to basics and reflect a bit on the journey so far. I’m sharing these thoughts to stake down where we came from, where we are now, and where we see the future. And, more importantly, why we do what we do.

Elemendar works in a corner of the cybersecurity world called cyber threat intelligence (CTI). CTI is the discipline of looking for patterns in what happens during cyber attacks, and turning those patterns into information that tells defenders (and their tools) what the attacks are and how to defend against them. Like the cybersecurity industry itself, the whole discipline has been maturing rapidly over the last few years.

As more and more data has become available over the past decade, the threat intelligence industry started taking off in earnest around 2013-14, and it has grown into something fairly sizeable: a market worth a few billion dollars for CTI products alone. And that figure is only a fraction of the overall investment, because far more again is spent on the people working with CTI.

These people – the world’s CTI experts and analysts – use their human intelligence and creativity to make sense of cyber attack patterns. They distil these patterns, together with a mass of complicated technical information, into an understanding of where future attacks might go. That is the real value of their work. The trouble is, CTI vendor analysts may well find all the useful patterns they can, but if there aren’t enough of those same smart people on the customer side – to make sense of all the good intelligence coming out of the vendors – how does that value get realised? Spoiler: it doesn’t, and that’s a huge waste when you consider the size of the investment we just talked about.

That is the exact problem that moved me and Syra at the first GCHQ / NCSC Cyber Accelerator over two years ago. We saw that, in both the public and private sectors, most of that intelligence was expressed in natural language, the language we humans use to communicate with each other. That’s fine as long as there is a human on the other side – the CTI consumer’s side – to do something with their understanding of that language. The problem is that those people are in desperately short supply: we simply don’t have enough trained cyber threat analysts. If LinkedIn is anything to go by, CTI jobs are growing at 5% a month.

That’s 80% a year – not a bad growth rate for a start-up, but terrible news for filling those positions. These are skills that take years to build. Almost doubling that talent pool every year is impossible, and not doing so leaves us all the more vulnerable.
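For the sceptical, that 80% figure is just the 5% monthly rate compounded over twelve months, which you can check in a couple of lines:

```python
# Compound a 5% monthly growth rate over a full year.
monthly_rate = 0.05
annual_growth = (1 + monthly_rate) ** 12 - 1
print(f"{annual_growth:.0%}")  # prints "80%"
```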

These facts and figures don’t tell the whole story, however. Behind them, we saw an enormous waste of human talent: over forty thousand of our best and brightest slaving away, copy-pasting hashes between documents and analysis tools, huge chunks of their days spent on drudgery well below their potential. Even with all that toil, they know they couldn’t possibly assimilate all the information in front of them, and that they aren’t protecting everything they care about as well as they aspire to. I still remember the day at the Accelerator when, sitting alongside Jamie W, a CDO analyst at the NCSC, I shared in his version of that frustration. Every day since, I keep asking: is this the best we can do?

We believe that valuable people should be free to do valuable work.

So two years ago, Syra and I set ourselves a challenge. Let’s use all the (then recent) advances in machine intelligence, together with some of the (then emerging) standards within CTI, to solve this problem. If we don’t have enough people to do that job, and if we know that we can’t have enough people to do that job, can we use some of those advances in technology and in standards to make machines better able to consume CTI? To get the machines to do the job instead, and reduce the waste of CTI that makes us more vulnerable? To free those precious few analysts to do their best work?

Two years later, we know the answer. Yes, we can.