Cryton is a Cron-like red team framework for the automation and scheduling of complex attack scenarios. Through its core and attack modules, it provides ways to plan, execute, and evaluate multi-step attacks.
The lifecycle of an attack scenario in Cryton can be seen in the following picture:
With Cryton you can:
- Design an attack Template
- Create an Instance
- Schedule (or directly Execute) a Run
- Generate a Report
- Evaluate results
The purpose of Cryton is to execute complex attack scenarios in which the system under test is known in advance. It was designed to assist red teams in cyber security exercises by making attack scenarios repeatable. These scenarios are often prepared in advance and reflect vulnerabilities hidden in the blue team's infrastructure.
Imagine you are taking part in a cyber defense exercise as a tutor. The task for your trainees is to defend a system or a whole infrastructure (which you prepared) against an attacker. This system is full of vulnerabilities and misconfigurations (which you also prepared). Your trainees have, e.g., one hour to fix as many of these issues as they can find. Imagine then that you have to check each and every system for all the fixes to see how well your trainees succeeded. How would you do that effectively?
This is where Cryton comes into play. If you know all the vulnerabilities in the trainees' systems - and you do - you can prepare an attack scenario to check whether they are still exploitable after the fix. Cryton will execute the plan against all the targets you specify and then generate reports (both human- and machine-readable). You can then not only see which attack steps succeeded on which system, but also score your trainees based on these results.
With this in mind, you should not expect Cryton to be some kind of evil artificial intelligence capable of taking over the world. It is simply a scheduler for Python modules: one that executes these modules according to an execution tree, with conditions based on each step of the scenario. The modules themselves happen to be scripts orchestrating well-known attack tools. But that is it.
The next section explains the choices behind the currently employed technologies. Please keep in mind that these technologies are not meant to be final and unchangeable; they simply appeared best suited for the task at the time of development. It is possible that they will change in the future.
This was the first choice made for the scheduler module. It allows you to schedule a Python function to run at a specific time or day, or even at a recurring interval. It is fairly lightweight and does not need much in terms of resources or capacity. So far I have not found anything better suited for the task. There is, though, one small problem with running it as a service, but there are ways around it.
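The core idea can be illustrated with Python's standard-library `sched` module, used here as a simplified stand-in for the actual scheduler library (which additionally offers cron-style and interval triggers):

```python
import sched
import time

executed = []

def run_module(name):
    # Stand-in for an attack module's entry point; a real module
    # would orchestrate an attack tool and return its output.
    executed.append(name)

scheduler = sched.scheduler(time.time, time.sleep)

# Schedule the module to run 0.05 s from now; a real scenario would
# use an absolute date/time, a cron expression, or an interval.
scheduler.enter(0.05, 1, run_module, argument=("scan_target",))
scheduler.run()  # blocks until all scheduled events have fired
```

The names `run_module` and `scan_target` are illustrative only; the point is that a plain Python callable is handed to the scheduler together with a trigger time.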
At the beginning, Cryton used an SQLite database with direct access. That changed because SQLite does not scale well. The second choice was PostgreSQL, which has stayed to this day, later combined with the Django ORM. Using the Django REST framework for the REST interface also emerged from this choice.
For developing the Master-Worker architecture, where you can issue commands remotely, we needed some kind of RPC. As experience showed us, we also needed it to be asynchronous. That is why we chose the RabbitMQ messaging system.
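The asynchronous-RPC pattern this enables (a request tagged with a correlation ID, answered on a reply queue whenever the worker gets to it) can be sketched with standard-library queues standing in for a live RabbitMQ broker; the message fields used here are illustrative assumptions, not Cryton's actual protocol:

```python
import queue
import threading
import uuid

requests = queue.Queue()  # stands in for the worker's RabbitMQ queue
replies = queue.Queue()   # stands in for the master's reply queue

def worker():
    # The worker consumes commands and publishes results tagged with
    # the same correlation ID, so the master can match them up later.
    while True:
        msg = requests.get()
        if msg is None:  # shutdown sentinel
            break
        replies.put({"correlation_id": msg["correlation_id"],
                     "output": f"ran {msg['command']}"})

threading.Thread(target=worker, daemon=True).start()

# The master fires the command and is free to do other work; the
# reply is correlated by ID instead of blocking the caller.
corr_id = str(uuid.uuid4())
requests.put({"correlation_id": corr_id, "command": "start_module"})

reply = replies.get(timeout=5)
assert reply["correlation_id"] == corr_id
requests.put(None)  # stop the worker
```

This is the classic RPC-over-message-queue design: because the reply travels on its own queue, the master never blocks on a slow worker.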
I guess everyone in the IT security field has heard of the Metasploit framework. It is one of the most complete and usable open-source attack tools available. Naturally, Cryton uses it for some of its attack modules - the majority of simulated attacks in CDXs use Metasploit in some way. But its attacking capabilities are not the only reason to use it. Its real advantage is Metasploit's session management: every time you open a session to a machine, it is stored under a specific ID, which you can later use to communicate with the target. This is one of the main features you can use while executing your attack scenario in Cryton.
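The session-reuse idea (open once, refer by ID in later steps) can be shown with a toy registry; the class and method names below are illustrative, not Metasploit's or Cryton's actual API:

```python
class SessionRegistry:
    """Toy stand-in for Metasploit's session management: sessions are
    stored under an ID so later scenario steps can reuse them."""

    def __init__(self):
        self._sessions = {}
        self._next_id = 0

    def open_session(self, target):
        # In Metasploit this would be the session created by a
        # successful exploit; here we just record the target.
        self._next_id += 1
        self._sessions[self._next_id] = {"target": target}
        return self._next_id

    def run_in_session(self, session_id, command):
        # A later step references the stored ID instead of
        # re-exploiting the target.
        session = self._sessions[session_id]
        return f"{session['target']}: {command}"

registry = SessionRegistry()
sid = registry.open_session("10.0.0.5")          # step 1: exploit
output = registry.run_in_session(sid, "whoami")  # step 2: reuse session
```

The payoff in a scenario is that only the first step pays the cost of exploitation; every following step just names the session ID.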
To bundle everything together and make deployment effortless, we use docker-compose.
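A deployment of the pieces mentioned above might look roughly like this; the service names, images, and versions are illustrative assumptions, not the project's actual compose file:

```yaml
services:
  core:                 # master component (illustrative name)
    build: .
    depends_on:
      - db
      - broker
  db:                   # PostgreSQL backing the Django ORM
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example
  broker:               # RabbitMQ for master-worker messaging
    image: rabbitmq:3-management
```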