The following are the components you will need to set up if you want to run the entire purpleteam solution yourself.
If you can’t be bothered with all this work, then use the purpleteam-labs
cloud environment. Head to the quick-start page.
The following diagram shows how the purpleteam components hang together:
Before most of the supporting commands can be run, such as docker-compose-ui (which hosts the stage two containers) and sam-cli (which hosts the purpleteam lambda functions), the compose_pt-net Docker network needs to be created.
You can create this bridge network manually; alternatively, once you have the purpleteam-orchestrator repository cloned, run the following two commands from its root directory:
```shell
npm run dc-build
npm run dc-up
```
dc-build will build the stage one Docker services (images).
dc-up will create the required Docker network and containers; it will then bring up all of the stage one containers listed in the orchestrator-testers-compose.yml file.
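If you would rather create the bridge network by hand instead of letting dc-up create it, one way is sketched below. The subnet is an assumption inferred from the 172.25.0.x addresses used later on this page; check the compose file for the real network definition:

```shell
# Hypothetical manual creation of the network that dc-up would otherwise create.
# The subnet is an assumption based on the 172.25.0.x addresses referenced below.
docker network create \
  --driver bridge \
  --subnet 172.25.0.0/24 \
  compose_pt-net
```

You can confirm the network exists afterwards with `docker network ls`.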
Obviously you are going to need a web application that you want to put under test. If you don't yet have a SUT (System Under Test) in a Docker container ready to be tested, a couple of options to start experimenting with are:
```yaml
version: "3.7"
networks:
  compose_pt-net:
    external: true
services:
  web:
    networks:
      compose_pt-net:
    depends_on:
      - mongo
    container_name: pt-sut-cont
    environment:
      - NODE_ENV=production
  mongo:
    networks:
      compose_pt-net:
```
This option means that NodeGoat will be running in the compose_pt-net network created by the orchestrator's docker-compose commands.
When you are ready to bring NodeGoat up, just run the following command from the NodeGoat root directory:
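Assuming the compose file shown above is saved as docker-compose.yml in the NodeGoat root directory, that would typically be:

```shell
# Assumes the compose file above is saved as docker-compose.yml
# in the NodeGoat root directory.
docker-compose up -d
```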
Details on installing and configuring the aws cli and aws-sam-cli, in order to be able to host the purpleteam lambda functions locally, can be found here.
Details on configuring and debugging (gaining insight into what is happening inside the containers), if needed, can be found here.
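As a sketch of what hosting the lambda functions locally looks like (the host and port values here are taken from the firewall notes further down this page; the actual invocations are in the local-workflow documentation):

```shell
# Sketch only: host the purpleteam lambda functions locally with aws-sam-cli.
# The --host/--port values are assumptions taken from the firewall notes below;
# see the local-workflow documentation for the real commands.
sam local start-lambda --host 172.25.0.1 --port 3001
```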
To run the stage one containers (orchestrator, testers and redis) we use npm scripts from the purpleteam-orchestrator project to run the docker-compose files. The main docker-compose file is orchestrator-testers-compose.yml.
The orchestrator-testers-compose.yml file has a bind mount that expects the
HOST_DIR environment variable to exist and be set to a host directory of your choosing.
Make sure you have a source directory set up and that the HOST_DIR environment variable is set to its path.
We usually add this to a .env file in the root directory of the purpleteam-orchestrator.
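For example (the directory path here is purely illustrative; use your own):

```shell
# .env in the purpleteam-orchestrator root directory.
# The path is illustrative; point HOST_DIR at a directory of your choosing.
HOST_DIR=/home/you/purpleteam-local/sut
```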
This host directory is written to by the testers and the orchestrator, and read from by the orchestrator. It needs at least group rwx permissions.
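Setting that up might look like the following sketch (the directory path is illustrative and should match your HOST_DIR value):

```shell
# Create the HOST_DIR directory and grant the group rwx permissions.
# The path is illustrative; match it to the value you set for HOST_DIR.
mkdir -p "$HOME/purpleteam-local/sut"
chmod g+rwx "$HOME/purpleteam-local/sut"
```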
If you use a firewall, you may have to make sure that the purpleteam components can communicate with each other.
Communications (TCP) will need to flow from the app-scanner container (pt-app-scanner-cont), bound to the pt-net Docker network (as listed by docker network ls) at IP address 172.25.0.120, to the IP address and port that the local Lambda is listening on (172.25.0.1:3001). These values can be seen in the sam local start-lambda commands (as shown in the local-workflow documentation) used to host the lambda functions.
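With ufw, for example, an allow rule might look like the following sketch (the addresses and port are those given above; adapt the rule to whichever firewall you actually use):

```shell
# Hypothetical ufw rule: allow TCP from the app-scanner container IP
# to the address and port the local Lambda is listening on.
sudo ufw allow proto tcp from 172.25.0.120 to 172.25.0.1 port 3001
```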
Details on installing the orchestrator's dependencies and configuring it can be found here.
Currently purpleteam has the app-scanner implemented. The server-scanner and tls-checker are stubbed out and waiting to be implemented.
Additional Testers can be added by community contributors.
Details on installing the app-scanner's dependencies and configuring it can be found here.
Not yet implemented.
Details on installing the server-scanner's dependencies and configuring it will be found here once it is implemented.
Not yet implemented.
Details on installing the tls-checker's dependencies and configuring it will be found here once it is implemented.
Details on installing the purpleteam CLI's dependencies and configuring it can be found here.