The Next Step in the Cyber Security Evolutionary Ladder: Complex Dynamic Payloads with High-Fidelity Content and Relational Scenarios
Commercial network traffic generation technologies such as Ixia BreakingPoint or Spirent simulate real-world legitimate traffic, distributed denial of service (DDoS), exploits, malware, and fuzzing. These technologies help to test and validate an organization’s security infrastructure.
Today, advanced behavior-based threats are growing more sophisticated, harder to detect, and faster to spread. Networks are becoming even more vulnerable to these threats, which cost more than $4B annually in the US alone. Detecting and mitigating Advanced Persistent Threats and Insider Threats demands far more advanced testing techniques, analytics, and sophisticated data sets for consistent detection, demonstration, measurement, and mitigation.
Today, you can combine commercial network traffic and synthetic data generation technologies to provide rich content that mirrors real-world network traffic, with configurable threat patterns embedded within the traffic data. This end-to-end solution generates the behavioral network traffic test data as well as the system response files, enabling immediate scoring and correction of system errors. This is a major advancement in this critical and growing segment of sophisticated threat-based network testing.
At first, test data and training data may seem like the same thing; however, there are several fundamental differences between the two that can make or break your data model.
Training data is used to build or create a model in its early stages and to help the model learn. Training data is common in machine learning, where it ensures the model can be "trained" to perform specific tasks and supports the ongoing development of the APIs and algorithms the machine will continuously work with.
Test data, on the other hand, consists of datasets used to validate existing data models: to confirm that algorithms produce correct results, that machine learning is proceeding correctly, and that output consistently mirrors the intended findings. Alternatively, test data can be used to invalidate a data model and disprove its efficiency, or to reveal that a model or AI is producing sub-optimal results and needs further tests or different kinds of training to reach maximum efficiency. The overall workflow moves through training, validation, and testing phases, each with its own steps.
Training AI, or any sort of machine learning model, is a rigorous process that requires a fair amount of training, validation, and testing, and it's very rare that a data model is perfected on its first try. That's why it's important for training and test data to be differentiated; otherwise, results will appear too similar, observations and evaluations for both datasets will be inconclusive, and more testing and validation will have to be done.
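As a minimal sketch of how the two datasets stay separate in practice, the example below uses scikit-learn to carve one synthetic dataset into training, validation, and test splits; the 70/15/15 ratio and the logistic-regression model are illustrative assumptions, not details taken from this article.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic records standing in for real data (illustrative only).
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

# Hold out 30% of the records, then split that holdout evenly into
# validation and test sets (an assumed 70/15/15 ratio).
X_train, X_hold, y_train, y_hold = train_test_split(X, y, test_size=0.30, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(X_hold, y_hold, test_size=0.50, random_state=42)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)  # training data builds the model

# Validation data guides tuning; test data gives the final, untouched check.
print("validation accuracy:", accuracy_score(y_val, model.predict(X_val)))
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

Because the test split is never touched during training or tuning, its score stays a meaningful, independent evaluation of the model.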
Banking, insurance, and financial applications are a part of the largest sector of consumer IT services and carry very confidential information pertaining to their day-to-day user base. Banking software must be free of errors not only to ensure the best possible customer service, but because hackers can exploit potential bugs in the application and access private financial data, which may compromise the assets of many individuals using said banking, insurance, or financial services.
Thus, it becomes essential that every iteration of a banking application is tested thoroughly before rolling out to the live servers. When used with test data, banking applications can simulate live consumer interactions with minimal risk to all parties involved. Test data can be used throughout each phase of the testing process, including but not limited to database testing, integration testing, and security testing. For example, test data can be used in security verification and validation to ensure only those with the correct permissions can access their data, as sketched below.
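Here is a hedged illustration of that permissions check. The Account record, can_view function, and user IDs are hypothetical stand-ins rather than details from any real banking system; the point is simply that synthetic test records let the check be exercised without exposing a real customer.

```python
from dataclasses import dataclass

@dataclass
class Account:
    account_id: str
    owner_id: str
    balance: float

def can_view(account: Account, requesting_user: str) -> bool:
    """Only the account owner may view the account (hypothetical rule)."""
    return account.owner_id == requesting_user

# Synthetic test record: nothing sensitive leaks if this data escapes.
test_account = Account(account_id="ACCT-0001", owner_id="user-42", balance=1250.00)

assert can_view(test_account, "user-42")       # the owner is allowed
assert not can_view(test_account, "user-99")   # anyone else is denied
print("permission checks passed")
```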
60 percent of breaches are linked to a third party. Why are you giving them access to your data when you don't need to?
Third-party contractors are the biggest source of security incidents outside of a company's employees.
Why are commercial companies and government agencies giving third parties access to their private and confidential data when viable technology alternatives to this practice exist and they don't need to?
With the rise of social media in our current day and age, it's only natural that test data is utilized to ensure platforms such as Facebook, Twitter, and Instagram are working as intended and at their highest degree of efficiency. Test data is utilized for both the desktop and mobile applications of these social media platforms and is generated for many different use cases. One particular use case for test data in social media applications is validating encryption, so that user data remains private and isn't accessible to those who shouldn't have it.
Furthermore, user data and test user data are used by platforms such as Facebook and LinkedIn to generate sponsored and tailored content for each specific individual. Each user's data is different to a degree, and thus generated test data must be dynamic enough to represent the millions of user types the social media world may have; a small sketch of dynamic profile generation follows.
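As one hedged way to produce that kind of varied profile data, the sketch below uses the open-source Faker library (our choice for illustration, not a tool named in this article) to generate distinct synthetic user records on each run.

```python
from faker import Faker

fake = Faker()

# Each call produces a different synthetic profile, so a test suite can be
# driven with as many distinct user types as it needs.
def synthetic_profile() -> dict:
    return {
        "name": fake.name(),
        "email": fake.email(),
        "city": fake.city(),
        "signup_date": fake.date_this_decade().isoformat(),
    }

profiles = [synthetic_profile() for _ in range(5)]
for profile in profiles:
    print(profile)
```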
A thought exercise on the System perspective of dev and test, as enabled by ExactData Synthetic Data.
Let’s consider the development of an application that scours incoming data for fraudulent activity. How would that test and analysis look with production data, de-identified production data, hand-crafted data, and ExD synthetic data?
Let’s also consider that the application will classify all transactions/events as either bad or good. The perfect application would classify every transaction correctly, resulting in 100% Precision (everything classified as bad was actually bad), a 100% capture rate (every actual bad classified as bad), a 0% escape rate (no bads classified as good), and a 0% False Positive rate (no goods classified as bad). The application needs to be developed, tested, and analyzed from a System perspective. For example, the application could classify every transaction as bad and achieve a 100% capture rate and a 0% escape rate, but it would also have poor Precision and a huge False Positive rate, thus requiring significant labor support to adjudicate the classifications. On the other extreme, the application could classify everything as good, be mostly right, and not catch any bads. Both of these boundary conditions are absurd, but they illustrate the importance of the System view.
One method of System analysis is the Confusion Matrix, noted below.
With production data, you don’t know where the bads are, so you can’t complete the confusion matrix.
With de-identified production data, you don’t know where the bads are, so you can’t complete the confusion matrix.
With hand-crafted data, you might have the “truth” needed to complete the confusion matrix, but you would not have the complexity or volume to truly test whether the application can find the “needle” of fraudulent behavior within the haystack of good behavior.
With ExD synthetic data, you know where every bad is (you have the ground truth), so you CAN complete all 4 quadrants of the confusion matrix, and only then can you conduct a system analysis, driving the application toward the real goal of tuning and optimizing Precision (TP/(TP+FP)) and Capture rate (TP/(TP+FN)) while at the same time minimizing Escapes (FN) and the False Positive rate (FP/(FP+TP)). Within a particular setup of an application version, these are typically threshold trade-offs, but with next-iteration development there is the opportunity to improve on all scores. A worked sketch of these metrics follows.
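As a minimal, hedged sketch of that scoring step, the label vectors below are illustrative placeholders: the ground-truth labels stand in for what a synthetic data generator would plant, the predictions stand in for the application under test, and the false-positive-rate formula follows this article's FP/(FP+TP) definition rather than the more common FP/(FP+TN).

```python
from sklearn.metrics import confusion_matrix

# 1 = bad, 0 = good. Ground truth is known because every bad was planted
# by the synthetic data generator; predictions come from the application.
y_true = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]   # illustrative placeholder labels
y_pred = [1, 0, 1, 1, 0, 0, 0, 0, 1, 0]   # illustrative placeholder outputs

tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()

precision = tp / (tp + fp)            # flagged items that were actually bad
capture_rate = tp / (tp + fn)         # actual bads that were caught
escape_rate = fn / (tp + fn)          # actual bads that slipped through
false_positive_rate = fp / (fp + tp)  # flagged items that were actually good

print(f"TP={tp} FP={fp} FN={fn} TN={tn}")
print(f"Precision={precision:.2f}  Capture={capture_rate:.2f}  "
      f"Escape={escape_rate:.2f}  FalsePositiveRate={false_positive_rate:.2f}")
```

With all four quadrants filled in, the threshold trade-off between capture rate and false positives can be measured directly rather than guessed at.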
Test data and test data management have become more prominent in today's cyber world. With trends such as data visualization, Scrum and Agile methodologies, artificial intelligence, and big data connectivity, test data has become essential to how companies operate with their technology.
Test data management is the process of creating non-production data with the intent that it will be utilized for rigorous testing and development of future applications. Test data is safer to use than real data when performing such development tests and can significantly reduce the life-cycle time a test product may need. Furthermore, test data can lower the security risks an organization may face during the software development process, can help prevent rollbacks, and can help set precedents for bug fixes in one's code or application.
Not only are there many benefits to efficient test data management strategies within the cyber security industry, but the benefits may extend into other departments such as R&D and marketing as well!
One of the major debates about the cyber world pertains to privacy and how much of it one really has. With companies such as Facebook, Google, and Apple using customer data more and more, we know that our once private lives may not be as secret as we think. In their book 'Who Knows: Safeguarding Your Privacy in a Networked World', authors Ann Cavoukian and Don Tapscott discuss how secure supposedly private documents, such as medical records and employment history, really are.
The truth is that with advancements in technology and cyber security, one's data is more readily available to companies, hackers, and even other ordinary individuals. Some companies make the point that they only use consumer data for our benefit, showing products one may need before one knows one needs them, or memorizing GPS routes if one travels the same way constantly. While it's true that our daily lives are significantly improved because our data is being used in this manner, there is one question we have to ask ourselves: is having less privacy a fair trade-off for potential everyday-life benefits?
One thing is for certain; we should all take measures to make sure our data is as safe as it can be. Limiting who has access to your social media profiles and information, using secure passwords, and not clicking sketchy links are very easy tips and tricks to always keep in mind to make sure your data is really yours.
What Test Data Is Being Used Today, and by Whom?
An organization’s development ecosystem, including technical partners, software developers, and contractors, has a growing need to access private and confidential data to do its job. Relevant data sources are a necessary component of the software development, technical integration, testing, implementation, and ongoing operations and maintenance processes, and production data sources are commonly accessed and modified for this purpose. Complex, integrated technology solutions can no longer be managed within an organization’s internal operations alone but require a large and varied global ecosystem of partners, consultants, technology companies, and contractors. There is a similar need for test data within the organization's cyber security operations. It is also common practice for this ecosystem to utilize historical data, captured network traffic, and simple network traffic generation technology for testing purposes.
With every new year come exciting new updates and trends to the technological world around us! We at ExactData are excited about many trends and future advancements to come, but here are five that we're excited about in particular!
1) Advancement of AI and Mobile Intelligence
It's no secret that AI and mobile intelligence are evolving every day. We see no end to the growth in both of these areas, where things like facial recognition, fingerprint, voice, and eye scans are all becoming a more reliable reality! This is seen through many of the innovations that Apple, Samsung, and Google have brought to the table, but also through other fields of data science as well!
2) Automation and Innovation
When one thinks of automation and innovation, jobs and mundane tasks are often the first things that come to mind. How is data being innovated or automated, you may ask? Well, being able to derive data with faster response rates, and to generate, switch, and use data for test purposes on the fly for exact results, seems innovative to us! This innovation can be traced to artificial intelligence as well, through pattern recognition, GPS sensors, self-driving cars, and more!
3) Cloud Computing and Cyber Security
Cloud computing is becoming more distributed, meaning a cloud provider can distribute services across other locations while operating fully from one area. Server updates, latency checks, and bandwidth fixes are becoming quicker every year, which not only benefits the cloud and its functions but can also be used to stop breaches, glitches, and hackers right in their tracks as soon as they get into the system.
4) Financial Patterns and Recognition
Recognizing financial data patterns has historically been tricky due to the immense analytical prowess and observational skill required. AI and statistical learning developments, however, can be trained to pick up these patterns more quickly than ever before, and with less error too. Financial analytics and trend recognition will certainly see upgrades in the upcoming year, especially with more variables such as cryptocurrency coming into play.
5) Accessibility and Privacy
Accessibility and privacy for data files go hand in hand: the same controls that make something more accessible also give you the means to make it more restricted. Added levels of security for data can come in many different forms: test data, artificial data, cloud computing, advanced machine learning, more advanced security protocols, and more. The rule of thumb is to keep everything you may need later private, so that nobody else can take or modify it.
While there are many trends we believe to be up and coming in the world of data, these are just a few we believe to be relevant to both the industry and the general public as a whole.