The terms "database" and "database management system" are typically used interchangeably despite the fact the two mean completely separate things. Additionally, both are important terms that those in the technology industry should clearly know how to distinct between, but it seems many people either don't or can't. Very quickly, below are definitions for the two vocabulary terms.
A database is a logically modeled collection of information (data), typically stored on a computer or other hardware, that can be accessed in a variety of ways.
A database management system is a computer program or other piece of software that allows one to access, interact with, and manipulate a database.
There are many types of database management systems in use today. Historically, relational database management systems (RDBMSs) have been the most popular approach to managing data because of their accessibility and performance. Examples of RDBMSs include Amazon RDS, Oracle, and MySQL, all of which use Structured Query Language (SQL) to manipulate the databases they manage. RDBMSs are generally ACID compliant and typically power online transaction processing (OLTP) systems.
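To make the SQL interaction concrete, here is a minimal sketch using Python's built-in sqlite3 module; SQLite stands in for a production RDBMS such as MySQL, and the table and data are invented for illustration:

```python
import sqlite3

# SQLite stands in here for a production RDBMS such as MySQL or Oracle.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical schema, for illustration only.
cur.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")
cur.execute("INSERT INTO customers (name, city) VALUES (?, ?)", ("Ada", "Rochester"))
conn.commit()  # the transaction boundary is where ACID guarantees apply

# Standard SQL works much the same way across relational systems.
for row in cur.execute("SELECT name, city FROM customers WHERE city = ?", ("Rochester",)):
    print(row)

conn.close()
```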
To address the limitations of relational database management systems, NoSQL databases have grown in popularity over the years. The term "NoSQL" was coined by Carlo Strozzi in 1998 as the name for his database, which did not use SQL to manage data, hence the label. Popular types of NoSQL databases include key-value stores, document databases, graph databases, and columnar databases. While similar in concept, they differ in design, and each has advantages and disadvantages in different scenarios.
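As a toy illustration of how two of these data models differ, the sketch below mimics a key-value store and a document store with plain Python structures; it is not tied to any particular NoSQL product, which would add persistence, indexing, and distribution on top of these ideas:

```python
# Key-value model: opaque values addressed by a unique key (cf. Redis).
kv_store = {
    "session:42": "user=alice;expires=1700000000",
}

# Document model: self-describing, nested records with no fixed schema (cf. MongoDB).
doc_store = {
    "orders": [
        {"id": 1, "customer": "alice", "items": [{"sku": "A7", "qty": 2}]},
        {"id": 2, "customer": "bob", "items": []},  # fields can vary per document
    ]
}

# Lookups differ accordingly: key-value is a direct fetch by key,
# while document stores support queries over record fields.
print(kv_store["session:42"])
print([o for o in doc_store["orders"] if o["customer"] == "alice"])
```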
As the technology world moves forward, we constantly search for the optimal solution to our data needs. That search begins with which database management system or systems we choose to solve our data-related problems. Some database management systems are better equipped for certain scenarios than others, and figuring out which type works best for you is essential when working with big data.
It is widely acknowledged that no one really knows how the most advanced algorithms do what they do, or how well they are doing it. That could be a problem. Advances in synthetic data generation technologies can help: these generators produce data with known ground truth, in sufficient volume, and with statistically relevant counts of true and false positives (TP, FP) and true and false negatives (TN, FN) for the test at hand. AI algorithms can then be measured for precision, c, the fraction of predicted matches that are true positive matches: c = TP/(TP + FP).
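As a quick worked example, the snippet below computes precision from the formula above, along with the related recall and accuracy metrics; the confusion-matrix counts are made-up values for illustration:

```python
# Precision c = TP / (TP + FP), as defined above.
# The counts below are made-up example values.
TP, FP, TN, FN = 90, 10, 880, 20

precision = TP / (TP + FP)  # fraction of predicted matches that are true matches
recall = TP / (TP + FN)     # fraction of actual matches that were found
accuracy = (TP + TN) / (TP + FP + TN + FN)

print(f"precision c = {precision:.2f}")  # 0.90
print(f"recall      = {recall:.2f}")     # 0.82
print(f"accuracy    = {accuracy:.2f}")   # 0.97
```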
With the Equifax breach back in the limelight due to the cancellation of the $125 checks the FTC had promised those impacted, we want to take a look at how the breach might have been prevented in the first place, or at least how the damage could have been minimized.
A very interesting application of high-fidelity synthetic data generation is reducing credit card fraud. By 2025, global losses to credit card fraud are expected to reach almost $50 billion. Detecting fraudulent transactions in a large dataset is difficult because they make up such a small percentage of the overall transactions. Banks and financial institutions need a solution that can correctly identify both fraudulent and non-fraudulent transactions and measure true/false positives and true/false negatives, enabling the creation of receiver operating characteristic (ROC) curves and the tuning of the system to balance the cost of correcting a fraudulent payment against the value of the payment itself. High-fidelity synthetic data solves this dilemma by generating volumes of non-fraudulent transactions while interweaving complex fraud patterns into a very small subset of the overall transactions. Because the fraud patterns are known, the fraud detection system can be measured and optimized against them.
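A minimal sketch of that evaluation loop, assuming scikit-learn is available: detector scores over a heavily imbalanced synthetic dataset with known labels are used to build an ROC curve. The class sizes and score distributions are invented for illustration:

```python
import numpy as np
from sklearn.metrics import roc_curve, auc  # assumes scikit-learn is installed

rng = np.random.default_rng(0)

# Synthetic ground truth: fraud is a tiny fraction of all transactions,
# and the labels are known by construction.
n_legit, n_fraud = 99_000, 1_000
y_true = np.concatenate([np.zeros(n_legit), np.ones(n_fraud)])

# A hypothetical detector assigns higher scores to fraud on average.
y_score = np.concatenate([
    rng.normal(0.2, 0.10, n_legit),
    rng.normal(0.6, 0.15, n_fraud),
])

fpr, tpr, thresholds = roc_curve(y_true, y_score)
print(f"AUC = {auc(fpr, tpr):.3f}")

# With known labels, a score threshold can be tuned to balance the cost of
# investigating false positives against the cost of missed fraud.
```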
Most application testing, in both performance and development environments, is done today using production data that has been extracted through an ETL (Extract, Transform, Load) process and then manually modified to create specific use cases. For cyber applications, for example, most testing is done by replaying network traffic. Because this process is so labor intensive, use-case coverage is generally very low, and most of the business logic and workflow rules go untested. This is where the concept of sufficiently complex data comes in: test data should be large enough in volume to cover peak processing loads and complex enough to exercise nearly all of the business logic and workflow rules. Large amounts of sufficiently complex test data will exercise algorithms at peak processing volumes to expose failures before they reach production, and enable precise error measurement across ambiguous, true, and false errors. Systems can then be optimized for the cost of errors versus the cost to correct them.
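As a hypothetical sketch of "sufficiently complex" test data, the snippet below generates a large volume of synthetic transactions and interweaves a handful of hand-crafted edge cases so that specific business rules get exercised; the schema, rules, and counts are all invented for illustration:

```python
import csv
import random

random.seed(7)

# Hand-crafted edge cases aimed at specific (hypothetical) business rules.
EDGE_CASES = [
    {"amount": 0.00,      "country": "US"},   # zero-amount rule
    {"amount": 10_000.00, "country": "US"},   # reporting-threshold rule
    {"amount": 49.99,     "country": "XX"},   # unknown-country rule
]

with open("test_transactions.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["id", "amount", "country"])
    writer.writeheader()
    for i in range(1_000_000):  # volume sized to peak processing load
        row = {"id": i, "amount": round(random.uniform(1, 500), 2), "country": "US"}
        if i % 250_000 == 0:    # sprinkle edge cases through the stream
            row.update(EDGE_CASES[(i // 250_000) % len(EDGE_CASES)])
        writer.writerow(row)
```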
What is ExactData? What do we do? Why is it important, and how can we help you? These are some of the many questions we would like to answer to give a little more insight into how we operate.
ExactData is based in Rochester, New York, and specializes in automatically generating large volumes of fully artificial, engineered test data for better performance and faster results. Our data eliminates security and privacy risks: no personal information whatsoever is used in generating artificial test data, making it completely safe to use as well as unique and optimized for each situation. Engineered synthetic data is a young but fast-growing field, and we strive to improve our product every day.
Our engineers have recently created a script that injects synthetic ADAMS-style data into a file format that commercial network traffic generators can consume. ADAMS data is simulated data for insider-threat detection systems based on anomalies in massive datasets. Data domains include Logon, Device, HTTP, Email, File, Print, LDAP, Organization Directory, Decoy files, and Psychometric files. Why all of the excitement? The current state-of-the-art network traffic generation tools use very simplistic content that is not designed for the system under test. Once this integration is complete, cyber security testing can be taken to a whole new level, with sophisticated threat patterns interwoven into data and consumed by the network. This will enable sophisticated testing of the network's intrusion detection and measurement of true and false positive errors, so these systems can be optimized for cost and risk. This alone is a huge leap for the cyber security industry, and we will only continue to move forward with our advancements.
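The sketch below illustrates the general idea: synthetic insider-threat events across a few ADAMS-style domains, written to a flat file that a downstream traffic-generation tool could ingest. The field names, event details, and CSV output format are illustrative assumptions, not ExactData's actual schema or script:

```python
import csv
from datetime import datetime, timedelta

# Hypothetical ADAMS-style events: a simple insider-threat pattern
# (logon, browsing, USB device, file exfiltration) woven into one user's day.
start = datetime(2024, 1, 1, 9, 0, 0)
events = [
    ("Logon",  "user17", "workstation-042", "logon success"),
    ("HTTP",   "user17", "intranet.example.com", "GET /payroll"),
    ("Device", "user17", "workstation-042", "USB drive connected"),
    ("File",   "user17", "workstation-042", "copy payroll.xlsx to removable media"),
]

with open("adams_events.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "domain", "user", "host_or_target", "detail"])
    for i, (domain, user, target, detail) in enumerate(events):
        ts = (start + timedelta(minutes=5 * i)).isoformat()
        writer.writerow([ts, domain, user, target, detail])
```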
Many potential clients expressed strong interest in helping to implement Cyber Behavioral Tools, including the Cyber Innovation Manager at one of the world's largest banks, a Divisional Chief Information Security Officer at one of the biggest US federal systems integrators, and one of the largest independent cyber testing laboratories. During the demonstrations, large amounts of internally consistent data were generated for all desired behaviors. Data was generated over any time frame to output: