On Thursday, November 7th, the Institute for Robotic Process Automation & Artificial Intelligence (IRPA AI) New York Chapter Launch Party will take place at 25 West 39th Street, 14th Floor, in Manhattan, New York. The event opens with a pre-launch networking happy hour at 5:15pm, an opportunity to plan and discuss future programs for the new chapter, and guests will also learn how they can get involved. Drinks will be served during the networking happy hour!
For more information or to RSVP to the event, please follow the link here.
For questions or general inquiries about IRPA AI or the NY Chapter Launch Party, please contact Molly Alexander at Molly.Alexander@irpanetwork.com.
You can also learn more about IRPA AI by visiting their website or their LinkedIn!
In recent news, Pitney Bowes and Groupe M6 experienced ransomware attacks that limited customer access to company services and encrypted information on the companies' private networks and systems. Email servers and phone lines also went down as a result of the attacks. While no customer data was lost or stolen, the incidents show how much of a threat ransomware attacks can pose to the privacy of companies and their customers.
Ransomware attacks, while hard to detect and fight off, can be defeated with time and effort. However, if defeating an attack takes too long, valuable data could be breached or stolen and many people will be put at risk. If the risk is too great, companies forgo hopes of fighting off the attack themselves and end up paying high extortion fees to minimize the damage. But what happens when the attackers strike again? Will those companies be prepared to fend them off the next time, or will they be seen as an easy target because they gave in?
One thing is for sure: just as we continue to make strides in the cyber security industry, criminals continue to grow more advanced in their own cyber attack tactics.
The TAG Cyber Security Annual has been released on their website! You can find it here!
The TAG Cyber Security Annual comes complete with unbiased, expert industry research, so we recommend giving it a read! Alternatively, you can download the Annual here!
With the recent Equifax breach back in the limelight due to the cancellation of the $125 check the FTC promised to those impacted, we want to take a look at how the breach might have been prevented in the first place, or at least how the damage could have been minimized.
Earlier this month, the heart of Manhattan was struck by a major power outage estimated to have impacted up to 72,000 Con Edison customers. While the outage was dangerous and it is hard to look on the bright side, some reports do bring good news: NBC reports that terrorism and cyber-attacks were ruled out following an investigation ordered by Mayor Bill de Blasio. So what could have caused this major blackout?
We are excited to announce that we've recently hired Tim Rice as our new Vice President of Business Development! Tim brings more than 20 years of successful senior-level business development, project management, and capital-raising experience across a variety of organizations globally, with a concentration on C-level work in the Northeast. His disciplined, results-oriented approach examines an organization's current operational, financial, revenue-cycle, and compliance/risk state to uncover inefficiencies, then positions metrics that drive profitability toward the desired state while streamlining costs for his clients.
He has both Fortune 50 and entrepreneurial experience and a deep passion for achieving amazing results, with tenure at organizations such as IBM, KPMG, and Accenture. Tim has domain knowledge in banking, financial services, insurance, energy, and telecommunications.
Please welcome Tim and reach out to him at firstname.lastname@example.org or (585) 330-7881.
A very interesting application of high-fidelity synthetic data generation is reducing credit card fraud. By 2025, global losses to credit card fraud are expected to reach almost $50 billion. Detecting fraudulent transactions in a large dataset is difficult because they make up such a small percentage of the overall transactions. Banks and financial institutions need a solution that can correctly identify both fraudulent and non-fraudulent transactions and measure true/false positives and true/false negatives, enabling the creation of receiver operating characteristic (ROC) curves and allowing the system to be tuned to balance the cost of correcting a fraudulent payment against the cost of the payment itself. High-fidelity synthetic data solves this dilemma by generating volumes of non-fraudulent transactions while interweaving complex fraud patterns into a very small subset of the overall transactions. Because the fraud patterns are known, the credit card fraud detection system can be optimized.
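To make the idea concrete, here is a minimal sketch of the workflow described above. It is not a real synthetic data generator: the three transaction features, the fraud pattern, and the dollar costs are all invented for illustration. It shows how known, injected fraud labels let you build a full ROC curve and pick a cost-optimal detection threshold.

```python
# A minimal sketch: synthesize a large volume of "normal" transactions,
# interweave a small subset of known fraud patterns, then tune a detector's
# threshold by cost. All features and costs below are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_curve
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_normal, n_fraud = 100_000, 200          # fraud is a tiny slice of the volume

# Hypothetical features: amount, hour of day, distance from home.
normal = np.column_stack([
    rng.lognormal(3.0, 1.0, n_normal),    # typical purchase amounts
    rng.normal(14, 4, n_normal),          # daytime activity
    rng.exponential(5, n_normal),         # short distances from home
])
fraud = np.column_stack([                 # the known, injected fraud pattern
    rng.lognormal(5.5, 0.5, n_fraud),     # unusually large amounts
    rng.normal(3, 2, n_fraud),            # late-night activity
    rng.exponential(400, n_fraud),        # far from home
])
X = np.vstack([normal, fraud])
y = np.concatenate([np.zeros(n_normal), np.ones(n_fraud)])  # labels are known

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]

# Because every injected fraud is labeled, the full ROC curve is available...
fpr, tpr, thresholds = roc_curve(y_te, scores)

# ...and we can pick the threshold that minimizes expected cost, assuming
# $500 per missed fraud vs. $5 per false alarm (illustrative numbers only).
cost_missed, cost_false_alarm = 500, 5
n_pos, n_neg = y_te.sum(), (y_te == 0).sum()
cost = cost_missed * (1 - tpr) * n_pos + cost_false_alarm * fpr * n_neg
best = thresholds[cost.argmin()]
print(f"cost-optimal threshold: {best:.3f}, expected cost: ${cost.min():,.0f}")
```

Real production systems would model far richer transaction behavior, but the principle is the same: since the generator controls where the fraud is, every miss and false alarm is measurable exactly.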
Most application testing, in both performance and development environments, is done today using production data that has been extracted through an ETL (Extract, Transform, Load) process and then manually modified to create specific use cases. For cyber applications, for example, most testing is done by replaying network traffic. Because this process is so labor-intensive, use case coverage is generally very low and most business logic and workflow rules go untested. This is where the concept of sufficiently complex data comes in: test data should be of large enough volume to cover peak processing loads and have sufficient complexity to cover almost all of the business logic and workflow rules. Large amounts of sufficiently complex test data will exercise algorithms at peak processing volumes to expose failures before they reach production, and will enable precise error measurement for ambiguous, true, and false errors. Systems can then be optimized for the cost of errors versus the cost to correct them, as in the sketch below.
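The following sketch illustrates the "sufficiently complex" idea with a toy payments workflow. The rule dimensions, the decision oracle, and the simulated defect rate are all hypothetical; the point is that enumerating every combination of business rules, rather than sampling replayed traffic, gives full rule coverage at peak volume with a known expected outcome for every record.

```python
# A minimal sketch of sufficiently complex test data, assuming a toy payments
# workflow with three rule dimensions (all names and rules are hypothetical).
import itertools
import random

COUNTRIES    = ["US", "GB", "DE", "JP"]
CHANNELS     = ["web", "mobile", "branch"]
AMOUNT_TIERS = ["low", "medium", "high", "over_limit"]

def expected_decision(country, channel, tier):
    """Ground-truth business rule: the oracle the test data is built from."""
    if tier == "over_limit":
        return "reject"
    if country != "US" and channel == "web" and tier == "high":
        return "manual_review"
    return "approve"

def generate_cases(peak_volume):
    """Cover every rule combination, then cycle through them up to peak volume."""
    combos = list(itertools.product(COUNTRIES, CHANNELS, AMOUNT_TIERS))
    cases = []
    for i in range(peak_volume):
        country, channel, tier = combos[i % len(combos)]
        cases.append({"country": country, "channel": channel, "tier": tier,
                      "expected": expected_decision(country, channel, tier)})
    return cases

def system_under_test(case):
    """Stand-in for the real application, seeded with a ~1% defect rate."""
    expected = case["expected"]
    if random.random() < 0.01:                      # inject a wrong decision
        return "reject" if expected != "reject" else "approve"
    return expected

# Exercise at peak volume; every record carries a known expected outcome,
# so errors can be measured precisely rather than estimated.
cases = generate_cases(peak_volume=50_000)
errors = sum(system_under_test(c) != c["expected"] for c in cases)
print(f"{errors} errors in {len(cases)} cases "
      f"({errors / len(cases):.2%} observed error rate)")
```

Because the expected decision travels with every generated record, failures surface as exact mismatches, which is what makes the error-cost optimization described above possible.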