As artificial intelligence systems are deployed across more and more sectors, the potential threats against them grow, making it necessary to have tools that help verify that these AI systems are secure.
That’s what Microsoft has just released: an open source tool called Counterfit, which helps developers test the security of AI systems.
Microsoft has published it on GitHub, explaining that the motivation behind it is obvious: most organizations lack the tools to address adversarial machine learning.
It was born out of Microsoft’s own need to evaluate its artificial intelligence systems for vulnerabilities. It’s a generic automation tool for attacking multiple AI systems at scale, and while it’s intended for security testing, Microsoft is also exploring its use during the AI development phase.
The tool can be deployed through the Azure Shell from a browser or installed locally in an Anaconda Python environment.
On its blog, Microsoft notes that Counterfit can evaluate models hosted in any cloud environment or on premises. The tool is model agnostic and strives to be data agnostic as well, working with models that use text, images, or generic inputs.
Adversarial machine learning
The goal is to prevent an attacker from fooling a machine learning model with manipulated data. This has already happened: researchers tricked a Tesla into misreading a speed limit by putting black tape on a speed sign, for example.
Another example was a chatbot that started posting racist comments on Twitter after being fed racist data by trolls.
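To make the idea concrete, here is a minimal sketch of an evasion attack in the spirit of the fast gradient sign method (FGSM), run against a toy hand-rolled logistic-regression "model". This is purely illustrative and is not Counterfit's API; the weights and the input are made-up values for the demo.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "victim" model: weights and bias chosen arbitrarily for the demo.
w = np.array([2.0, -3.0, 1.5])
b = 0.5

def predict(x):
    # Probability that x belongs to class 1.
    return sigmoid(w @ x + b)

x = np.array([0.4, -0.2, 0.1])   # clean input, confidently classified as class 1
p_clean = predict(x)

# FGSM-style step: perturb the input in the direction that increases the loss.
# For logistic regression with true label y=1, d(loss)/dx = (p - 1) * w.
grad = (p_clean - 1.0) * w
eps = 0.5                        # attack budget: max change per feature
x_adv = x + eps * np.sign(grad)  # one signed-gradient step

p_adv = predict(x_adv)
print(f"clean confidence:       {p_clean:.3f}")   # well above 0.5
print(f"adversarial confidence: {p_adv:.3f}")     # pushed below 0.5: class flips
```

A small, bounded change per feature is enough to flip the model's decision, which is exactly the kind of weakness tools like Counterfit are meant to surface before an attacker does.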
The tool comes preloaded with published attack algorithms that can be used to launch evasion and model-stealing attacks against AI models, and it can assist with vulnerability scanning of these systems.