Word of the Week
“Perfectly Privacy-preserving AI”
How can attackers extract your training data and your model from your ML endpoint?
Over the past few years, providers such as Google, Microsoft, and Amazon have started to provide customers with access to software interfaces allowing them to easily embed machine learning tasks into their applications. Overall, organizations can now use Machine Learning as a Service (MLaaS) engines to outsource complex tasks such as training classifiers, performing predictions, and clustering. They can also let others query models trained on their data. Naturally, this approach can also be used (and is often advocated) in other contexts, including government collaborations, citizen science projects, and business-to-business partnerships. However, if malicious users were able to recover data used to train these models, the resulting information leakage would create serious issues. Likewise, if the inner parameters of the model are considered proprietary information, then access to the model should not allow an adversary to learn such parameters.

In this document, we set out to review privacy challenges in this space, providing a systematic review of the relevant research literature and exploring possible countermeasures. More specifically, we provide ample background information on relevant concepts around machine learning and privacy. Then, we discuss possible adversarial models and settings, cover a wide range of attacks that relate to private and/or sensitive information leakage, and review recent results attempting to defend against such attacks. Finally, we conclude with a list of open problems that require more work, including the need for better evaluations, more targeted defences, and the study of the relation to policy and data protection efforts.
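To make the model-extraction threat above concrete, here is a minimal sketch of the classic equation-solving attack against prediction APIs that return confidence scores: for a logistic-regression model with d features, d + 1 well-chosen queries recover the weights and bias exactly. The endpoint and its "secret" parameters below are illustrative stand-ins, not a real MLaaS API.

```python
import numpy as np

# Stand-in for a black-box MLaaS endpoint: a logistic-regression model
# whose parameters (SECRET_W, SECRET_B) the attacker cannot see directly.
SECRET_W = np.array([1.5, -2.0, 0.7])
SECRET_B = 0.3

def query_endpoint(x):
    """Return the model's confidence score, as many prediction APIs do."""
    return 1.0 / (1.0 + np.exp(-(SECRET_W @ x + SECRET_B)))

def logit(p):
    """Invert the sigmoid: maps a confidence score back to w.x + b."""
    return np.log(p / (1.0 - p))

d = 3
# Query 1: the zero vector. logit(sigmoid(b)) = b, recovering the bias.
b_hat = logit(query_endpoint(np.zeros(d)))
# Queries 2..d+1: each standard basis vector e_i yields w_i + b,
# so subtracting b_hat recovers each weight.
w_hat = np.array([logit(query_endpoint(np.eye(d)[i])) - b_hat
                  for i in range(d)])

print(w_hat, b_hat)  # matches SECRET_W and SECRET_B up to floating-point error
```

Real APIs complicate this (rounded scores, rate limits, non-linear models), but the sketch shows why returning raw confidence values is the key enabler, and why defences often start by truncating or omitting them.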
Privacy has become a major topic of late. As more of our lives are entrusted to algorithms, it's important that we ensure they are both secure (against attacks such as poisoning and evasion) and privacy-preserving. Hopefully this post has given you a flavour of the problem, and of what potential solutions might look like.
Stay safe & secure everyone
Word of the Week Special
“When WAFs Go Wrong”
Web application firewalls are increasingly disappointing enterprises today.
A new survey out last week indicates that a significant number of web application attacks bypass the WAF, that organizations struggle to tune them, and that they're not well-integrated into broader security functions. This only serves to bolster warnings made by analysts and other studies over the past 18 months that WAF protection mechanisms need to evolve and cannot be the sole mainstay of an AppSec program.
“Punishing users for cyber security mishaps increases anxiety and reduces productivity”
Thanks to Naz
Idea of the Day: Building trust
Thanks to Mithun
Crypto challenge of the week
UK Government chose not to investigate if Russian hackers interfered in Brexit referendum, report reveals
Covid-19, Authoritarianism and Democracy
Trump says he will ban popular Chinese video app TikTok in the US and Microsoft bid for TikTok on hold amid Trump ban fears
& Trump Targets WeChat and TikTok, in Sharp Escalation With China
Great news in September and October will carry Trump to victory
Book of the month
More on writing by Richard Stiennon
Comic of the week
## Some OWASP stuff first