Infer is a static analysis tool developed at Facebook. It helps developers find issues in their code early, at code review time. It finds issues such as null dereferences and memory leaks, among others, in a range of languages including Java, C, C++, and Objective-C.
In this talk we will cover how Infer is deployed at Facebook and the challenges of handling the scale of Facebook’s code, as well as the differences between static analysis and testing. We will also cover the basics of Infer’s architecture and the range of checkers it comprises. Infer was originally based on separation logic; it now also includes checkers based on abstract interpretation, such as buffer-overflow and lock-consistency checkers.
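As an illustration of the kind of issue mentioned above, the following hypothetical Java snippet (not taken from Infer's documentation) contains the classic null-dereference pattern that such analyses flag: `Map.get` may return `null`, and the result is dereferenced without a check.

```java
import java.util.HashMap;
import java.util.Map;

public class NullDerefExample {
    static final Map<String, String> CACHE = new HashMap<>();

    // Map.get returns null when the key is absent; a static analyzer
    // like Infer would report the unguarded call below as a potential
    // null dereference.
    static int titleLength(String key) {
        String title = CACHE.get(key);
        return title.length(); // NullPointerException if key is absent
    }

    // Guarded version: the null check removes the report.
    static int safeTitleLength(String key) {
        String title = CACHE.get(key);
        return title == null ? 0 : title.length();
    }

    public static void main(String[] args) {
        CACHE.put("home", "Welcome");
        System.out.println(titleLength("home"));      // 7
        System.out.println(safeTitleLength("about")); // 0
    }
}
```

The value of reporting this at code review time is that the fix (the guard in `safeTitleLength`) is a one-line change while the code is still fresh in the author's mind, rather than a production crash later.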
Dulma Churchill is a Software Engineer at Facebook, London. She has been developing Infer for over five years and has worked on many areas of the tool: frontend, backend, build system integrations, etc. Before that, she completed her PhD at the University of Munich on resource analysis for Java using type systems.
Modern video game development is very complex, with teams of hundreds of people in locations across the globe collaborating on a product that will be released to a demanding and vocal audience of millions of gamers. Quality Verification for game projects includes testing the performance and stability of game clients on consoles, load balancing large back-end service systems, and collecting and analyzing social media content to gauge the sentiment of our players. At Electronic Arts we spend thousands of hours testing our software to ensure our players have the best possible experience when playing our games. Ultimately, we strive to answer the most elusive question of all: “is it fun?”
Magnus Nordin is a Technical Director of the AI group with the SEED R&D team at Electronic Arts. Magnus has spent 25 years in computer science and software engineering across a wide range of projects and companies. He currently heads SEED’s deep learning and AI research team.
David King is a Technical Director of the Quality Engineering group at DICE. David is responsible for software-driven testing of the DICE game franchises, most notably Star Wars Battlefront and Battlefield. He has worked in the field of software testing at Electronic Arts for the last 9 years.
Stefan Posthuma is a Senior Director of Quality Engineering for EA’s European Studios. Ever since publishing his first game back in 1985, Stefan has worked on (and played!) countless video games. He joined EA in 1998 and has worked with EA’s global Quality Verification group for the last 4 years.
Magnus, David and Stefan are all located at EA’s studio in Stockholm, Sweden.
Deep neural networks have achieved impressive experimental results in image classification, but can be surprisingly unstable with respect to adversarial perturbations, that is, minimal changes to the input image that cause the network to misclassify it. With potential applications including perception modules and end-to-end controllers for self-driving cars, this raises concerns about their safety. This lecture will describe progress in developing a novel automated verification framework for deep neural networks to ensure the safety of their classification decisions with respect to image manipulations, for example scratches or changes to camera angle or lighting conditions, that should not affect the classification. The techniques work directly with the network code and, in contrast to existing methods, can offer guarantees that adversarial examples are found if they exist. We implement the techniques using Z3 and evaluate them on state-of-the-art networks, including regularised and deep learning networks. We also compare against existing techniques to search for adversarial examples.
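To make the notion of an adversarial perturbation concrete, here is a minimal, hypothetical sketch (not the lecture's Z3-based method) using a toy linear classifier: it searches, on a fixed grid, for the smallest perturbation magnitude eps that flips the classification when each input coordinate is nudged against the weight's sign. The weights, input, and grid step are all invented for illustration.

```java
public class AdversarialSketch {
    // Toy linear "network": classify as positive iff w.x + b > 0.
    // Weights and bias are arbitrary illustrative values.
    static final double[] W = {0.5, -0.25, 1.0};
    static final double B = -0.4;

    static boolean classify(double[] x) {
        double score = B;
        for (int i = 0; i < x.length; i++) score += W[i] * x[i];
        return score > 0;
    }

    // Grid search for the smallest eps (step 0.01) such that moving each
    // coordinate by eps against the sign of its weight flips the class.
    // Returns -1 if no flip is found within the budget.
    static double minimalFlipEps(double[] x) {
        boolean original = classify(x);
        for (double eps = 0.0; eps <= 1.0; eps += 0.01) {
            double[] adv = new double[x.length];
            for (int i = 0; i < x.length; i++)
                adv[i] = x[i] - eps * Math.signum(W[i]);
            if (classify(adv) != original) return eps;
        }
        return -1.0;
    }

    public static void main(String[] args) {
        double[] x = {1.0, 0.2, 0.1};
        System.out.println(classify(x));        // original class
        System.out.println(minimalFlipEps(x));  // tiny eps flips it
    }
}
```

The key point the abstract makes is precisely the limitation of sketches like this: a heuristic search may miss adversarial examples, whereas an SMT-based verification framework can guarantee that one is found whenever it exists.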
Marta Kwiatkowska is Professor of Computing Systems and Fellow of Trinity College, University of Oxford. Prior to this she was Professor in the School of Computer Science at the University of Birmingham, Lecturer at the University of Leicester and Assistant Professor at the Jagiellonian University in Cracow, Poland. Kwiatkowska has made fundamental contributions to the theory and practice of model checking for probabilistic systems, focusing on automated techniques for verification and synthesis from quantitative specifications. She led the development of the PRISM model checker (www.prismmodelchecker.org), the leading software tool in the area and winner of the HVC Award 2016. Probabilistic model checking has been adopted in many diverse fields, including distributed computing, wireless networks, security, robotics, game theory, systems biology, DNA computing and nanotechnology, with genuine flaws found and corrected in real-world protocols. Kwiatkowska was awarded an honorary doctorate from KTH Royal Institute of Technology in Stockholm in 2014 and the Royal Society Milner Medal in 2018. Her recent work was supported by the ERC Advanced Grant VERIWARE “From software verification to ‘everyware’ verification” and the EPSRC Programme Grant on Mobile Autonomy. She is a Fellow of ACM and Member of Academia Europaea.