Amir Sadovnik, a researcher in robust machine learning, recently shared how he got into the field of data science and how detecting when data has been tampered with can lead to better results from artificial intelligence models.
What are you working on now?
I am currently working on a project about assessing and mitigating the risks involved in adopting AI into different real-world systems. These risks touch on many topics relevant to data science, especially since manipulating data, both during training of AI models and during inference, can lead to a complete failure of the model. Therefore, methods to detect when data has been tampered with and to mitigate the effects of these manipulations are needed to better ensure the robustness of these AI models.
What brought you to ORNL?
Working at ORNL allows me to explore the topics I am most interested in while knowing that the work I am doing has a real impact on the society I am living in. In addition, being surrounded by a very strong team of researchers and engineers gives me ample opportunity to both brainstorm ideas and implement them quickly. The combination of meaningful work and a strong team environment was a real draw for me.
What led you to a career in data science?
I first got interested in data science and machine learning through the subject of computer vision. When I first began my PhD, computers could not understand images very well, even though they could perform other, seemingly harder tasks better than humans. For example, a computer could beat the world champion in chess but still was not as good as my 4-year-old daughter at finding cats in an image. This discrepancy fascinated me and made me realize how much progress the field still had to make. Today, after major advancements in machine learning and the availability of large data sets, the field has made great progress toward human-level computer vision.