Program

Program at a Glance

Accepted Technical Papers

Biometrics & Counterfeit Detection (Session Chair: Anthony Hoogs)

  • Multi-spectral Facial Landmark Detection, Jin Keong (Monash University Malaysia); Xingbo Dong (Monash University Malaysia); Zhe Jin (Monash University Malaysia)*; Khawla Mallat (EURECOM); Jean-Luc Dugelay (EURECOM, Campus SophiaTech)

  • Fighting against medicine packaging counterfeits: rotogravure press vs cylinder signatures, Iuliia Tkachenko (LIRIS)*; Alain Tremeau (University Jean Monnet, France); Thierry Fournel (University of Saint-Etienne)

  • Texture-based Presentation Attack Detection for Automatic Speaker Verification, Lazaro Janier Gonzalez-Soler (Hochschule Darmstadt)*; Jose Patino (EURECOM); Marta Gomez-Barrero (Hochschule Ansbach); Massimiliano Todisco (EURECOM); Christoph Busch (Hochschule Darmstadt); Nicholas Evans (EURECOM)

  • Post-Quantum Secure Two-Party Computation for Iris Biometric Template Protection, Pia Bauspieß (Hochschule Darmstadt)*; Jascha Kolberg (Hochschule Darmstadt); Daniel Demmler (Universität Hamburg); Juliane Krämer (Technische Universität Darmstadt); Christoph Busch (Hochschule Darmstadt)

  • An Efficient Super-Resolution Single Image Network using Sharpness Loss Metrics for Iris, Juan E Tapia (Universidad de Santiago)*; Marta Gomez-Barrero (Hochschule Ansbach); Christoph Busch (Hochschule Darmstadt)

Cryptography & Privacy (Session Chair: William Puech)

  • On Perfect Obfuscation: Local Information Geometry Analysis, Behrooz Razeghi (University of Geneva)*; Flavio Calmon (Harvard University); Deniz Gunduz (Imperial College London); Slava Voloshynovskiy (University of Geneva)

  • Multiquadratic Rings and Walsh-Hadamard Transforms for Oblivious Linear Function Evaluation, Alberto Pedrouzo Ulloa (atlanTTic Research Center, University of Vigo)*; Juan Ramón Troncoso-Pastoriza (École Polytechnique Fédérale de Lausanne); Nicolas Gama (Inpher); Mariya Georgieva (Inpher); Fernando Perez-Gonzalez (Universidad de Vigo)

  • The Suitability of RSA for Bulk Data Encryption, Pranshu Bajpai (Michigan State University)*; Cody Carter (Michigan State University); Daria Tarasova (Michigan State University); David Ackley (Michigan State University); Ian Masterson (Michigan State University); Jamie Schmidt (Michigan State University); Richard Enbody (Michigan State University)

  • Threshold audio secret sharing schemes encrypting audio secrets, Tetsuro Ishizuka (University of Aizu)*; Yodai Watanabe (University of Aizu)

Detection of Synthetic Media (Session Chairs: Luisa Verdoliva and Siwei Lyu)

  • Detecting Deep-Fake Videos from Appearance and Behavior, Shruti Agarwal (University of California at Berkeley)*; Tarek El-Gaaly (Facebook); Hany Farid (University of California at Berkeley); Ser-Nam Lim (Facebook AI)

  • CNN Detection of GAN-Generated Face Images based on Cross-Band Co-occurrences Analysis, Mauro Barni (University of Siena); Kassem Kallas (University of Siena); Ehsan Nowroozi (University of Siena); Benedetta Tondi (University of Siena)*

  • Landmark Breaker: Obstructing DeepFake By Disturbing Landmark Extraction, Pu Sun (University of Chinese Academy of Sciences); Yuezun Li (University at Albany, SUNY)*; Honggang Qi (University of Chinese Academy of Sciences); Siwei Lyu (University at Albany)

  • Training Strategies and Data Augmentations in CNN-based DeepFake Video Detection, Luca Bondi (Politecnico di Milano)*; Edoardo Daniele Cannas (Politecnico di Milano); Paolo Bestagini (Politecnico di Milano); Stefano Tubaro (Politecnico di Milano, Italy)

Hardware & Computer Security (Session Chair: Siwei Lyu)

  • Fuzzing Framework for ESP32 Microcontrollers, Matthias Börsig (FZI Forschungszentrum Informatik)*; Sven Nitzsche (FZI Forschungszentrum Informatik); Max Eisele (FZI Forschungszentrum Informatik); Roland Gröll (FZI Forschungszentrum Informatik); Juergen Becker (Karlsruhe Institute of Technology (KIT)); Ingmar Baumgart (FZI Forschungszentrum Informatik)

  • Reinforcement-Based Divide-and-Conquer Strategy for Side-Channel Attacks, Shan Jin (Department of Electrical and Computer Engineering, Texas A&M University, College Station)*; Riccardo Bettati (Texas A&M University)

  • Electromagnetic Fault Injection as a New Forensic Approach for SoCs, Clément Gaine (CEA)*; Driss Aboulkassimi (CEA); Simon Pontié (CEA); Jean-Pierre Nikolovski (CEA); Jean-Max Dutertre (Mines Saint-Etienne, CEA-Tech, Centre CMP, Gardanne)

  • AmpleDroid Recovering Large Object Files from Android Application Memory, Sneha Sudhakaran (Louisiana State University)*; Aisha Ali-Gombe (Towson University); Augustine Orgah (Louisiana State University); Andrew Case (Volatility); Golden G. Richard III (Louisiana State University)

Media Forensics (Session Chair: David Doermann)

  • Reliable JPEG Forensics via Model Uncertainty, Benedikt Lorch (Friedrich-Alexander University Erlangen-Nürnberg (FAU))*; Anatol Maier (Friedrich-Alexander University Erlangen-Nürnberg (FAU)); Christian Riess (Friedrich-Alexander University Erlangen-Nuremberg)

  • Training CNNs in Presence of JPEG Compression: Multimedia Forensics vs Computer Vision, Sara Mandelli (Politecnico di Milano)*; Nicolo Bonettini (Politecnico di Milano); Paolo Bestagini (Politecnico di Milano); Stefano Tubaro (Politecnico di Milano, Italy)

  • Generative Autoregressive Ensembles for Satellite Imagery Manipulation Detection, Daniel Mas Montserrat (Purdue University)*; János Horváth (Purdue University); Sri Kalyan Yarlagadda (Purdue University); Edward Delp (Purdue University)

  • Empirical Evaluation of PRNU Fingerprint Variation for Mismatched Imaging Pipelines, Sharad Joshi (IIT Gandhinagar)*; Pawel Korus (New York University); Nitin Khanna (Indian Institute of Technology Gandhinagar); Nasir Memon (New York University)

  • Speech Audio Splicing Detection and Localization Exploiting Reverberation Cues, Davide Capoferri (Politecnico di Milano); Clara Borrelli (Politecnico di Milano)*; Paolo Bestagini (Politecnico di Milano); Fabio Antonacci (Politecnico di Milano); Augusto Sarti (Politecnico di Milano); Stefano Tubaro (Politecnico di Milano, Italy)

Network & Communication Security (Session Chair: Quanyan Zhu)

  • A Prospect Theoretic Extension of a Non-Zero-Sum Stochastic Eavesdropping and Jamming Game, Andrey Garnaev (Rutgers University)*; Wade Trappe (WINLAB, Rutgers University); Narayan Mandayam (Rutgers University); H. Vincent Poor (Princeton University)

  • Encrypted HTTP/2 Traffic Monitoring: Standing the Test of Time and Space, Pierre-Olivier Brissaud (Inria); Jérôme François (INRIA Nancy Grand-Est)*; Isabelle Chrisment (LORIA, Inria Nancy Grand Est); Thibault Cholez (LORIA, Inria Nancy Grand Est); Olivier Bettan (Thales)

  • RF Waveform Synthesis Guided by Deep Reinforcement Learning, T. Scott Brandes (BAE Systems); Scott Kuzdeba (BAE Systems)*; Jessee Mcclelland (BAE Systems); Neil Bomberger (BAE Systems); Andrew Radlbeck (BAE Systems)

  • Fast Monte Carlo Dropout and Error Correction for Radio Transmitter Classification, Liangping Ma (Qualcomm)*; John Kaewell (InterDigital)

Steganalysis & Alaska Challenge (Session Chair: Rémi Cogranne)

  • The Syndrome-Trellis Sampler for Generative Steganography, Tamio-Vesa Nakajima (University of Oxford)*; Andrew Ker (University of Oxford)

  • An Ensemble Model using CNNs on Different Domains for ALASKA2 Image Steganalysis, Kaizaburo Chubachi (Preferred Networks, Inc.)*

  • Challenging Academic Research on Steganalysis with Realistic Images, Rémi Cogranne (Troyes University of Technology)*; Quentin Giboulot (Troyes University of Technology); Patrick Bas (Ecole Centrale De Lille)

  • Synchronization Minimizing Statistical Detectability for Side-Informed JPEG Steganography, Quentin Giboulot (Troyes University of Technology)*; Patrick Bas (Ecole Centrale De Lille); Rémi Cogranne (Troyes University of Technology)

  • ImageNet Pre-trained CNNs for JPEG Steganalysis, Yassine Yousfi (Binghamton University)*; Jan Butora (Binghamton University); Eugene Khvedchenya (ODS.ai); Jessica Fridrich (Binghamton University)

Industry & Applications

  • WITNESS & the Guardian Project - WITNESS helps people use video and technology to protect and defend human rights. We identify critical situations and teach those affected by them the basics of video production, safe and ethical filming techniques, and advocacy strategies. The Guardian Project creates easy-to-use secure apps, open-source software libraries, and customized solutions that can be used around the world by anyone looking to protect their communications and personal data from unjust intrusion, interception, and monitoring.

  • Sony Semiconductor Solutions produces image sensors, a market in which it commands the top share globally. Mobile applications are central to its business, with growth expected in new areas such as automotive cameras, security cameras, and factory automation.

  • Overjet AI uses cutting-edge computer vision, data science and dental research to enable patient-focused and evidence-based care delivery. They will share how they use automated image analysis to detect fraud in insurance claims.

  • Steg AI uses artificial intelligence to embed copyright and authentication metadata into digital media.

Community Meetups (Unstructured Discussion Rooms)

(A) Homomorphic Encryption and Applied Crypto

(A) Steganalysis & Steganography

(A) Controlled Capture and Provenance Infrastructure

(B) Biometrics & Authentication

(B) Deepfakes and Synthetic Media

(B) Reliability, Interpretability & Practical Use of Media Forensics

Keynotes

Keynote 1: Living with Manipulated Media: How do technical researchers need to prepare for global realities?

Welcome to the spotlight! Media forensics and information provenance are taking center stage as a critical part of whole-of-society preparedness for ever-increasing media manipulation. Globally, shallowfakes (lightly edited or simply miscontextualized media) already proliferate, and synthetic media usages (malicious, meaningless, and meaningful) are growing in variety and sophistication. So what do technical researchers need to take into account as they build responses to these technologies? WITNESS has led one of the foremost global efforts to understand the threats posed by synthetic media and to prioritize solutions, based on thirty years of work on truth and deception in video. The focus of our 'Prepare, Don't Panic' program (wit.to/Synthethic-Media-Deepfakes) has been a global perspective from people working directly on the frontlines of dealing with potential malicious usages, and how that translates into needs in terms of detection solutions, provenance solutions, and collaboration between forensic researchers and others. The most recent report, from our South/Southeast Asia meeting, was just released (wit.to/DeepfakesAsia); it builds on insights from the US/Europe-centered workshops we have held as well as ones in Brazil and Sub-Saharan Africa. In the keynote, WITNESS's Program Director will discuss these findings and highlight the key questions technical researchers need to ask about their solutions, the accessibility of their technologies, and what is needed as media forensics and information provenance step into the spotlight.

  • Sam Gregory is an award-winning technologist, media-maker, and human rights advocate, and Program Director of WITNESS (witness.org), which helps people use video and technology to protect and defend human rights. With a focus on new forms of misinformation and disinformation, as well as innovations in preserving trust and authenticity, he leads WITNESS’s work on Emerging Threats and Opportunities, including work on preparing better for deepfakes, authenticity infrastructure, and new opportunities such as livestreamed and co-present storytelling for action. He co-chairs the Partnership on AI’s Expert Group on AI and the Media.

Quoted in major media worldwide, he has spoken at Davos and the White House, was a 2010 Rockefeller Bellagio resident on the future of video in activism, and was a 2012-17 Young Global Leader of the World Economic Forum. He is a member of the Technology Advisory Board of the International Criminal Court and the US Board of First Draft. He has published in Journal of Human Rights Practice, Information, Communication and Society, Fiber Culture, and American Anthropologist, and was lead editor of ‘Video for Change’. A graduate of Oxford University and the Harvard Kennedy School, from 2010 to 2018 he taught the first graduate-level course at Harvard on participatory media and human rights.

WITNESS will also be participating in the Industry & Applications session.

Keynote 2: Fact vs. Fiction: Fake News & Big Data

Until very recently, we knew what media was and why it mattered. But today the definitions are changing. The line between fact and fiction is no longer clear. The sheer volume of content blurs the lines and creates, for the first time, a sense that everything we read, listen to, or watch could be called into question. Phrases like “seeing is believing” seem like naive nostalgia in an era where the technology to create fake news is moving far faster than the ethics or laws that frame our media landscape. How can we expect developments in the creation and consumption of content to impact the definition of what we expect from media? How will the emerging worlds of virtual reality and augmented reality drive changes in how existing media and information companies and platforms evolve? Technology is driving media creation, while platforms are biased in favor of short and salacious media consumption. What lies ahead will shape our families, our work, and our planet.

  • Steven Rosenbaum is the Executive Director of the NYC Media Lab. Rosenbaum was New York City’s first Entrepreneur at Large for the NYCEDC. He has started five companies, all in the video platform and digital storytelling space. He holds two patents in video technology and has written two books: Curation Nation (McGraw-Hill) and Curate This (Amazon). He has presented two TED talks, holds two EMMY awards, and was honored as a Science Journalism Laureate at Purdue University. Rosenbaum is a journalist, podcaster, and filmmaker. He holds a BA in English from Skidmore College in Saratoga Springs, NY.

Keynote 3: Quantum supremacy: data security and the race for encryption standards in the post-quantum world

The world's daily data, estimated to be 44 zettabytes, or 40 times the number of stars in the observable universe, is protected by seemingly impregnable encryption algorithms as it traverses the digital universe. Now, the advent of game-changing quantum technologies threatens these digital defenses. In her talk, Professor Delaram Kahrobaei, Chair of Cyber Security, reveals how the speed and power of quantum computing could soon break these protective codes, just as Turing cracked Enigma. With more than 294 billion emails a day, including your personal bank details and, if you’ve paid a few dollars for an ancestry trace, your unique DNA profile, this is a serious worry. So serious that the US National Institute of Standards and Technology (NIST) has fired the starting gun on a race to develop the next generation of cryptography before quantum supremacy is achieved. Professor Kahrobaei looks at the runners and riders in this race (lattice-based schemes being the favourite); asks whether those leading the quest for new security standards may have backed the wrong horse; and considers what this might mean for both our personal and our national security.

  • Professor Delaram Kahrobaei is an American mathematician and computer scientist. She has been the Chair of Cyber Security at the University of York (UK) since November 2018 and is the founder and director of the York Interdisciplinary Centre for Cyber Security. She is the only woman with such a title in the UK and one of the youngest Full Professors in the Computer Science Department at York.

Before coming to York, she was a Full Professor at the City University of New York and doctoral faculty in the PhD Program in Computer Science at the CUNY Graduate Center. She has adjunct appointments in the PhD Program in Computer Science at the CUNY Graduate Center as well as at New York University. She was the President and co-founder of a university start-up, Infoshield, Inc., which develops post-quantum fully homomorphic encryption algorithms for the secure processing of sensitive data by artificial intelligence systems.

Her research has been partially supported by grants from the US Office of Naval Research, the Canadian New Frontiers in Research Fund (Exploration), the American Association for the Advancement of Science, the National Science Foundation, the National Security Agency, and the UK-Netherlands York Maastricht Partnership Investment Fund. She has over 85 publications in prestigious journals and conference proceedings and several US patents. Her main research areas are Post-Quantum Algebraic Cryptography, Data Science, and Applied Algebra. Kahrobaei is one of the Associate Editors of Advances in Mathematics of Communications, published by the American Institute of Mathematical Sciences. She is the Editor-in-Chief of the International Journal of Computer Mathematics: Computer Systems Theory (Taylor & Francis) and an Associate Editor of the SIAM Journal on Applied Algebra and Geometry, published by the Society for Industrial and Applied Mathematics.

Keynote 4: The Devil Takes the Hindmost - Security from Playing Games

Security is a natural battle of wits and, in many instances, a matter of conflict between a defender and an attacker (both of which may take diverse and complex physical forms). If the conflict’s outcome is measurable, say, in numeric quantities, or at least admits a probabilistic model, game theory applies as a natural tool to optimize the interests of both players against one another. Whereas cryptography tries to make success for the attacker “impossible”, game-theoretic security seeks a less ambitious yet equally effective goal: security is, by game theory, not defined as the absence of threat, but rather as a state of the system in which attacking it is more expensive than whatever can be gained. This is the economic understanding of system security: make the attacker’s challenge so hard that it becomes rational not to attack at all. The talk will introduce the basic concepts of game theory by showing an application to the basic security goals of confidential and reliable transmission, also reaching out towards extensions to more advanced game-theoretic models of security, up to security risk management and countering advanced persistent threats (APTs). Connections to complexity theory and (network) security by design are established.
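
To make the economic reading of security concrete, here is a minimal sketch, not drawn from the talk itself, of how a two-asset attacker-defender conflict can be cast as a two-player zero-sum game and solved for its value and an optimal mixed strategy with a standard linear program. It assumes Python with NumPy and SciPy, and the payoff numbers are invented purely for illustration.

```python
# Illustrative attacker-defender game: a defender guards one of two assets,
# an attacker strikes one of them. Entries are the attacker's expected gain
# (attacker maximizes, defender minimizes). Values are made up for illustration.
import numpy as np
from scipy.optimize import linprog

# Rows = attacker's choice (attack asset 1 or 2), columns = defender's choice (guard 1 or 2).
A = np.array([[1.0, 6.0],    # attack asset 1: gain 1 if it is guarded, 6 if it is not
              [8.0, 2.0]])   # attack asset 2: gain 8 if it is unguarded, 2 if guarded

def solve_zero_sum(payoff):
    """Return the row player's (attacker's) optimal mixed strategy and the game value."""
    m, n = payoff.shape
    # Variables: m strategy probabilities followed by the game value v.
    c = np.zeros(m + 1)
    c[-1] = -1.0                                      # maximize v  ==  minimize -v
    A_ub = np.hstack([-payoff.T, np.ones((n, 1))])    # v - x^T A[:, j] <= 0 for every column j
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
    b_eq = np.array([1.0])                            # probabilities sum to one
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]

strategy, value = solve_zero_sum(A)
print("attacker's mixed strategy:", strategy)   # roughly [0.55, 0.45]
print("expected gain at equilibrium:", value)   # roughly 4.2
```

If the resulting game value is smaller than the attacker's cost of mounting the attack, the rational choice is not to attack at all, which is precisely the "more expensive than whatever can be gained" criterion described above.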

  • Dr. Stefan Rass graduated with a double master’s degree in mathematics and computer science from the Universitaet Klagenfurt (AAU) in 2005. He received a Ph.D. degree in mathematics in 2009 and habilitated in applied computer science and system security in 2014. His research interests cover decision theory and game theory with applications in system security, as well as complexity theory, statistics, and information-theoretic security. He has authored numerous papers related to security, applied statistics, and decision theory in security. He co-authored the book “Cryptography for Security and Privacy in Cloud Computing,” published by Artech House, and edited the Springer Birkhäuser book “Game Theory for Security and Risk Management: From Theory to Practice” in the series Static & Dynamic Game Theory: Foundations & Applications. He has participated in various nationally and internationally funded research projects, has been a contributing researcher in many EU projects, and offers consultancy services to industry. Currently, he is an associate professor at the AAU, teaching courses on algorithms and data structures, theoretical computer science, complexity theory, security, and cryptography.

Tutorials

Tutorial 1: DeepFake Generation and Detection

With the availability of powerful and easy-to-use media editing tools, falsifying images and videos has become widespread in the last few years. Coupled with ubiquitous social networks, this allows for the viral dissemination of fake news and raises huge concerns about multimedia security. This scenario became even worse with the advent of deep learning: new and sophisticated methods have been proposed to accomplish manipulations that were previously unthinkable (e.g., deepfakes). This tutorial will present the most relevant methods for the generation and detection of manipulated media. These are important topics nowadays due to the potential misuse of harmful fake visual content on the web and on social networks. For generation, the main deep-learning-based techniques will be presented, with a focus on both graphics-based and neural-network-based methods, such as generative adversarial networks and cutting-edge neural rendering techniques. Both images and videos will be considered, as well as the combination of multiple modalities, including audio and text associated with the underlying imagery. For detection, the most reliable approaches will be presented, considering both supervised and unsupervised methods. Furthermore, few-shot and zero-shot learning will be described as a means to enable domain generalization. Results will be presented on challenging datasets and realistic scenarios, such as the spreading of manipulated images and videos over social networks. In addition, the robustness of such methods to adversarial attacks will be analyzed.
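
As a rough point of reference for the detection side, the sketch below shows what a minimal supervised detector might look like: a small convolutional network trained to label face crops as real or fake. It assumes PyTorch, pre-extracted 128x128 face crops, and a dummy random batch standing in for real data; it is a generic illustration only and does not represent the architectures, training strategies, or datasets covered in the tutorial.

```python
# Minimal real-vs-fake frame classifier sketch (PyTorch assumed; data is a dummy batch).
import torch
import torch.nn as nn

class FrameDetector(nn.Module):
    """Tiny CNN that scores a face crop as real (0) or fake (1)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, 1)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.classifier(h)              # raw logit; apply sigmoid for a probability

model = FrameDetector()
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One training step on a dummy batch (random tensors stand in for real face crops).
images = torch.randn(8, 3, 128, 128)           # batch of face crops
labels = torch.randint(0, 2, (8, 1)).float()   # 0 = real, 1 = fake
loss = criterion(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In practice, augmentations such as JPEG compression and resizing, studied in several of the accepted papers above, would typically be applied to the training crops to improve robustness to real-world processing.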

  • Dr. Matthias Nießner is a Professor at the Technical University of Munich, where he leads the Visual Computing Lab. Before that, he was a Visiting Assistant Professor at Stanford University. Prof. Nießner’s research lies at the intersection of computer vision, graphics, and machine learning, where he is particularly interested in cutting-edge techniques for 3D reconstruction, semantic 3D scene understanding, video editing, and AI-driven video synthesis. In total, he has published over 70 academic publications, including 22 papers at the prestigious ACM Transactions on Graphics (SIGGRAPH / SIGGRAPH Asia) journal and 18 works at the leading vision conferences (CVPR, ECCV, ICCV); several of these works won best paper awards, including at SIGCHI’14, HPG’15, and SPG’18, as well as the SIGGRAPH’16 Emerging Technologies Award for the best Live Demo. For his work, Prof. Nießner has received several awards: he is a TUM-IAS Rudolph Moessbauer Fellow (2017 – ongoing), and he won the Google Faculty Award for Machine Perception (2017), the Nvidia Professor Partnership Award (2018), and the prestigious ERC Starting Grant 2018. In 2019, he received the Eurographics Young Researcher Award honoring the best upcoming graphics researcher in Europe. Prof. Nießner is also a co-founder and director of Synthesia Inc., a startup backed by Mark Cuban whose aim is to empower storytellers with cutting-edge AI-driven video synthesis.

  • Dr. Luisa Verdoliva is an Associate Professor at the University Federico II of Naples, Italy, where she leads the Multimedia Forensics Lab. In 2018 she was a visiting professor at Friedrich-Alexander University (FAU), and in 2019-2020 she was a visiting scientist at Google AI in San Francisco. Her scientific interests are in the field of image and video processing, with main contributions in the area of multimedia forensics. She has published over 120 academic publications, including 45 journal papers. She was the Principal Investigator for the University Federico II of Naples in the DISPARITY (Digital, Semantic and Physical Analysis of Media Integrity) project funded by DARPA under the MediFor program (2016-2020). She has actively contributed to the academic community through service as General Co-Chair of the 2019 ACM Workshop on Information Hiding and Multimedia Security, Technical Chair of the 2019 IEEE Workshop on Information Forensics and Security, and Area Chair of the IEEE International Conference on Image Processing since 2017. She is on the Editorial Board of IEEE Transactions on Information Forensics and Security and IEEE Signal Processing Letters. Dr. Verdoliva is Vice-Chair of the IEEE Information Forensics and Security Technical Committee. She is the recipient of a Google Faculty Award for Machine Perception (2018) and a TUM-IAS Hans Fischer Senior Fellowship (2020-2023).

Tutorial 2: Deep learning security threats: adversarial examples and backdoor attacks

Concerns regarding the security of deep learning architectures operating in adversarial settings are being raised with increasing intensity. The feasibility of several kinds of attacks, carried out at test time, training time, or both, has been largely proven, and countermeasures are being sought. The goal of this tutorial is to introduce the two most common types of attacks operating, respectively, at test and training time, namely: i) evasion attacks based on adversarial examples, and ii) backdoor injection, whereby the attacker corrupts the training phase to introduce within the system a weakness to be exploited at test time. The first part of the tutorial will focus on evasion attacks based on adversarial examples. After a general introduction to the problem, the tutorial will explore some of the main challenges attackers must face when they try to implement the attacks in a realistic setting. Real-time demonstrations and experiments will be carried out to exemplify such difficulties, with the aim of appreciating on a more realistic basis the risks posed by adversarial examples in practical scenarios. The second part of the tutorial will be dedicated to the emerging threats posed by backdoor attacks. A general taxonomy will be introduced by relying on different perspectives, including the threat model within which the attack operates, the knowledge and the kind of control that the attacker has over the system, and the final goal of the attack. Some possible defenses will also be outlined, reviewing the most recent literature in the field.
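
For concreteness, the sketch below implements one standard evasion attack, the fast gradient sign method (FGSM), which perturbs an input within an L-infinity budget so as to increase the classifier's loss. It assumes PyTorch and a differentiable classifier `model` that outputs logits; it illustrates the class of attacks discussed in the first part of the tutorial and is not material from the tutorial itself.

```python
# FGSM evasion attack sketch (PyTorch assumed; `model` is any differentiable classifier).
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=8 / 255):
    """Return a copy of x perturbed within an L-infinity ball of radius eps to raise the loss on label y."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction of the sign of the input gradient, then clamp to the valid pixel range.
    x_adv = (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0)
    return x_adv.detach()

# Typical usage with a pretrained classifier and a batch (x, y) with pixels in [0, 1]:
#   x_adv = fgsm_attack(model, x, y)
#   evasion_rate = (model(x_adv).argmax(dim=1) != y).float().mean()
```

Stronger iterative attacks, such as projected gradient descent, follow the same pattern, repeating this gradient step several times with projection back onto the eps-ball.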

  • Mauro Barni graduated in electronic engineering at the University of Florence in 1991. He received the PhD in informatics and telecommunications in October 1995. During the last two decades he has been studying the application of image processing techniques to copyright protection and authentication of multimedia, and the possibility of processing signals that have been previously encrypted without decrypting them. Lately he has been working on theoretical and practical aspects of adversarial signal processing with a particular focus on adversarial multimedia forensics.

He is author/co-author of more than 300 papers published in international journals and conference proceedings, and holds five patents in the field of digital watermarking and image authentication. He is co-author of the book “Watermarking Systems Engineering: Enabling Digital Assets Security and other Applications”, published by Dekker Inc. in February 2004.

He has participated in several national and international research projects on diverse topics, including computer vision, multimedia signal processing, remote sensing, digital watermarking, and multimedia forensics.

He was the Editor-in-Chief of the IEEE Transactions on Information Forensics and Security for the years 2015-2017. He was the founding editor of the EURASIP Journal on Information Security. He has served as associate editor of many journals, including several IEEE Transactions. Prof. Barni was the chairman of the IEEE Information Forensics and Security Technical Committee (IFS-TC) from 2010 to 2011. He was the technical program chair of ICASSP 2014. He was appointed Distinguished Lecturer of the IEEE SPS for the years 2013-2014. He is the recipient of the 2016 Individual Technical Achievement Award of EURASIP. He is a Fellow of the IEEE and a member of EURASIP.

  • Cecilia Pasquini received her BS and MS degrees in Mathematics from the University of Ferrara, Italy, in 2010 and 2012, and her PhD in Information and Communication Technology from the University of Trento, Italy, in 2016. From 2016 to 2019, she was a postdoctoral fellow at the Privacy and Security Lab of the Universität Innsbruck, Austria, and at the IT Security Lab at the University of Münster, Germany. Since 2020, she has been a Junior Assistant Professor in the Department of Information Engineering and Computer Science of the University of Trento, Italy. Her research interests include image and video forensics, multimedia security, adversarial signal processing, and machine learning. She was General Co-Chair of the ACM Workshop on Information Hiding & Multimedia Security 2018 and is an elected member of the EURASIP B.ForSec Technical Area Committee. She is a member of the Technical Program Committees of several conferences and workshops and serves as a reviewer for many journals (e.g., IEEE TIFS, IEEE TCSVT, IEEE TIP). She received the Top 10% Paper Award at IEEE MMSP 2013 and the “F. Carassa” GTTI 2015 award for the best ongoing PhD research.