GUEST ESSAY: How to Detect Whether a Remote Job Candidate Is Legit — or a “Deepfake” Candidate

Technology provides opportunities to positively impact the world and improve lives.

Related: Why facial recognition should be regulated

It also offers new ways to commit crime and fraud. The United States Federal Bureau of Investigation (FBI) issued a public warning in June 2022 about a new type of fraud involving remote work and deepfakes.

The Making of Deepfakes

The world is on track to see around 50% of workers make the transition to sustained, full-time telework. Conducting job interviews online is here to stay, and deepfakes can be part of that new normal.

The term refers to an image or video in which the subject’s likeness or voice has been manipulated to make it look like they said or did something they didn’t.

The deepfake creator uses “synthetic media” applications powered by machine learning algorithms. The creator trains the algorithm on two sets of videos and images: one shows the target’s likeness as it moves and talks in various environments; the second shows faces in different situations and lighting conditions. The app encodes these faces as “low-dimensional representations” to be decoded back into images and videos.

The result is a video of one individual convincingly overlaid with the face of another. The voice is harder to impersonate, but the fake imagery keeps getting more convincing as the algorithms learn to mimic both general human mannerisms and the specific target’s characteristics.
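The shared-encoder, per-identity-decoder idea behind many face-swap tools can be illustrated with a toy sketch. This is a minimal illustration only, using untrained random matrices in place of a real learned model; the dimensions and names are assumptions, not any actual tool’s API:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: a flattened 8x8 grayscale "face" and a small latent code.
IMG_DIM, LATENT_DIM = 64, 8

# A shared encoder compresses any face into a low-dimensional representation.
encoder = rng.normal(scale=0.1, size=(LATENT_DIM, IMG_DIM))

# One decoder per identity learns to reconstruct that person's face.
decoder_a = rng.normal(scale=0.1, size=(IMG_DIM, LATENT_DIM))
decoder_b = rng.normal(scale=0.1, size=(IMG_DIM, LATENT_DIM))

def encode(image):
    return encoder @ image          # image -> latent code

def decode(code, decoder):
    return decoder @ code           # latent code -> image

# The swap trick: encode a frame of person A, then decode it with
# person B's decoder, yielding B's face with A's pose and expression.
frame_of_a = rng.normal(size=IMG_DIM)
latent = encode(frame_of_a)
swapped = decode(latent, decoder_b)

print(latent.shape, swapped.shape)  # (8,) (64,)
```

In a real system both encoder and decoders are deep networks trained on the two image sets described above; the swap step, however, works exactly as sketched.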

Some bad actors also use this technology to create synthetic audio. In one high-profile case, criminals used a deepfake voice to impersonate a senior manager on the phone and successfully authorize large funds transfers. Losses totaled $243,000, and the fraud fooled employees who knew the real person.


Even deepfake examples designed to educate the public, like the doctored video of Nixon delivering a speech he never gave, can mislead observers unintentionally.

The FBI Warning

The FBI announced that its Internet Crime Complaint Center (IC3) had observed an increase in employment-related fraud involving the theft of personally identifiable information (PII) and deepfakes. These scammers frequently use ill-gotten PII to create synthetic images and videos to apply for work-from-home positions. Some of the roles include:

• Information technology (IT)

• Database design and maintenance

• Computer programming and application design

• Technologies related to finance and employment

Some of these roles involve managing intellectual property as well as employee, patient or customer personal information. The stakes are not as simple as lying to land a new job. The larger goal is to use the stolen and synthesized likenesses to secure a position near valuable company data or personal information.

Protecting organizations

Deepfakes are convincing, but there are signs to look for. Machine learning is not flawless and sometimes produces images with telltale artifacts such as:

• The subject blinks too often or not enough.

• The eyebrows or hair, or parts thereof, do not match the subject’s face or movements.

• The skin appears too wrinkled or too perfectly smooth.

• The pitch of the voice does not match the other characteristics of the speaker.

• Reflections in the eyes or glasses do not match the speaker’s environment.

• Other aspects of the speaker’s movement or appearance do not match the expected physical or lighting aspects of the video.
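The first sign on the list, abnormal blinking, can even be screened for automatically. The sketch below assumes you already have a per-frame eye-aspect-ratio (EAR) series from a facial-landmark detector; the threshold and blinks-per-minute range are illustrative assumptions, not calibrated values:

```python
def count_blinks(ear_series, threshold=0.2, min_frames=2):
    """Count blinks in a sequence of per-frame eye-aspect-ratio (EAR) values.

    A blink is a run of at least `min_frames` consecutive frames where
    the EAR drops below `threshold` (eyes closed).
    """
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks

def blink_rate_suspicious(ear_series, fps=30, low=8, high=30):
    """Flag a clip whose blinks-per-minute fall outside a typical human range."""
    minutes = len(ear_series) / fps / 60
    rate = count_blinks(ear_series) / minutes
    return rate < low or rate > high

# 60 seconds of wide-open eyes (EAR ~0.3) with no blinks at all: suspicious.
no_blinks = [0.3] * (30 * 60)
print(blink_rate_suspicious(no_blinks))  # True
```

A real screening pipeline would combine several such signals, since any single heuristic is easy for a better-trained model to defeat.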

Superimposing the likeness of one individual onto another is rarely a seamless process, and voice cloning is similarly imperfect.

Even so, the losses resulting from deepfake abuse are already staggering. A single “deep voice” cloning scheme resulted in $35 million in fraudulent bank transfers.

Best defense: awareness

The Nixon video was one attempt to educate the public; Jordan Peele’s deepfake of President Obama also sought to raise awareness. Elon Musk compared technologies like these to “summoning the demon” to describe how dangerous they can be.

Beyond raising awareness, experts recommend that companies and individuals take concrete measures:

• Agree on a secret question or code word to exchange at the start of all online or phone conversations.

• Partner with a biometrics-focused security company and make sure their authentication technologies are up to the challenge.

• Educate employees and partners about deepfakes using the same techniques as general cybersecurity awareness.

Using technology to fight technology can only take people so far. The best defense against any new attack vector is vigilance, and not being afraid to ask for confirmation when a request arouses suspicion.

About the essayist: Zac Amos writes about cybersecurity and the tech industry, and he is an editor at ReHack. Follow him on Twitter or LinkedIn for more articles on emerging cybersecurity trends.

*** This is a syndicated blog from the Security Bloggers Network of The Last Watchdog written by bacohido. Read the original post at: https://www.lastwatchdog.com/guest-essay-how-to-detect-if-a-remote-job-applicant-is-legit-or-a-deepfake-candidate/
