In brief The EU Commission wants to build a giant facial recognition database that will be shared with law enforcement across different countries.
Police in Europe have been able to share data like fingerprints and DNA in criminal investigations with one another for the last 15 years under the Prüm framework, and now lawmakers are pushing an expansion, dubbed Prüm II, that would add facial recognition.
The latest documents, viewed by Wired, show how big this potential database could grow. Member states hold huge stores of official photographs of their citizens: Hungary has 30 million, Italy 17 million, France 6 million, and Germany 5.5 million. These images range from criminal suspects to asylum seekers.
Experts are increasingly pushing back on the proposal launched at the end of last year. “What you are creating is the most extensive biometric surveillance infrastructure that I think we will ever have seen in the world,” said Ella Jakubowska, a policy adviser at the civil rights NGO European Digital Rights.
An EU spokesperson, however, said that “only facial images of suspects or convicted criminals can be exchanged” as part of Prüm II. “There will be no matching of facial images to the general population.”
A new internet language ‘algospeak’ is emerging
Netizens are inventing new words to get around content moderation algorithms as part of a language dubbed “algospeak.”
Posts on social media platforms like TikTok or Instagram can be automatically taken down if they contain toxic or NSFW content. In a bid to skirt these rules, people are replacing banned words with ones that algorithms don’t recognize so their photos or videos aren’t removed from the internet, according to the Washington Post.
Some common examples include saying “unalive” rather than “dead,” “SA” instead of “sexual assault,” “spicy eggplant” for “vibrator,” or “nip nops” in place of “nipples.”
“The reality is that tech companies have been using automated tools to moderate content for a really long time and while it’s touted as this sophisticated machine learning, it’s often just a list of words they think are problematic,” said Ángel Díaz, a lecturer at the UCLA School of Law who studies technology and racial discrimination.
These so-called algospeak posts aren’t necessarily offensive; sometimes people use the made-up phrases to discuss sensitive topics like mental health or sexuality.
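The word-list moderation Díaz describes, and how algospeak slips past it, can be sketched in a few lines. The banned terms and substitutions below are illustrative examples from the article, not any platform’s actual rules:

```python
# Naive keyword-list moderation: flag a post if any banned term appears.
# The list and substitutions are illustrative, not a real platform's rules.
BANNED = {"dead", "sexual assault", "vibrator", "nipples"}

def is_flagged(post: str) -> bool:
    """Flag a post if any banned term appears anywhere in it."""
    text = post.lower()
    return any(term in text for term in BANNED)

# "Algospeak" swaps banned words for stand-ins the list doesn't contain.
ALGOSPEAK = {"dead": "unalive", "vibrator": "spicy eggplant"}

def to_algospeak(post: str) -> str:
    for term, stand_in in ALGOSPEAK.items():
        post = post.replace(term, stand_in)
    return post

original = "my plant is dead"
print(is_flagged(original))                # True
print(is_flagged(to_algospeak(original)))  # False
```

The crude substring match is the point: a filter this simple can’t tell “unalive” means “dead,” which is exactly the gap algospeak exploits.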
Automated radiology tool gets green light from EU
An AI-powered tool that automatically reports healthy-looking X-ray scans has been certified in the European Union, paving the way for the software to be used in real clinical settings across 32 countries.
ChestLink received CE Class IIb certification, according to its creators Oxipit. “It aims to address the shortage of radiologists and their increasing workloads,” the company’s spokesperson Mantas Mikšys told The Register.
ChestLink automatically files a report for an X-ray scan when it detects no abnormalities, such as nodules lodged in a patient’s lungs.
“Even if the patient is healthy, the radiologist still has to file the report. This is a mundane, routine task. Even in the current automation scope, ChestLink autonomously reports on 15 percent to 40 percent of daily workflow. It simply removes these studies from the daily workload of the radiologist. A radiologist can devote more time to analyze images with pathologies,” Mikšys added.
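Oxipit hasn’t published ChestLink’s internals, but the behavior Mikšys describes amounts to threshold-based triage: auto-report only the scans the model is highly confident are normal, and route everything else to a radiologist. A hypothetical sketch, with invented scores and threshold:

```python
# Hypothetical sketch of threshold-based triage, as described for ChestLink:
# only scans the model scores as almost certainly normal are auto-reported;
# the rest stay in the radiologist's queue. Scores and the threshold value
# are invented for illustration.
AUTO_REPORT_THRESHOLD = 0.99  # assumed: require very high "normal" confidence

def triage(scans: list[tuple[str, float]]) -> tuple[list[str], list[str]]:
    """Split scans into auto-reported and radiologist queues.

    Each scan is (scan_id, p_normal), where p_normal is the model's
    probability that the scan shows no abnormality.
    """
    auto, human = [], []
    for scan_id, p_normal in scans:
        (auto if p_normal >= AUTO_REPORT_THRESHOLD else human).append(scan_id)
    return auto, human

scans = [("a1", 0.999), ("a2", 0.97), ("a3", 0.95), ("a4", 0.40)]
auto, human = triage(scans)
print(auto)   # ['a1'] -> removed from the radiologist's daily workload
print(human)  # ['a2', 'a3', 'a4'] -> reviewed by a human as usual
```

Under this scheme the 15 to 40 percent figure Mikšys cites would simply be the share of a day’s scans clearing the confidence bar.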
Now that ChestLink has been cleared, clinicians can deploy it in real-world practice and devote more attention to the patients who need it.
It’s the first medical AI imaging tool approved by regulators to operate autonomously in the EU. Oxipit expects to start deploying the software in hospitals next year.
Can AI tell us how we’re feeling by hearing our voices?
Researchers are experimenting with AI systems that analyze people’s voices to try to identify psychiatric disorders like depression or schizophrenia.
But does the technology really work? There is some evidence that people suffering from depression or anxiety tend to speak in ways that are more monotone-sounding, quiet, or fast, Maria Espinola, a psychologist and assistant professor at the University of Cincinnati College of Medicine, told the New York Times.
Diagnosing and treating mental illness is tricky, and requires careful analysis that involves more than just hearing how someone talks. Still, researchers are trying to see whether AI can help, since it could pick up on signs that human ears may not detect. The idea, however, raises concerns, especially if the technology is hard to interpret and prone to bias.
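The cues Espinola describes — flatter, quieter, faster speech — correspond to measurable acoustic features. A toy sketch of the kind of features such systems compute (the inputs and feature names are illustrative only; real systems extract them from raw audio, and nothing here is diagnostic):

```python
import statistics

# Toy sketch of acoustic features like those voice-analysis systems
# reportedly use: pitch variability (monotone delivery), loudness, and
# speaking rate. Inputs are invented for illustration; not a clinical model.
def voice_features(pitches_hz: list[float], amplitudes: list[float],
                   words: int, seconds: float) -> dict[str, float]:
    return {
        # low standard deviation of pitch ~ monotone-sounding speech
        "pitch_variability": statistics.pstdev(pitches_hz),
        # mean absolute amplitude as a crude loudness proxy
        "loudness": statistics.mean(abs(a) for a in amplitudes),
        "speaking_rate_wpm": words / seconds * 60.0,
    }

feats = voice_features(
    pitches_hz=[118.0, 120.0, 119.0, 121.0],  # narrow range: flat intonation
    amplitudes=[0.02, -0.03, 0.025, -0.02],   # small values: quiet signal
    words=45, seconds=15.0,                   # 180 wpm: fast speech
)
print(feats["speaking_rate_wpm"])  # 180.0
```

Even granting that such features are measurable, mapping them to a diagnosis is the contested step, which is where the interpretability and bias worries come in.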
“For machine learning models to work well, you really need to have a very large and diverse and robust set of data,” said Grace Chang, founder of Kintsugi, a startup that has developed an app to track users’ emotional states by listening to their voices.
The datasets have to be representative of people from different ethnicities, ages, and genders, something that is often lacking in medical datasets.
Whether machine learning can or should analyze voices to accurately study mental health remains debatable. ®