After a failed Linux backdoor attempt grabbed headlines, open source leaders are warning of new attacks

The beauty of open source software lies in the dispersed communities that develop and maintain the code, often thanklessly. But while there is strength in this approach, it can also present risks.

This was recently made clear by the discovery of a backdoor that had been inserted into XZ Utils, a set of data compression tools built into many distributions of the Linux operating system. Discovered by a Microsoft engineer named Andres Freund, the flaw could have enabled a major cyberattack with global ramifications, since so many corporate servers run Linux.
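For readers who want to check their own machines: the two XZ Utils releases publicly identified as carrying the backdoor were 5.6.0 and 5.6.1. A minimal Python sketch of such a check might look like this (it assumes the usual first line of `xz --version` output, e.g. `xz (XZ Utils) 5.4.6`):

```python
import re
import shutil
import subprocess

# The two XZ Utils releases publicly identified as backdoored.
BACKDOORED = {"5.6.0", "5.6.1"}

def parse_xz_version(output: str) -> str:
    """Pull the version number out of `xz --version` output,
    which normally begins with a line like 'xz (XZ Utils) 5.4.6'."""
    match = re.search(r"xz \(XZ Utils\) (\d+\.\d+(?:\.\d+)?)", output)
    return match.group(1) if match else ""

def check_local_xz() -> str:
    """Return a short verdict for the locally installed xz, if any."""
    if shutil.which("xz") is None:
        return "xz not installed"
    out = subprocess.run(["xz", "--version"],
                         capture_output=True, text=True).stdout
    version = parse_xz_version(out)
    if not version:
        return "could not parse xz version"
    if version in BACKDOORED:
        return f"{version}: known-backdoored release"
    return f"{version}: not a known-bad release"
```

Note that a clean version string only rules out this particular incident; it says nothing about other supply-chain risks.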

Weeks after Freund’s discovery, we’re none the wiser as to the true identity of the culprit, known to the community only as “Jia Tan.” This was likely a state-sponsored operation, but either way, “Jia Tan” spent years involved in the XZ Utils project before eventually taking it over.

Yesterday, open source leaders warned that the XZ Utils incident was likely not a one-off. In a blog post, senior staff at the Open Source Security Foundation and the OpenJS Foundation, which manages the development of many of the JavaScript technologies that power the web, urged everyone maintaining open source projects to “be alert for social engineering takeover attempts, recognize early patterns of threats that emerge and take steps to protect their open source projects.”

According to the post, someone recently tried to convince the OpenJS Foundation to install them as a maintainer of a popular JavaScript project (it’s not clear which one) so they could “address any critical vulnerabilities.” The modus operandi was apparently similar to that used by Jia Tan, and the foundation spotted a “similar suspicious pattern” in two other JavaScript projects it does not host, alerting the relevant project leaders and U.S. authorities.

“Open source projects always welcome contributions from anyone, anywhere, but giving someone administrative access to the source code as a maintainer requires a higher level of trust and is not provided as a ‘quick fix’ to any problem,” wrote OpenJS Foundation CEO Robin Bender Ginn and Open Source Security Foundation CEO Omkhar Arasaratnam.

“These social engineering attacks exploit the sense of duty maintainers have to their project and community to manipulate them,” they added. “Pay attention to how the interaction makes you feel. Interactions that create self-doubt, feelings of inadequacy, not doing enough for the project, etc. can be part of a social engineering attack.”

Chris Hughes, head of security at Endor Labs, told Computer Weekly he was not surprised to see multiple attempts to infiltrate open source projects in this way.

“We can probably suspect that many of these [attacks] are already underway and may have already been successful but have not yet been exposed or identified,” he said. “Most open source projects are incredibly underfunded and run by one or a small group of maintainers, so the use of social engineering attacks against them is not surprising, and given how vulnerable the ecosystem is and the pressures maintainers are under, they are likely to welcome help in a lot of cases.”

A reminder, if one were needed, of just how vulnerable we humans are. More news below.

David Meyer

Want to send thoughts or suggestions to Data Sheet? Drop a line here.

NEWS

Microsoft’s $1.5 billion G42 investment. Microsoft has invested $1.5 billion in Abu Dhabi-based G42, the UAE’s largest AI company. As Bloomberg reports, the deal follows an agreement by G42 to end its presence in China and its use of Chinese technology, so that it can retain access to American technology, most notably Nvidia’s market-dominating AI chips. Under the new agreement, Microsoft president Brad Smith joins G42’s board, and G42 will use Microsoft’s Azure cloud.

UK AI regulations. Remember when the UK said back in February that it would be in no rush to pass AI legislation? Now the Financial Times reports that legislation is indeed in the works that would require tech companies developing large language models to give the government access to their algorithms and to demonstrate compliance with safety rules. “Officials are exploring moving to regulations for the most powerful AI models,” one unnamed source told the newspaper. In related news, the UK government has announced legislation that will make it a criminal offense to create a sexually explicit deepfake image.

X in Brazil. Less than two weeks after X owner Elon Musk said the platform would restore accounts blocked by Brazil’s Supreme Court, the company has decided it will comply with the court’s orders after all, along with those of Brazil’s Superior Electoral Court. As Reuters reports, Supreme Court Justice Alexandre de Moraes had opened an obstruction of justice investigation against Musk over the episode. In other X news, Musk reportedly plans to charge new users a fee before they can start posting, as a measure against the platform’s plague of bots.

ON OUR FEED

“[Trump Media & Technology Group] may be exposed to greater risks than typical social media platforms due to the focus of its offerings and the involvement of President Donald J. Trump.”

—The Truth Social parent uses an SEC filing to explain the various ways Trump’s involvement threatens the company. Wired lays out those risks, which range from Trump’s potential criminal conviction and his history of taking his companies into bankruptcy protection to the possibility that he may choose to focus his efforts elsewhere. The filing’s suggestion that Trump might sell his stake sent TMTG’s share price down more than 18% yesterday.

IN CASE YOU MISSED IT

Fortune partners with Accenture on AI tool to help analyze and visualize the Fortune 500: ‘You can’t ask a spreadsheet a question’, by Marco Quiroz-Gutierrez

Tesla’s top engineering executive, who led the development of critical technologies, has resigned after 18 years, raising questions about who might eventually succeed Elon Musk as CEO, Bloomberg reports.

Asking Big Tech to oversee AI is like asking ‘oil companies to solve climate change’, says AI researcher, by Eleanor Pringle

Expert claims artificial intelligence won’t lead to mass layoffs any time soon: ‘Look when we were promised fully autonomous cars’ by Christiaan Hetzner

Artificial intelligence could eat a quarter of all U.S. electricity by 2030 if it doesn’t break its energy addiction, says Arm Holdings CEO, by Christiaan Hetzner

BEFORE YOU GO

Ad transparency fails. How are the Big Tech platforms doing on the ad transparency front, given that EU law means they should provide searchable databases of the ads they carry? Very badly, according to researchers at Mozilla and the anti-disinformation firm CheckFirst. “We find a lot of variation among platforms, but one thing is true of all of them: none is a fully functional ad repository, and none will provide researchers and civil society groups with the tools and data they need to effectively monitor the impact of [very large online platforms and search engines] on the upcoming elections in Europe,” they wrote in a report cited by TechCrunch.

This is the web version of Data Sheet, the daily technology business newsletter. Sign up for free delivery to your inbox.
