How a coding error turned AirTags into perfect malware distributors


One of the scariest facts about mobile computing in 2021 is that simplicity and convenience are far too tempting in small devices (think the Apple Watch, AirTags, even rings that track health conditions, smart headphones, etc.).

Compared to their ancestors on laptops and desktops, it is much more difficult on these devices to verify that URLs are correct, that malicious SMS messages and emails stay unopened, and that employees follow the minimum cybersecurity precautions requested by IT. In short, as convenience increases, so do security risks. (Confession: even though I try to be extra vigilant with office email, I periodically, far more often than I should, lower my guard for a message on my Apple Watch.)

Another reality of cybersecurity that has always been with us, and always will be, is that small programming mistakes are easy to make and often get overlooked. And yet these little mistakes can lead to gargantuan security holes. Which brings us to Apple and AirTags.

A security researcher came to CISOs’ rescue when he discovered that a free-form field for typing in a phone number unintentionally turned AirTags into a godsend for malicious criminals.

Let’s turn to Ars Technica for more details on the disaster.

“Security consultant and penetration tester Bobby Rauch has found that Apple AirTags – tiny devices that can be affixed to frequently lost items like laptops, phones, or car keys – do not sanitize user input. This oversight opens the door to using AirTags in a drop attack. Instead of seeding a target’s parking lot with malware-laden USB drives, an attacker can drop a maliciously prepared AirTag,” the publication reports.

“This type of attack doesn’t need much technological know-how – the attacker simply types valid XSS into the AirTag’s phone number field, puts the AirTag into Lost Mode, and drops it somewhere the target is likely to find it. In theory, scanning a lost AirTag is a safe action – it is only supposed to pull up a web page at https://found.apple.com/. The problem is that found.apple.com then embeds the contents of the phone number field in the website as displayed on the victim’s browser, unsanitized.”
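The underlying coding error is easy to reproduce: embedding user-supplied text in a page without escaping it. Here is a minimal sketch of that pattern in Python (the function and field names are hypothetical illustrations, not Apple’s actual code), showing how HTML-escaping the phone-number field would render the same payload inert:

```python
import html


def render_lost_page(phone_field: str, escape: bool = True) -> str:
    """Build a 'lost item' page that embeds the owner-supplied phone field.

    With escape=False, the field is inserted raw -- the flawed behavior.
    With escape=True, angle brackets and quotes become HTML entities.
    """
    value = html.escape(phone_field) if escape else phone_field
    return f"<p>Please call the owner at: {value}</p>"


# An attacker stores a script payload instead of a phone number.
payload = "<script>window.location='https://badside.tld/page.html'</script>"

unsafe = render_lost_page(payload, escape=False)  # live <script> tag in the page
safe = render_lost_page(payload)                  # rendered as inert visible text

print("<script>" in unsafe)  # True: the browser would execute the payload
print("<script>" in safe)    # False: escaped to &lt;script&gt;
```

One line of escaping is the whole difference between a harmless contact page and an attacker-controlled pop-up, which is why "sanitize all user input" is drilled into web developers.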

The worst part about this hole is that the damage it can do is limited only by the creativity of the attacker. Given the ability to stuff almost any URL into that field, coupled with the fact that victims are unlikely to meaningfully investigate what is going on, the bad options are almost endless.

More from Ars Technica: “If found.apple.com innocently embeds the XSS above into the response for a scanned AirTag, the victim gets a pop-up that displays the contents of badside.tld/page.html. This could be a zero-day browser exploit or simply a phishing dialog. Rauch hypothesizes a fake iCloud login dialog, which could be made to look just like the real thing – but which dumps the victim’s Apple credentials onto the attacker’s server,” the story said. “While this is a compelling exploit, it is by no means the only one available – just about anything you can do with a web page is on the table. That ranges from simple phishing, as the example above shows, to exposing the victim’s phone to a zero-day, no-click browser vulnerability.”

Rauch posted many more details on Medium.

This is why the convenience of devices such as AirTags is dangerous. Their small size and single-purpose nature make them appear harmless, which they absolutely are not. Any device that can communicate with anyone or anything at the device’s convenience (and yes, I’m looking at IoT and IIoT door locks, light bulbs, temperature sensors, etc.) is a major threat. It’s a threat to consumers, but it’s a far more dangerous threat to corporate IT and security operations.

That is because when employees and contractors (not to mention distributors, suppliers, partners, and even large customers with network credentials) interact with these small devices, they tend to forget every instruction from their cybersecurity training. End users who are vigilant about email on their desktop (which, sad to say, not everyone is) will still drop the ball on ultra-convenient small devices, like I would. We shouldn’t, but we do.

And this “we shouldn’t” deserves more context. Some of these devices – AirTags and smartwatches included – make cybersecurity vigilance on the part of end users virtually impossible. This AirTag nightmare is just another reminder of that fact.

KrebsOnSecurity took a look at some of the scariest elements of this AirTags problem.

“AirTag lost mode allows users to alert Apple when an AirTag is missing. Setting it to lost mode generates a unique URL at https://found.apple.com and allows the user to enter a personal message and a contact phone number. Anyone who finds the AirTag and scans it with an Apple or Android phone will immediately see that unique Apple URL with the owner’s message,” noted KrebsOnSecurity. “When scanned, an AirTag in lost mode will present a short message asking the finder to call the owner at the specified phone number. This information pops up without asking the finder to log in or provide any personal information. But your average Good Samaritan may not know this.”

That’s a good explanation of the danger, but the most intriguing part is how nonchalant Apple is about this hole – a pattern I’ve seen with Apple on several occasions. The company says it cares, but its inaction says otherwise.

“Rauch contacted Apple about the bug on June 20, but for three months, when he inquired about it, the company would say only that it was still investigating. Last Thursday, the company sent Rauch a follow-up email saying it planned to address the weakness in an upcoming update, and in the meantime, would he mind not talking about it publicly?” KrebsOnSecurity reported. Rauch said Apple never answered the basic questions he asked about the bug, such as whether it had a timeline for fixing it and, if so, whether it planned to credit him in the accompanying security advisory, or whether his submission would qualify for Apple’s bug bounty program, which promises financial rewards of up to $1 million for security researchers who report security bugs in Apple products. Rauch said he has reported numerous software vulnerabilities to other vendors over the years, and that Apple’s lack of communication prompted him to go public with his findings – even though Apple asks researchers not to disclose a bug until it is fixed.

First, Rauch is absolutely right here. When a vendor learns about a security hole, it hurts its users and the industry by sitting on it for months or more. And by not quickly telling a researcher whether or not he will be paid, it leaves him little choice but to alert the public.

At the very least, the vendor should be explicit and specific about when a fix will be deployed. Here’s the kicker: if Apple can’t get to it for some period of time, it has an obligation to report the hole to potential victims so they can change their behavior to avoid it. Fixing the hole is obviously far better, but if Apple doesn’t do that quickly, it creates an untenable situation.

It’s the age-old bug disclosure problem, something these bounty programs were meant to solve. Pre-patch disclosure risks flagging the hole for cyberthieves, who might rush in to take advantage of it. That said, it’s not as if some attackers don’t already know about the hole. In this case, Apple’s inaction does nothing but leave victims exposed to attack.

Apple’s behavior is infuriating. By running a bounty program that links promises of payment to requests for silence, the company takes on an obligation to honor both sides of that bargain. If it has such a program and then takes far too long to fix these holes, it undermines the entire program, as well as consumers and businesses.

Copyright © 2021 IDG Communications, Inc.



