Operation Triangulation: What you get when attack iPhones of researchers
549 points by ruik 1 year ago | 399 comments
- sweis 1 year agoThe video of the talk is online now too: https://www.youtube.com/watch?v=7VWNUUldBEE
- mike_hearn 1 year agoThat's pretty astonishing. The MMIO abuse implies that the attackers either have truly phenomenal research capabilities or that they hacked Apple and obtained internal hardware documentation (more likely).
I was willing to believe that maybe it was just a massive NSA-scale research team up until the part with the custom hash function sbox. Apple appears to have known that the feature in question was dangerous and deliberately hid it, whatever it is, and then gone further and protected it with a sort of (fairly weak) digital signing feature.
As the blog post points out, there's no obvious way you could find the right magic knock to operate this feature short of doing a full silicon teardown and reverse engineering (impractical at these nodes). That leaves hacking the developers to steal their internal documentation.
The way it uses a long chain of high effort zero days only to launch an invisible Safari that then starts from scratch, loading a web page that uses a completely different chain of exploits to re-hack the device, also is indicative of a massive organization with truly abysmal levels of internal siloing.
Given that the researchers in question are Russians at Kaspersky, this pretty much has to be the work of the NSA or maybe GCHQ.
Edit: misc other interesting bits from the talk: the malware can enable ad tracking, and also can detect cloud iPhone service hosting that's often used by security researchers. The iOS/macOS malware platform seems to have been in development for over a decade and actually does ML on the device to do object recognition and OCR on photos on-device, to avoid uploading image bytes: they only upload ML generated labels. They truly went to a lot of effort, but all that was no match for a bunch of smart Russian students.
I'm not sure I agree with the speaker that security through obscurity doesn't work, however. This platform has been in the wild for ten years and nobody knows how long they've been exploiting this hidden hardware "feature". If the hardware feature was openly documented it'd have been found much, much sooner.
- black_puppydog 1 year ago> If the hardware feature was openly documented it'd have been found much, much sooner.
Well, the point of Kerckhoffs's principle is that it should have been openly documented, and then anyone looking at the docs even pre-publication would have said "we can't ship it like that, that feature needs to go."
- kaba0 1 year ago“More eyes discover more bugs” never worked.
- aberoham 1 year agoAlso note the IoC script — this script lets you scan iTunes backups for indicators of compromise from Operation Triangulation. https://github.com/KasperskyLab/triangle_check
- henriquez 1 year agoThis is a fairly incredible attack, and agree with your analysis. The hidden Safari tab portion where they “re-hack” the device could be bad organizational siloing as you mentioned or indicative of a “build your virus” approach that script kiddies used in the 90s. Could be a modular design for rapid adaptation, ie. perhaps less targeted.
- sampa 1 year agoor Apple just implemented this "API" for them, because they've asked nicely
- chatmasta 1 year agoOr they have assets working at Apple... or they hired an ex-Apple employee... etc.
That's the problem with this sort of security through obscurity; it's only secure as long as the people who know about it can keep it secret.
- mike_hearn 1 year agoI don't think hiring an ex-Apple dev would let you get the needed sbox unless they stole technical documentation as they left.
So it either has to be stolen technical docs, or a feature that was put there specifically for their usage. The fact that the ranges didn't appear in the DeviceTree is indeed a bit suspicious, the fact that the description after being added is just 'DENY' is also suspicious. Why is it OK to describe every range except that one?
But the really suspicious thing is the hash. What kind of hardware interface does arbitrary DMA protected by a secret but weak hash function? Is there any legitimate usage for such a thing? I've never heard of such an interface before.
If it's a genuine backdoor and not a weird debugging feature then it should be rather difficult to add one that looks like this without other people in Apple realizing it's there. Chips are written in source code using version control, just like software. You'd have to have a way to modify the source without anyone noticing or sounding the alarm, or modifying it before synthesis is performed. That'd imply either a very deep penetration of Apple's internal network sufficient to inject backdoors into hardware, or they have one or more agents.
This really shows how dangerous it is to intel agencies when they decide to attack security professionals. Attacking Kaspersky has led directly to them burning numerous zero days, including several that might have taken fairly extreme efforts to set up. It makes you wonder what is on these guys' iPhones that's considered so valuable. Presumably, they were after emails describing more zero days in other programs.
- runjake 1 year agoGo onto LinkedIn, search for Apple Global Security staff and you’ll get an answer. The head of and much of the staff are ex-USIC people. Now perform those searches over time and do a little OSINT and observe a revolving door where they are not so ex-.
- markus_zhang 1 year agoI wouldn’t be surprised if one or two very senior people in large tech companies are agency agents, willingly or not.
I don’t really have any proof, but considering the massive gain it shouldn’t surprise anyone. The agencies might not even need to pay large sums of $$$ if said assets have vulnerabilities.
- WhackyIdeas 1 year agoI think the way it’s done is that the code is presented to them to use, Apple probably don’t even code those parts themselves.
- supriyo-biswas 1 year agoSo much misinformation in this thread. It’s a Hamming ECC, as described here[1].
[1] https://social.treehouse.systems/@marcan/111655847458820583
- uncle-betty 1 year agoMore evidence for an ECC, obtained by looking at how the 10 output bits of the function depend on its 256 input bits:
Each of the 10 parity bits output by the function is the xor of exactly 104 of the 256 input bits.
Each of the 256 input bits contributes to (= is xor-ed into) either 3 or 5 of the 10 parity bits.
This is in line with the SEC-DED (single error correction, double error detection) ECC construction from the following paper:
https://people.eecs.berkeley.edu/~culler/cs252-s02/papers/hs...
Translating the above observations about the function into properties of the H matrix in the paper:
Each row of the matrix contains an identical number of ones (104).
Each column of the matrix contains an odd number of ones (3 or 5).
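The construction described above can be sketched with the smallest member of the same family — a toy extended Hamming(8,4) SEC-DED code. This is purely illustrative, not Apple's actual 256-bit/10-bit cache ECC:

```python
# Toy SEC-DED (single-error-correcting, double-error-detecting) code:
# an extended Hamming(8,4) code, the same family of construction the
# parity-bit observations point at. Illustrative only.

def encode(nibble):
    """Encode 4 data bits (int 0..15) into an 8-bit codeword."""
    d = [(nibble >> i) & 1 for i in range(4)]           # d0..d3
    code = [0] * 8                                      # positions 1..7 used
    code[3], code[5], code[6], code[7] = d[0], d[1], d[2], d[3]
    code[1] = code[3] ^ code[5] ^ code[7]               # parity over 1,3,5,7
    code[2] = code[3] ^ code[6] ^ code[7]               # parity over 2,3,6,7
    code[4] = code[5] ^ code[6] ^ code[7]               # parity over 4,5,6,7
    word = sum(code[i] << (i - 1) for i in range(1, 8))
    overall = bin(word).count("1") & 1                  # extended parity bit
    return word | (overall << 7)

def decode(word):
    """Return (status, data), status in {'ok', 'corrected', 'double'}."""
    overall_ok = bin(word).count("1") % 2 == 0          # whole word even parity
    syndrome = 0
    for pos in range(1, 8):
        if (word >> (pos - 1)) & 1:
            syndrome ^= pos                             # XOR of set positions
    if syndrome and overall_ok:
        return ("double", None)                         # two flips: detect only
    if syndrome:
        word ^= 1 << (syndrome - 1)                     # one flip: repair it
    bits = [(word >> (p - 1)) & 1 for p in (3, 5, 6, 7)]
    data = sum(b << i for i, b in enumerate(bits))
    status = "ok" if (not syndrome and overall_ok) else "corrected"
    return (status, data)
```

In Apple's case each of the 10 parity bits is an XOR of 104 of the 256 inputs; the toy code above just shrinks the same row/column-weight idea down to something readable.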
- mike_hearn 1 year agoVery interesting, thanks. Summarizing that thread:
- The "hash" is probably an error correcting code fed into GPU cache debug registers which will be stored in the cacheline itself, you're expected to compute the ECC because it's so low level. That is, the goal isn't to protect the DMA interface. (but this isn't 100% certain, it's just an educated guess)
- The "sbox" is similar to but not the same as a regular ECC as commonly used in hardware.
- Martin argues that the existence of such registers and the code table could have been guessed or brute forced, even though a compromise or info leak from Apple seems more likely. Or possibly even from the old PowerVR days. But if it's the NSA then who knows, maybe they are literally fuzzing hidden MMIO ranges to discover these interfaces.
- This is possible because the GPU has full DMA access without an IOMMU for performance reasons, so it's fertile ground for such exploits. Probably more will be discovered.
So that's all reassuring.
- codedokode 1 year agoWhy do you need error-correction code for a debugging feature though? I would not protect debug registers with a hash.
- gorkish 1 year agoBecause you are DMA-ing the raw bits into cache with the GPU, but the CPU is going to check those ECC codes on read, as the caches on Apple SoCs are ECC-native. It's an integrity 'protection', not a security 'protection'.
- eastof 1 year agoMaybe more likely they just have people inside Apple?
- newsclues 1 year agoThis is likely at the scale of Apple and the determination of State Actors.
- malaya_zemlya 1 year ago>also is indicative of a massive organization with truly abysmal levels of internal siloing.
Or a joint project between several organizations.
- LargeTomato 1 year agoOr, like, they have a root kit and it works so why reinvent the wheel? They have an attack payload so why reinvent the wheel? Just plug and play all the packages you need until you can compromise your target device.
- computerfriend 1 year agoBut there is a very good reason to reinvent the wheel here: to not burn more zero-days than you have to.
- mike_hearn 1 year agoThe attack payload should not be so tied to an exact installation path that you can't just install it via a different exploit chain.
- raincom 1 year agoAn S-box (substitution box) is a lookup table used in symmetric ciphers in cryptography. You can see an example of the Rijndael S-box in a Python script here [1][2].
[1] https://github.com/kokke/tiny-AES-c/blob/f06ac37fc31dfdaca2e...
[2] https://anh.cs.luc.edu/331/code/aes.py
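To make the lookup-table idea concrete, here is a minimal sketch with a made-up 4-bit S-box (not the real Rijndael S-box, which is a 256-entry byte table derived from inversion in GF(2^8) plus an affine transform):

```python
# A substitution box is just a lookup table mapping each input value to a
# fixed output value. Toy 4-bit S-box for illustration only.
TOY_SBOX = [0x6, 0x4, 0xC, 0x5, 0x0, 0x7, 0x2, 0xE,
            0x1, 0xF, 0x3, 0xD, 0x8, 0xA, 0x9, 0xB]

# The inverse table lets the substitution be undone during decryption;
# this only works because TOY_SBOX is a permutation of 0..15.
TOY_INV = [0] * 16
for i, v in enumerate(TOY_SBOX):
    TOY_INV[v] = i

def substitute(nibbles):
    """Apply the S-box to a sequence of 4-bit values."""
    return [TOY_SBOX[n] for n in nibbles]
```

In hardware, such a table is often synthesized down to a tangle of gates rather than stored as a literal array, which is one reason (as noted elsewhere in this thread) the same bit patterns may never appear in any firmware binary.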
- boppo1 1 year ago>there's no obvious way you could find the right magic knock to operate this feature short of doing a full silicon teardown and reverse engineering (impractical at these nodes).
Then how did these researchers do it? Not being cheeky, I just don't follow security super closely.
- mike_hearn 1 year agoThey reverse engineered the malware.
- jsjohnst 1 year agoA compromise on the GPU or ARM side seems an equally possible route.
- stephen_g 1 year agoWhat do you mean? Both the GPU and CPU design are proprietary to Apple. They used to use regular ARM designed cores but the last one of those before switching to their own core design was something like the A5 days (from memory). It uses the ARM instruction set but isn’t actually designed by ARM at all.
Similar for the GPU too. They may have started with HDL licensed from others (like I think their GPU might actually have been directly based on the PowerVR ones they used to use, but I believe the ARM one is basically from-scratch) but this vulnerability seems unlikely to have existed since then…
- jsjohnst 1 year agoCoreSight is not Apple proprietary, it’s part of ARM’s offering. This vulnerability appears to be part of CoreSight.
> but I believe the ARM one is basically from-scratch
Then you believe wrongly. There’s still a bunch of ARM IP in their CPU.
- throwaway2037 1 year ago> truly phenomenal research capabilities
Maybe a nation state, e.g., APT?
- rst 1 year agoBeing able to put together tooling with these capabilities makes the attacker an APT by definition. These are generally assumed to be national intelligence services, though that is an assumption. (Among other things, there are multiple countries where the lines between intelligence agencies and their contractors are... fuzzy.)
And while Kaspersky is refusing to speculate at all about attribution, the Russian government has claimed (without giving specific evidence) that it's NSA.
- AtlasBarfed 1 year agoI thought there were Israeli private services/ contractors providing APT as a service to, for example, Saudi Arabia or other despotic regimes.
I think that was in the news back around the Sochi Olympics. The value of cyber capabilities is only going up with time.
The siloing may be due to multiple contractors. I imagine these exploit vendors are protective of their arsenal of attacks.
Because as has been said many times, the three letter agencies aren't exempt from the curse of government employee mediocrity.
- DantesKite 1 year agoSteve Weis on Twitter described it best:
“This iMessage exploit is crazy. TrueType vulnerability that has existed since the 90s, 2 kernel exploits, a browser exploit, and an undocumented hardware feature that was not used in shipped software”
https://x.com/sweis/status/1740092722487361809?s=46&t=E3U2EI...
- Muehe 1 year agoFor those interested in the talk by the Kaspersky researchers, the cleaned video isn't uploaded yet, but you can find a stream replay here:
https://streaming.media.ccc.de/37c3/relive/a91c6e01-49cf-422...
(talk starts at minute 26:20)
- Sweepi 1 year ago...and its online: https://media.ccc.de/v/37c3-11859-operation_triangulation_wh...
- cf1241290841 1 year agoAs it's about a 37C3 presentation, here's a comment from Fefe¹ in German: https://blog.fefe.de/?ts=9b729398
According to him, the exploit chain was likely worth an 8-digit dollar figure.
¹ https://en.wikipedia.org/wiki/Felix_von_Leitner
I guess somebody is going to get fired.
- saagarjha 1 year agoWhy? Having exploits “burned” is part of the business.
- cf1241290841 1 year agoExploit yes
Decade old Backdoors no
- _kbh_ 1 year ago> Decade old Backdoors no
I really doubt it's a backdoor. After reading the blog post and this thread chain from a prolific M1 MacBook hacker (marcan), I think it was just an unused or very rarely used feature that was left enabled by accident.
https://social.treehouse.systems/@marcan/111655847458820583
Some choice quotes.
First, yeah, the dbgwrap stuff makes perfect sense. I knew about it for the main CPUs, makes perfect sense it'd exist for the ASCs too. Someone had a lightbulb moment. We might even be able to use some of those tricks for debugging stuff ourselves :)
Second, that "hash" is almost certainly not a hash. It's an ECC code*. I bet this is a cache RAM debug register, and it's writing directly to the raw cache memory array, including the ECC bits, so it has to manually calculate them (yes, caches in Apple SoCs have ECC, I know at least AMCC does and there's no reason to think GPU/ASC caches wouldn't too). The "sbox" is just the order of the input bits to the ECC generator, and the algorithm is a textbook ECC code. I don't know why it's somewhat interestingly shuffled like that, but I bet there's a hardware reason (I think for some of these things they'll even let the hardware synthesis shuffle the bits to whatever happens to be physically optimal, and that's why you won't find the same table anywhere else).
- londons_explore 1 year agoCoreSight is not some backdoor - it's a debug feature of all ARM CPUs. This looks like a necessary extension to CoreSight to work with Apple's memory protection stuff.
Even though no public documentation exists, I'm sure thousands of Apple engineers have access to a modded gdb or other tooling to make use of it.
- repiret 1 year agoOne persons debug tool is another’s back door.
- smallnix 1 year agoThat does not explain the weird hashing.
- duskwuff 1 year agoAs explained by marcan: it's not "hashing", it's an error-correcting code. Much more understandable in that light.
- adrian_b 1 year agoThat the secret registers are in fact cache test registers, as explained at that link, is a very plausible explanation for their existence.
Nevertheless, this does not explain at all the astonishing fact that they were mapped by default in the accessible memory space, unless listed and explicitly denied in the system configuration files.
No amount of incompetence seems enough to explain such a default policy, so the supposition of an intentional backdoor still seems more likely.
- transpute 1 year agoiMessage can be disabled by local MDM for supervised devices, via free Apple Configurator in macOS app store, https://support.apple.com/guide/deployment/restrictions-for-...
SMS/MMS messages and non-emergency cellular radio traffic can be disabled by a SIM PIN, e.g. when using the device for an extended period via WiFi.
> For Wi-Fi–only devices, the Messages app is hidden. For devices with Wi-Fi and cellular, the Messages app is still available, but only the SMS/MMS service can be used.
- fishywang 1 year agoWe purchased an iPad with cellular, with the plan to put my home country's SIM card in it so I can still receive SMS (as most of the banks there still require SMS verification when you log in), and it turns out that an iPad with cellular does not really show you SMSes that are not from the carrier of the SIM card.
- transpute 1 year ago> iPad with cellular does not really show you SMS's that's not from the carrier of the sim card.
Does iPad support SMS? The cellular line is usually only for data, https://www.howtogeek.com/710767/how-to-send-sms-text-messag...
iPads can't send SMS text messages through Apple's Messages app. Even if you have an iPad with a cellular data plan for mobile internet on the go, you still can't send SMS text messages.
- fishywang 1 year agoApple's own user guide (https://web.archive.org/web/20201223140550/https://support.a...) suggests otherwise:
>In the Messages app , you can send text messages as SMS/MMS messages through your cellular service, or ...
Also my own experience is that it at least can receive SMS text messages, just it won't show you if it's not from your carrier (if it's from your carrier, it shows you via a popup window or something, can't really remember as that was several years ago).
- fsckboy 1 year agoI've never understood why iPads can't be used as phones with an ordinary cellphone SIM. Is it simply because Apple doesn't want to pay a Qualcomm licensing fee or some equivalent? Who is it in the chain/ecosystem that does not want tablets being used as full phones, the carriers? Apple?
- jrockway 1 year agoI'm guessing it doesn't fit well with the carriers' price structure. Adding a tablet / smart watch / etc. is cheaper than adding another phone to your account. I wouldn't have a cellular iPad if it was a lot extra per month, but I think I pay $10 for both the tablet and the watch, which is fine with me.
- londons_explore 1 year agoNotice that the hash value for a data write of all zeros is zero...
And for a single bit, the hash value is a single entry from the sbox table. That means this hash algorithm could reasonably have been reverse engineered without internal documentation.
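A sketch of why those two observations are so powerful: together they imply the "hash" is linear over GF(2), so its responses to the 256 single-bit inputs determine it completely. The SECRET table and hw_hash oracle below are stand-ins for the hardware, under the assumption that an attacker can observe whether a guessed hash is accepted:

```python
import secrets

# If hash(0) == 0 and hash(single bit i) == table[i], the function is
# linear over GF(2): hash(x) is the XOR of table[i] for every set bit i.
# An attacker who can probe the 256 unit vectors recovers the whole
# table without any documentation.

NBITS = 256
# Stand-in for the secret per-bit table (10-bit entries, like the sbox
# in the writeup). Randomized here purely for demonstration.
SECRET = [secrets.randbits(10) for _ in range(NBITS)]

def hw_hash(x):
    """Hypothetical device oracle: GF(2)-linear hash of a 256-bit value."""
    h = 0
    for i in range(NBITS):
        if (x >> i) & 1:
            h ^= SECRET[i]
    return h

# Recover the table by hashing each unit vector 1 << i.
recovered = [hw_hash(1 << i) for i in range(NBITS)]

def predict(x):
    """Predict the device's hash using only the recovered table."""
    h = 0
    for i in range(NBITS):
        if (x >> i) & 1:
            h ^= recovered[i]
    return h
```

With the recovered table, `predict(x)` matches `hw_hash(x)` for every input, which is exactly the property that would let an attacker forge valid writes.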
- londons_explore 1 year agoThis 'smells' like a typical way to prevent memory writes to random addresses accidentally triggering this hardware. Doesn't look like it was intended as a security feature.
In fact, this is how I'd implement it if someone said to me it was important that bugs couldn't lead to random writes. This implementation also effectively prevents someone using this feature whilst giving a buffer address they don't know the contents of.
10 bits of security is probably enough for that as long as you reboot the system whenever the hash value is wrong. The coresight debug functionality can totally reboot the system if it wants to.
- tedunangst 1 year agoLike a CRC? I'm reminded of the Broadcom compression algorithm that required tedious reverse engineering, or a look at the Wikipedia page with sample code.
- the-rc 1 year agoOn the Amiga, you had to write to a blitter control register (BLTSIZE?) twice with the same value or it wouldn't do anything. This might be the same, only a lot more paranoid.
But it might also be a backdoor, intended or not.
- londons_explore 1 year agoWhat are the chances this MMIO register could have been discovered by brute force probing every register address?
Mere differences in timing could have indicated the address was a valid address, and then the hash could perhaps have been brute forced too since it is effectively a 20 bit hash.
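For a sense of scale, exhausting a 20-bit space is trivial. The oracle and its hidden value below are made up for illustration, assuming (as the comment does) that a wrong guess produces an observable difference rather than an unrecoverable crash:

```python
import time

# A 20-bit check admits only 2**20 (~1M) possible values, so even blind
# enumeration finds the right one almost instantly.

def oracle(guess, secret=0xB7A1C):   # hypothetical hidden 20-bit value
    """Stand-in for 'did the hardware accept this hash?'"""
    return guess == secret

start = time.perf_counter()
found = next(g for g in range(1 << 20) if oracle(g))
elapsed = time.perf_counter() - start

print(f"recovered {found:#07x} after {found + 1} guesses in {elapsed:.2f}s")
```

On real hardware each probe would be far slower than a Python function call, but even at thousands of probes per second a 2^20 space falls in minutes.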
- chatmasta 1 year agoIt looks like the registers could have been identified fairly easily via brute force. They're physically close to documented GPU registers, and accessing them triggers a GPU panic, which is how the researchers attributed them to the GPU component. The attackers could have used that same test to identify the existence of the registers.
The part that's less easily explained is how they were able to reconstruct a custom sbox table to execute the debug code. That's where the "insider threat" insinuations are strongest, but personally I'm not convinced that it precludes any number of other plausible explanations. For example, the attackers could have extracted the sbox from: older firmwares, OTA update patches, pre-release development devices (probably purchasable on ebay at some points), iOS beta releases, or a bunch of other leaky vectors.
The researcher basically says "I couldn't find this sbox table in any other binary where I looked for it." Well, that's not necessarily surprising since it appears to be Apple specific and thus there are a limited number of binaries where it might have appeared. And as the researcher notes, this includes now unpublished binaries that might have been mistakenly released. It's totally plausible that the attackers got lucky at some point while they were systematically sniffing for this sort of leak, and that the researcher is unlikely to have the same luck any time soon.
- londons_explore 1 year agoLooking at that sbox implementation, I can't believe it was implemented as a lookup table in the hardware of the chip - there must be some condensed Boolean expression that gives the same result.
The fact the attackers didn't know that Boolean expression suggests they reverse engineered it rather than had documentation.
- contingencies 1 year agoBegins @ 27:21
In addition to the contents of the presentation, in terms of timeline...
2018 (September): First undocumented MMIO-present CPU launched, Apple A12 Bionic SOC.
2021 (December): Early exploit chain infrastructure backuprabbit.com created 2021-12-15T18:33:19Z, cloudsponcer.com created 2021-12-17T16:33:50Z.
2022 (April): Later exploit chain infrastructure snoweeanalytics.com created 2022-04-20T15:09:17Z suggesting exploit weaponized by this date.
2023 (December): Approximate date of capture (working back from the "half year" quoted analysis period + mid-2023 Apple reports).
The presenters also state that signs within the code reportedly suggested the origin APT group has used the same attack codebase for "10 years" (ie. since ~2013) and also uses it to attack macOS laptops (with antivirus circumvention). The presenters note that the very "backdoor-like" signed debug functionality may have been included in the chips without Apple's knowledge, eg. by the GPU developer.
So... in less than 3.5 years since the first vulnerable chip hit the market, a series of undocumented debug MMIOs in the Apple CoreSight GPU requiring knowledge of a lengthy secret were successfully weaponized and exploited by an established APT group with a 10+ year history. Kaspersky are "not speculating" but IMHO this is unlikely to be anything but a major state actor.
Theory: I guess since Apple was handed ample evidence of ~40 self-doxxed APT-related AppleIDs, we can judge the identity using any follow-up national security type announcements from the US. If all is quiet it's probably the NSA.
- mike_hearn 1 year agoIt's really a pity they explain all the mistakes that helped the malware be detected.
- halJordan 1 year agoIt's not, it really isn't. Honestly, just apply this mentality to one other scenario to test the waters. Should we stop publishing YARA rules because it tips our hand to the malware makers? It's nonsense to even say.
- mb4nck 1 year agoThe (first?) version of the real recording is now up: https://media.ccc.de/v/37c3-11859-operation_triangulation_wh...
- WalterBright 1 year agoThe extra hardware registers might have been discovered by examining the chip itself. One could find where the registers were on it, and notice some extra registers, then do some experimenting to see what they did.
- mhh__ 1 year agoMaybe, but chips already have vast, vast, quantities of physical registers in a big blob.
Assuming it wasn't a lucky guess, timing attacks are often used to find this stuff.
- codedokode 1 year agoIsn't it easier to just pay one of the hundreds of employees with access to the chip design? Or even get it without paying, by appealing to patriotism?
- sangnoir 1 year agoHow many ex-Apple employees work(ed) at NSA? It may just have been the right person doing their regular 9-5 job, with no subterfuge. The list of employers for Hardware security folks is likely a couple of dozen companies, and Apple and NSA are among the most prestigious of them. I expect some employees to move in both directions.
- cryu 1 year agoI know of two, one from my team. Don't know how long they stayed there, though.
- WitCanStain 1 year agoOr just covertly tell Apple to hand over its documentation / to knowingly leave gaps in the defenses for NSA to exploit.
- gusfoo 1 year ago> The extra hardware registers might have been discovered by examining the chip itself.
Perhaps. But it's easier to phone the technical librarian and say "Hi! I'm Bob from the password inspection department. Can you verify your current password for me?"
- smith7018 1 year agoDo you know how this is possible? Would decapping the SoC or taking an xray of it provide a physical map of the registers?
- mhh__ 1 year agoYou can find the register file relatively easily because it's a block of memory that's the same on each core but isn't cache, but it isn't a 1:1 map from architectural registers that we would recognize: the chip is designed to find an optimal allocation of slots in the register file to runtime values.
- pm215 1 year agoThese particular registers aren't part of the CPU proper anyway, so not in the register file in that sense -- they're mmio mapped, and https://securelist.com/operation-triangulation-the-last-hard... concludes that they are "a block of CoreSight MMIO debug registers for the GPU coprocessor".
- saagarjha 1 year agoThat’s where the GPRs would live. There’s no reason you have to put weird MMIO there too.
- _ink_ 1 year agoMaybe, or somebody talked.
- throwaway81523 1 year agoPhilip Zimmermann a while back was working on a secure phone product called the Black Phone. I tried to convince him that a secure phone should not contain any microphones of any kind. That sounds a bit weird for a phone, but it's ok, if you want to make a voice call, just plug a headset into it for the duration of the call. He wasn't convinced, but this iphone exploit makes me believe it more than ever.
- x1sec 1 year agoPerhaps a physical switch that connects or disconnects the electrical signal from the microphone to the handset could be a more convenient approach.
There is a photo of Mark Zuckerberg with a cut off 3.5mm jack plugged into his laptop - likely to achieve a similar outcome.
- fsflover 1 year agoMy phone has a hardware kill switch for mic/camera.
- stefan_ 1 year agoMaybe I'm too dumb to find it on this page but if you are looking for the actual recording instead of a calendar entry in the past, it's here (a stream dump for now, fast forward to 27 mins):
- trustingtrust 1 year ago>Hardware security very often relies on “security through obscurity”, and it is much more difficult to reverse-engineer than software, but this is a flawed approach, because sooner or later, all secrets are revealed.
The later works when you are not as big as Apple. When you are as big as Apple, you are a very hot target for attackers. There is always the effort vs reward when it comes to exploiting vulnerabilities. The amount of effort that goes into all this is worth thousands of dollars even if someone is doing it just for research. If I was doing this for some random aliexpress board it would be worth nothing and probably security by obscurity would mean no one really cares and the later part works here. But I wonder what Apple is thinking when they use obscurity cause people must start working on exploiting new hardware from day 1. You literally can get one on every corner in a city these days. Hardware Security by obscurity for example would be fine for cards sold by someone like nvidia to only some cloud customers and those are then assumed obsolete in a few years so even if someone gets those on eBay the reward is very low. iPhones on the other hand are a very consumer device and people hang on to their devices for very long.
- I_Am_Nous 1 year ago>Although infections didn’t survive a reboot
Reminder to reboot your iPhone at least weekly if you are concerned about this kind of attack.
- x1sec 1 year agoIn a week, a lot of data can be exfiltrated. Then after you have rebooted, the threat actor reinfects your device.
Frequently rebooting the device can’t hurt but it likely isn’t going to prevent a threat actor from achieving their objectives.
The best mitigation we have is to enable lockdown mode.
- transpute 1 year ago> reboot your iPhone at least weekly
with the Hard Reset key sequence, https://www.wikihow.com/Hard-Reset-an-iPhone
- wyre 1 year agoSorry for the lay question but what’s the benefit of the hard reset over a general restart?
- Mattwmaster58 1 year agoLayperson here, so just guessing: if not using the hard reset method, the exploit might fake the reboot sequence to maintain its own persistence. AFAIK, a hard reset is implemented in hardware and thus impossible to fake.
- carleton 1 year agoI believe they’re assuming that malware can do a pretend reboot whereas the hardware keystroke can’t be faked.
- HumanOstrich 1 year agoNo, they could monitor when devices rebooted and re-infect them immediately, as the article states.
- kevinwang 1 year agoWow, that's amazing. I wonder if attackers like this feel unappreciated since they can't take credit for their work.
- belter 1 year agoPublic key cryptography was developed in the 1970s at GCHQ, but that was classified.
- codedokode 1 year agoI see that one of the steps in exploit was to use GPU registers to bypass kernel memory protection. Does it mean that the vulnerability cannot be fixed by an update and existing devices will stay vulnerable?
- ipython 1 year agoThe mitigation is that the mmio range in question has been marked as unwritable in the device trees on recent versions of iOS.
- transpute 1 year agohttps://x.com/alfiecg_dev/status/1740025569600020708
It’s a hardware exploit, using undocumented registers. It can only be mitigated against, but not fully patched.
- flakiness 1 year agoI don't think there is any JIT on GPU and all the code has to go through a host-side kernel call so it should be able to protect the register I guess?
- saagarjha 1 year agoThe kernel cannot protect against this, in fact the attackers have full read/write control and code execution capabilities to mount this attack. The fix is blocking this range from being mapped using features that are more powerful than the kernel.
- kristofferR 1 year agoWhy would the attackers target Kaspersky employees? Seems like a great way to get your exploit chain exposed.
- youngtaff 1 year agoPerhaps Kaspersky is doing offensive work for someone?
- luke-stanley 1 year agoI didn't hear anyone mention fuzzing once. I guess there was probably very specific insider knowledge being made use of, and they wanted to point a finger, which is fair enough I guess. I'm just a bit surprised that it has not been mentioned so far in the discussion. Anyhow, it seems an allow-list approach by Apple would have been better than a deny-list approach. They literally weren't checking writes against expected bounds!
- camkego 1 year agoThis is a really good question.
Fuzzing is about searching the state space of an entity (a function, a method, or I suppose even a hardware block) for unexpected, undefined, or maybe even undocumented behavior.
Certainly this could have been used by the exploiters of these bugs to find undocumented but desirable effects in iOS hardware blocks or devices.
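A hardware-register fuzzer of the sort described here is conceptually simple: poke random values at word-aligned offsets and watch for observable state changes. A toy sketch, where the accessor functions are stand-ins (real MMIO fuzzing on a live SoC risks hanging the device):

```python
import random

def fuzz_mmio(write_reg, read_state, base, span, trials=10_000, seed=0):
    """Randomly write to registers in [base, base+span) and record any
    write that visibly changed device state. write_reg and read_state
    are stand-ins for real MMIO accessors."""
    rng = random.Random(seed)
    hits = []
    baseline = read_state()
    for _ in range(trials):
        reg = base + rng.randrange(0, span, 4)  # word-aligned offset
        val = rng.getrandbits(32)
        write_reg(reg, val)
        state = read_state()
        if state != baseline:
            hits.append((reg, val))
            baseline = state
    return hits
```

In practice the hard parts are the observation function and surviving lockups, which is partly why insider documentation is so much more plausible here than brute-force search, especially with the hash check in the way.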
- cf1241290841 1 year agoIt's one of the major arguments against backdooring systems, even if you think that's acceptable. In the end you create a backdoor for everyone, even if you don't do it as moronically as here. You become the hostile actor.
- Alex3917 1 year agoIf they were using a deny list, that sounds like an intentional backdoor.
- luke-stanley 1 year agoIt might just be that they couldn't think of another way to code it though.
- neilv 1 year ago> If we try to describe this feature and how the attackers took advantage of it, it all comes down to this: they are able to write data to a certain physical address while bypassing the hardware-based memory protection by writing the data, destination address, and data hash to unknown hardware registers of the chip unused by the firmware.
Did the systems software developers know about these registers?
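The quoted mechanism behaves like a hash-gated DMA write: store the payload, a destination physical address, and a hash of the payload into magic registers, and the hardware performs the write. The register layout and hash below are stand-ins (per Kaspersky's writeup the real hash used a custom sbox); this only models the behavior:

```python
def hw_hash(data: bytes) -> int:
    """Placeholder for the undocumented hash; the real one reportedly
    used a custom sbox. A trivial checksum stands in for illustration."""
    return sum(data) & 0xFFFFFFFF

class HiddenDebugWriter:
    """Model of the hidden feature: a physical-memory write is honored
    only when the supplied hash matches the supplied data."""
    def __init__(self, phys_mem: bytearray):
        self.mem = phys_mem
        self.data = b""      # "data" register bank
        self.dest = 0        # "destination address" register
        self.hash = 0        # "data hash" register

    def commit(self) -> bool:
        if hw_hash(self.data) != self.hash:
            return False     # wrong knock: write silently dropped
        self.mem[self.dest:self.dest + len(self.data)] = self.data
        return True
```

The point of the hash gate is that knowing the register addresses alone is useless: an attacker also needs the hash algorithm, which is why the recovered custom sbox is such strong evidence of access to internal documentation.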
- amai 1 year agoSee also the article from Ars Technica in June 2023: https://arstechnica.com/information-technology/2023/06/click...
- Despegar 1 year agoI'm curious to know from experts if there's anything Apple can do to create a step-change in terms of security of iPhones? Like if the going rate for a zero day is $1 million, is there anything Apple can do that can drive that up to $2 or $3 million? Or is it just going to be a perpetual cat and mouse game with no real "progress"?
- develatio 1 year agoI am by no means a security expert whatsoever. Period. But reading the article carefully, there is a step in the chain of exploits (CVE-2023-32435) which depends on exploiting Safari. Apple implemented a "Lockdown mode" (https://support.apple.com/en-us/105120) which might have handled this (?).
Answering more broadly to your question, the "step-change" that you're asking for is precisely the "Lockdown mode" in iOS devices. It disables most of the features in order to reduce the attack surface of the device.
- codedokode 1 year agoIf you read a better article with technical details [1], you'll see that Apple SoCs contain a "feature" (one that resembles a debugging tool) that allows bypassing memory protection by writing into undocumented and unused GPU registers. Apple locks down kernel memory to stop exploits, but these registers allow bypassing the lock.
This is the key vulnerability; without it, the whole exploit chain would be useless.
[1] https://securelist.com/operation-triangulation-the-last-hard...
- hn_throwaway_99 1 year agoThe Safari vulnerability wasn't necessary (the device was completely owned before that), and was really just a "nice to have" - it allowed verification of the targeted user and, presumably, customizable malware delivery. From the article, if you look at the bullet points under the Kaspersky diagram of the exploit chain:
> After exploiting all the vulnerabilities, the JavaScript exploit can do whatever it wants to the device and run spyware, but attackers chose to: a) launch the imagent process and inject a payload that cleans the exploitation artifacts from the device; b) run the Safari process in invisible mode and forward it to the web page with the next stage.
In other words, if looking at the diagram, https://cdn.arstechnica.net/wp-content/uploads/2023/12/trian... , it's completely "game over" once you get to the skull icon in the lower left corner, and the Safari exploit is after that.
- dunham 1 year agoYeah, lockdown mode might have handled it. If I'm reading the article right, the first step of the exploit was a PDF file sent with iMessage.
When I tried lockdown mode out of curiosity, I found that it was aggressive about blocking PDF viewing. I quickly bailed on it because I often read research papers on the web, and it switched them from viewing to downloading.
- doctorpangloss 1 year agoIt could author its format parsers in https://github.com/google/wuffs, and make them BSD-like open source to maximize adoption.
An even bigger change: It could allow users to choose their iMessage client freely. Why not open up the protocol? I’m sure a security focused client would be popular and in the grand scheme of things easy to author.
Perhaps they could open up more of the OS and apps. Perhaps their claims about the security of users and the App Store is kind of BS.
- madeofpalk 1 year agoI struggle to believe that a third party iMessage iOS app would be a security improvement, beyond Lockdown Mode https://support.apple.com/en-us/105120.
Either a third party app would still use the same vulnerable frameworks as iMessage, or they would re-implement them potentially with more vulnerabilities, or just not implement the features, which is what Lockdown Mode gives you.
- sangnoir 1 year agoOne could argue the same about alternatives to Safari, and yet Chrome has proven to be more secure than Safari (based on Pwn2Own results).
- Ar-Curunir 1 year agoYou do realize that this is an extremely complicated exploit which is not being used on the average user, right?
And being open source hasn’t prevented Android from being much more vulnerable to these kinds of exploits.
- nvm0n2 1 year agoSure. Rewrite sensitive parts of their stack in memory-safe languages. They have Swift after all. A lot of the iOS security improvements over time have really been more like mitigations that try to contain the damage when the giant pile of decades-old C gets exploited.
- stephen_g 1 year agoThat is exactly their plan. Swift could always link into C applications, and they have recently come out with C++ interoperability [1], so things like WebKit can start having parts rewritten, or new parts written from the start, in Swift. That lets them gradually replace C and C++ codebases instead of trying to rewrite everything (which sucks because, even for things much less complex than WebKit, you can have a team working for three years on a replacement and it'll have fewer features than the original had when you started).
They’re even working on an embedded subset so that microcontrollers, like those for battery management or the Secure Enclave, can run it.
- saagarjha 1 year agoThey’re working on it, but a memory-safe language doesn’t help you in some of the surface that the attackers exploited here.
- nvm0n2 1 year agoI think memory safety + integer overflow checking by default would have blocked many of these. Not the hardware mitigation disable, but getting to the point where that matters required some safety problems that can be excluded by better languages.
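To make the overflow-checking point concrete: in C an attacker-influenced length sum silently wraps, which classically yields an undersized allocation, while Swift- or Rust-style checked arithmetic traps instead. A sketch modeling 32-bit semantics (Python is used here only to simulate the wraparound):

```python
U32_MAX = 0xFFFFFFFF

def wrapping_add_u32(a: int, b: int) -> int:
    """C-style unsigned 32-bit addition: overflow silently wraps."""
    return (a + b) & U32_MAX

def checked_add_u32(a: int, b: int) -> int:
    """Swift/Rust-style checked addition: overflow raises instead."""
    s = a + b
    if s > U32_MAX:
        raise OverflowError("u32 addition overflowed")
    return s

# A length near U32_MAX wraps to a tiny value, so a buffer sized with
# wrapping_add_u32 ends up far too small for the data that follows.
assert wrapping_add_u32(U32_MAX - 1, 8) == 6
```

With checked arithmetic, the same inputs raise an error before any undersized buffer exists, turning a heap overflow primitive into a crash.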
- maldev 1 year agoIt's already $2-3 million+. Apple has amazing security, especially for the iPhone, and continuously monitors it and dishes out silent patches. For a REALLY high-level example, it restricts system calls per process and requires all calls to be signed with an Apple key, AND it restricts who you can make the system call to; these are continuously monitored and updated. Not only this, but persistence on iPhone is effectively dead, meaning you have to reinfect the device after every reboot. One of the big things you notice in the article is the use of ROP; Apple requires every executable page to be signed by them, hence these monstrous ROP chains.
- Veserv 1 year ago2-3 million dollars is not “amazing”. That is less than the cost to open a McDonalds. You can get a small business loan in the US for more than that. There are literally tens of millions of people in the world who can afford that. That is 1/5 the cost of a tank.
2-3 million dollars is pocket lint to people conducting serious business, let alone governments. It is at best okay if you are conducting minor personal business. This ignores the fact that attacks at the 2-3 million dollar range are trivially wormable. If you had actual cause to hack every phone you are only incurring marginal cents per attack. Even relatively minor attacks like targeting 10,000 people are less than one phone of cost per attack.
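Spelling out the parent comment's per-target arithmetic, using the midpoint of the quoted range:

```python
exploit_cost = 2_500_000   # midpoint of the quoted $2-3M figure
targets = 10_000
cost_per_target = exploit_cost / targets
print(cost_per_target)     # 250.0, a few hundred dollars per victim
```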
- MuffinFlavored 1 year ago> 2-3 million dollars is not “amazing”.
I don't know. $2-3m for reading code in Ghidra and throwing stuff at a wall until something sticks? Maybe some fuzzing, etc.
I get that you theoretically could find an exploit that for example, you send to 100 known wealthy people, and with it you steal saved cookies + device IDs from financial apps and then try to transfer their funds/assets to an account you control but...
Could you really pull that off 100 times before Apple catches on?
I guess you could... easily... now that I think about it.
- hnburnsy 1 year agoThat is good info, but why does Apple make it non-obvious how to reboot an iOS device? And AFAICT there is no way to schedule a regular reboot.
- hnburnsy 1 year agoNeed to restart your non responsive iPhone, hope you have some dexterity...
----
Force restart iPhone
If iPhone isn’t responding, and you can’t turn it off then on, try forcing it to restart.
1. Press and quickly release the volume up button.
2. Press and quickly release the volume down button.
3. Press and hold the side button.
4. When the Apple logo appears, release the side button.
- rmbyrro 1 year agoWe'd need to scrap decades of work in hardware and software for that.
Modern software sits on a foundation that was designed for a different era, one that didn't have the current challenges of security and scale in mind.
- stavros 1 year agoWhat do you mean "no real progress"? The price used to be $100.
- Ar-Curunir 1 year agoI mean, this is already an extremely complex chaining of exploits that requires extremely sophisticated research. I can assure you that this is not being used on the average person.
- mb4nck 1 year agoAt least the first version of the recording is now up: https://media.ccc.de/v/37c3-11859-operation_triangulation_wh...
- patrickhogan1 1 year agoKnowing more about the exfiltration component, where it sends data to a remote server, would be helpful. According to the article it's sending large audio microphone recordings. I assume a company like Kaspersky would explicitly deny all outgoing network connections and then approve them one by one.
- hnburnsy 1 year agoThere is a series of posts on this including one that details the malware payload...
- twobitshifter 1 year agoyeah, I’m wondering the same. Maybe they can’t point a finger at who did it, but there were no clues on the exfiltration?
- hnburnsy 1 year ago> yeah, I’m wondering the same. Maybe they can’t point a finger at who did it, but there were no clues on the exfiltration?
From the articles at the above link...
C&C domains
Using the forensic artifacts, it was possible to identify the set of domain names used by the exploits and further malicious stages. They can be used to check the DNS logs for historical information, and to identify the devices currently running the malware:
addatamarket[.]net
backuprabbit[.]com
businessvideonews[.]com
cloudsponcer[.]com
datamarketplace[.]net
mobilegamerstats[.]com
snoweeanalytics[.]com
tagclick-cdn[.]com
topographyupdates[.]com
unlimitedteacup[.]com
virtuallaughing[.]com
web-trackers[.]com
growthtransport[.]com
anstv[.]net
ans7tv[.]net
- jeffreygoesto 1 year agoSome agencies will be very sad now...
- barryrandall 1 year agoThose will be the most delicious tears wept in all of 2023.
- cald0s 1 year agoNot really, the decision to do this is calculated. I'm sure many more tears would be shed by those affected if they knew they were compromised...
- xvector 1 year agoDoes Lockdown Mode prevent agains this?
- 542458 1 year agoI think lockdown drops most iMessage features, so I would suspect the answer is yes. But as far as I can tell, lockdown prevents use of MDM, so it might be a net negative for security… instead, using the MDM policy that disables iMessage might be preferable.
- rdl 1 year agoLockdown prevents new enrollment in MDM/adding profiles, but you can use an MDM you're already enrolled in. It's pretty good from a security perspective.
What I dislike is that it applies to all devices in your iCloud profile, and is overall pretty intrusive/annoying. Best practice, if you're going to use it, is probably to have multiple iCloud accounts (maybe in a "family" for license sharing) and enable Lockdown Mode on only one of them, for the more secure devices. I tried using it for all of my devices last year and it was pretty unusable.
(The main pain point was how it handles insecure wifi networks; I consider ~all networks insecure regardless of wifi encryption, but not being able to save or otherwise autoconnect to a hotel network with an iPad with nothing on it, etc. was the last straw. With a decent travel router it's fine.)
- Obscurity4340 1 year agoYou can still supervise which allows for that all the same, IIRC
- halJordan 1 year agoIt likely does. Lockdown mode stops most iOS auto-processing of message attachments, and this was delivered via a message attachment.
- g-b-r 1 year agoAre hashes of the data ever used in known chip debugging features?
Since they're supposed to be disabled in production, what would be their point?
I'm no electronic engineer, but isn't it best for them to be fast and simple, to reduce the chance that they cause interference themselves..?
And isn't it highly unlikely that an attacker in the supply chain (TSMC??) would be able to reliably plant this in all Apple chips from the A12 to the A16 and the M1?
- anotherhue 1 year agoMore important than getting their newly found exploits, you get to know which of yours might be compromised. Prevents counterintelligence.
- Luc 1 year agoThis made me laugh: "Upon execution, it decrypts (using a custom algorithm derived from GTA IV hashing) its configuration [...]"
From https://securelist.com/triangulation-validators-modules/1108...
- MagicMoonlight 1 year agoThat’s going to be a Chinese tool. Knowing the hardware that intimately and having all these convenient undocumented areas to play with is exactly the kind of thing you can put in place if you control the manufacturing.
- dang 1 year agoRelated:
4-year campaign backdoored iPhones using advanced exploit - https://news.ycombinator.com/item?id=38784073
(We moved the comments hither, but the article might still be of interest)
- apienx 1 year agoReminder that Lockdown Mode helps reduce the attack surface of your iPhone. It also helps tremendously with detection. https://support.apple.com/en-us/105120
- chatmasta 1 year agoI've had Lockdown mode enabled for a few months. It's great, and not much of an annoyance at all. You do need to be fairly tech-savvy and remember that it's enabled, because sometimes something silently breaks and you need to opt-out of it (which you can do for a specific website, or WebViews within a specific app). And it won't auto-join "insecure" WiFi which can be annoying at a hotel, but frankly it's probably for the best. Also you won't receive texts with attachments in them, which is usually desirable but breaks workflows like activating a new SIM card while traveling (it's possible this was broken for me due to some other setting to exclude texts from unknown numbers).
The most noticeable difference is that SVG elements (?) are replaced with emojis. I'm not sure how that fallback works but it's funny to see buttons have seemingly random emojis embedded in them. (Does anyone know the details of how this replacement is done? Is it actually glyph fonts being replaced, not SVG?)
- LanzVonL 1 year agoIsn't the most obvious answer that Apple, like other US tech firms such as Google, simply creates these wild backdoors for the NSA/GCHQ directly? Every time one's patched, three more pop up. We already know Apple and Google cooperate with the spy agencies very eagerly.
- freeflight 1 year agoI consider that plausible with Google, due to Google's funding history [0], but Apple is afaik way less "influenced", and the way this pwn was pulled off could also have been achieved by compromising Apple's hardware supply chain rather than Apple itself.
Particularly considering how, in the past, Apple has been very willing to be on the receiving end of negative headlines for not giving US agencies decrypted access to iCloud accounts of terrorist suspects. With Google, I don't remember it ever having been the target of such controversy, suggesting they willingly comply with all incoming requests.
[0] https://qz.com/1145669/googles-true-origin-partly-lies-in-ci...
- jsjohnst 1 year ago> We already know Apple and Google cooperate with the spy agencies very eagerly.
The evidence clearly indicates otherwise…
- freeflight 1 year agoHow so? Any competent intelligence service will not just depend on the goodwill of a corporation to secure access to assets and intelligence.
If they cooperate that's good and convenient, but that does not mean the intelligence service will not set in place contingencies for if the other side suddenly decides not to play ball anymore.
- jsjohnst 1 year agoI said nothing about anything you stated, that’s all clearly possible, I specifically refuted the unsupported claim that Apple “eagerly cooperate with spy agencies”, where there’s ample evidence to support an opposite claim.
- Aerbil313 1 year agoAhem, Snowden, PRISM anyone?
- jsjohnst 1 year agoAhem, you mean you have a single example, from a decade ago, one where Apple was hardly a key player (hence why Apple didn’t sign onto PRISM until half a decade after Yahoo, Microsoft, Google, et al.), as conclusive evidence of “eagerness to partner with spy agencies”, despite numerous public cases where they’ve done the opposite… got it!
- Liebnitz 1 year ago>Apple declined to comment for this article.
- guwop 1 year agoCrazy!
- cf1241290841 1 year agoYears ago i argued about the danger of pdfs with another account and was told not to be a paranoid nutjob.
Told you so.
edit: The fact that this obvious statement gets upvoted above the Apple backdoor at 22:40 of the talk also says a lot.
edit1: https://imgur.com/a/82JV7I9
- hcarrega 1 year agoThere's a talk at CCC today
- cedws 1 year ago>This attachment exploits vulnerability CVE-2023-41990 in the undocumented, Apple-only TrueType font instruction ADJUST for a remote code execution. This instruction existed since the early 90’s and the patch removed it.
This is getting ridiculous. How many iMessage exploits have there now been via attachments? Why aren't Apple locking down the available codecs? Why isn't BlastDoor doing its job?
This is really disappointing to see time and time again. If a simple app to send and receive messages is this hard to get right, I have very little hope left for software.
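One structural fix for this class of bug is an opcode allowlist in the font bytecode interpreter: refuse to execute any instruction outside the documented set, so vendor-private leftovers like ADJUST become unreachable. A schematic sketch (the opcode values below are illustrative, not the actual TrueType encoding):

```python
# Illustrative opcode values only; not the real TrueType instruction set.
DOCUMENTED_OPCODES = frozenset({0x00, 0x01, 0x2B, 0x2C, 0x2D})

def validate_glyph_program(bytecode: bytes) -> bool:
    """Reject any glyph program containing an undocumented opcode
    before the interpreter ever runs it."""
    return all(op in DOCUMENTED_OPCODES for op in bytecode)

assert validate_glyph_program(bytes([0x00, 0x2B, 0x2D]))
assert not validate_glyph_program(bytes([0x00, 0x8F]))  # unknown: reject
```

Real TrueType programs carry inline operands after push instructions, so an actual validator has to decode instruction lengths rather than scan bytes; this sketch glosses over that.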
- akira2501 1 year agoIf I've read the rest of the documentation correctly, the exploit is actually triggered from an attached ".watchface" file, which of course, has the font vulnerability in it.
I'd like to meet the person who suggested even sending .watchface files as iMessage attachments in the first place. What were you thinking? Did you not have a large enough attack surface already?
- codedokode 1 year agoWell, at least the file extension honestly warns you that the file might watch after your face.
- throwoutway 1 year agoIf I were an embassy employee (covert or overt), I'd want zero iMessage features beyond ASCII and the thumbs-up/down reactions. No attachments, no GIFs, no games, no Apple Pay, no easter eggs, no rich text
Apple really needs a paranoid mode
- et1337 1 year agoLockdown mode exists: https://support.apple.com/en-us/105120
- xvector 1 year agoyou can't use this with MDM unfortunately, so useless for govts, corporations, etc.
- nvm0n2 1 year agoiOS has a reputation for having the best security, but how many times have Android/WhatsApp had these sorts of silent-instant-root exploits via invisible messages? I don't remember it happening. Maybe the strategy of writing lots of stuff in Java is paying off there.
- Brybry 1 year agoAndroid has had zero click exploits. For example, Stagefright [1]
And even better, there are plenty of old Android phones out there which will be vulnerable to various exploits because of weak OTA update support policies.
- kernal 1 year agoSigh… there has never been a 0-day Stagefright exploit in the wild. And even if there were, it wouldn't have worked on all Android devices due to the OS differences among OEMs.
Also, there are plenty of old iPhones that do not receive updates anymore and are just as vulnerable, so I'm not sure why you needed to get that in.
- azinman2 1 year agoWhatsApp has had exploits. See https://gbhackers.com/new-whatsapp-0-day-vulnerabilities/amp...
- nvm0n2 1 year agoYes, but that wasn't a zero day. WhatsApp's own team found it, and it wasn't a zero-click exploit; you had to be in a video call with the attacker.
- twobitshifter 1 year agoi wonder why attachments would ever be loaded from unknown contacts
- pushcx 1 year agoIt’s quite unfortunate that Apple doesn’t allow users to uninstall iMessage, it seems to be the infection vector for advanced threats like this, NSO group, etc. Presumably it’s to avoid the support burden, but they could gate it behind having Lockdown Mode enabled for a week or something to shake out the vast majority of mistaken activations.
- transpute 1 year ago> unfortunate that Apple doesn't allow users to uninstall iMessage
It can be disabled via Apple Configurator, https://news.ycombinator.com/item?id=38785311
- Almondsetat 1 year agoWhat does "uninstall iMessage" mean? You can disable iMessage right in the settings so you only receive SMSes.
- dilyevsky 1 year agoWhich is what lockdown mode already does
- hedora 1 year agoActually lockdown is better. It leaves E2E encryption alone, but restricts attachment types, which should be enough to block the initial exploit in the chain.
Disabling iMessage would fall back to SMS, allowing messages to be snooped / modified in transit.
Hopefully they’ll also have a way to disable RCS, since it allows attackers to modify messages, and also has a larger implementation attack surface than SMS.
- sevg 1 year agoNo, Lockdown Mode doesn't disable iMessage.
"Most message attachments are blocked and some features are unavailable."
iMessage with blue bubbles still works in Lockdown Mode. I think GIFs don't display properly and certain other attachments, but I can share photos, audio clips and video so I otherwise don't really notice that Lockdown Mode is enabled.
- stefan_ 1 year agoI remember people were very passionately arguing iMessage can only be secure if the only client is the Apple sanctioned one
> the unknown attackers kept their campaign alive simply by sending devices a new malicious iMessage text shortly after devices were restarted.
- hedora 1 year agoThere are different aspects of security here. iMessage is tied to a physical device, so if you want to spam people, you have to purchase and burn through iPhones.
Rate limiting phishing attacks is certainly a useful security feature, but it does nothing to protect against targeted attacks.
- teruakohatu 1 year agoCan someone explain to me why we can load vast quantities of untrusted code and a wide variety of image formats in our browsers all day long and be mostly safe today, but somehow even first-party messenger apps seem to be relatively easily compromised? Why can't messenger apps be sandboxed as well as browsers?
- madeofpalk 1 year agoSending these through messaging apps is appealing because that usually requires zero user action - you just send a message and the device runs the exploit as it generates preview thumbnails.
But browser exploits require the user to visit an infected website, which is much tougher. If I receive an email or SMS with "visit applesupport.info" I'm not going to click it.
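The zero-click property comes precisely from parsing attacker-controlled bytes with no user action. One defensive pattern, roughly what Lockdown Mode implements for attachments, is to defer all parsing for unknown senders until the user explicitly taps. A sketch with hypothetical message fields:

```python
def preview_policy(msg: dict, known_contacts: set) -> str:
    """Decide whether an incoming message's attachment is auto-parsed.
    Auto-generating thumbnails from fonts/PDFs/.watchface files sent by
    strangers is exactly the zero-click surface."""
    if not msg.get("attachments"):
        return "render-text"
    if msg["sender"] in known_contacts:
        return "render-thumbnail"         # narrower, still not risk-free
    return "placeholder-until-tapped"     # parse nothing automatically
```

This doesn't remove the parser bugs; it just forces at least one user interaction before attacker data reaches them, downgrading zero-click to one-click.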
- saagarjha 1 year agoNote that the second half of this exploit chain involves going around and exploiting the web browser.
- riwsky 1 year agothis exploit chain involved a browser vulnerability; your premise is flawed
- nvm0n2 1 year agoIt's all relative. Chrome has plenty of sandbox escapes. Microsoft recently found one where Chrome was passing strings from JS straight into the Windows TTS engine, which turned out to be parsing XML from it with a C++ parser that was full of memory errors.
- peddling-brink 1 year agoDo you believe your other messaging apps lack vulnerabilities? What is most popular will always be most picked on.
- I_Am_Nous 1 year agoIn the face of this kind of threat, it's pretty obvious why Apple treated Beeper as a security risk and took appropriate measures to secure iMessage.
- avidiax 1 year agoBeeper is the user's choice. And Apple is preventing other companies from providing a more secure iMessage alternative, e.g. one that doesn't even parse messages from people not in the contact list, or doesn't even parse anything without a click, etc.
Apple has had so many zero-click exploits in iMessage, yet they insist that you have to use Lockdown mode to do anything about it, and then proceed to bundle Lockdown mode with lots of potentially unwanted behavior.
I don't think there's any way to claim that Apple is just doing whats in the customer's best security interest.
- I_Am_Nous 1 year ago>Beeper is the user's choice.
Me deciding to ride the subway to work for free is a user's choice, but that doesn't mean it's right. Using infrastructure for free because I feel like it is certainly my choice but I can't justify anger when someone makes me pay to use it since I should have paid in the first place. Currently Apple doesn't run iMessage as an open standard so it runs in "authorized riders only" mode.
>I don't think there's any way to claim that Apple is just doing whats in the customer's best security interest.
This isn't what I claimed. I claimed Apple treated unauthorized 3rd party access to their infrastructure as a security risk and worked to shore up that risk. As you pointed out, there have been plenty of zero-click exploits in iMessage. Limiting the devices sending iMessages increases security. I believe Apple doesn't allow iOS VMs in general for the same reason.
- madeofpalk 1 year agoI don’t think that’s clear at all. I imagine it’s still trivial for attackers to send specially crafted one-off payloads.
- I_Am_Nous 1 year agoThe attack vector is still smaller if Apple restricts iMessage to official devices only compared to any rooted Android phone being able to spam iMessage payloads.
- saagarjha 1 year agoThe security model is basically orthogonal.
- munk-a 1 year agoThey gotta, gotta, have those blue bubbles. Some teenagers fight to get an overpriced phone solely to avoid the deep deep shame of having a green bubble when chatting.
If apple is forced to shut down iMessage being the exclusive option and have some pure SMS application they might see a sudden noticeable drop in market share.
- vultour 1 year agoTeenagers wanting blue bubbles and people looking to uninstall iMessage because it's a threat vector are two completely disjoint sets of people.
- munk-a 1 year agoAbsolutely - but the business interest of wanting to keep teenagers on iPhones absolutely would impede Apple from allowing users to uninstall the application.
- paulmd 1 year agoBlue-bubbles-bad syndrome. Gotta bring it up whenever humanly possible.
Nvidia has a very similar green-man-bad syndrome going on too. As the length of an HN discussion on Nvidia increases, the probability of someone mentioning that Linus said "fuck you, Nvidia" approaches 1, even when it's irrelevant to the topic, or that he's a mercurial asshole who's said a whole lot of things.
The casual fanboyism disrupts all discourse on these topics because there’s a large minority of users who have adopted what PG describes as “hater-ism” and allowed it to dominate their thinking on a topic. Negative parasocial attachment is the same process as positive parasocial attachment and just as problematic, but largely never called out.
http://www.paulgraham.com/fh.html
In short: lotta fanboys on these topics who don't even realize they're fanboys/adopting fanboy frames, because they don't realize that anti-fanboys are still parasocially attached too. And we've casually accepted the low level of discourse on these topics, and it pollutes the whole discussion of a lot of interesting topics because of who's doing them.
- Neil44 1 year agoThey knew exactly what they were doing when they chose that nice blue and that cheap looking green.
- marcellus23 1 year agoNo they didn't, because the green was first in 2007, when iPhone only supported SMS. It was 4 years later that iMessage launched. The conversation probably went like:
"Okay well, now that we're launching an alternative to SMS, how will we distinguish iMessage messages from regular SMS messages?"
"Hm, well, SMS messages are green, so what if we picked another color?"
"Yeah okay, blue? ¯\_(ツ)_/¯"
"Sounds good, mock it up and send it to the engineers"
edit: The reason for picking green originally was probably because all the "communication"-related apps had a green color scheme, including Messages. This persists today — the app icons for Phone, Messages, and FaceTime are all green.
- sadjad 1 year agoNever forget the icon they used for Windows servers: https://i.stack.imgur.com/5rYVr.png
- bdcravens 1 year agoThey've already announced that they will be adding RCS support.
- munk-a 1 year ago... And they've already announced[1] that they will be retaining the exclusive blue bubble for iMessage messages for... reasons? The green/blue bubble distinction will continue even when there is no technical difference between messages.
- kornhole 1 year agoAsking who had the motive to target Russian government officials, the knowledge of the attack vectors, a history of doing so, and the technical and logistical ability to perform it leads Kaspersky and me to the only rational conclusion: that Apple cooperated with the NSA on this exploit. I assume they only use, and potentially burn, these valuable methods in rare and perhaps desperate instances. I expect the Russian and Chinese governments' ban on the use of iPhones will not be lifted and will expand to other governments. Similarly to how the sanctions have backfired, this tactic will also backfire by reducing trust in Apple, which is the core of their value proposition.
- hedora 1 year agoThis looks like a typical modern security hole. There’s a giant stack of layers of unnecessary complexity, and all of them are garbage. The composition is also garbage.
All the NSA needs to launch attacks like this is to get a bunch of mediocre engineers to layer complexity atop complexity. They don’t need Apple to know about the attack.
Honestly, they probably didn't actually have to do anything to get Apple (or any other large company) to pwn itself, by hiring and promoting engineers and project managers for adding features but not for improving product stability or software correctness, or for deleting forgotten legacy cruft.
Anyway, the most effective approach to sabotage is to be indistinguishable from incompetence, so it’s hard to say if the people responsible for the vulnerability chain were working with the NSA or not.
- kornhole 1 year agoYou make a good point that a team of mediocre engineers could be responsible for the vulnerabilities. Those doing code review and change control would also need to be mediocre. It could be a combination of compromised and mediocre people, coordinated by a manager in service of the apparatus. To keep it quiet, knowledge of the operation would be best kept from going all the way up the ranks.
- kornhole 1 year ago
- pvg 1 year agoleads Kaspersky and myself to the only rational conclusion: that Apple cooperated with the NSA on this exploit.
Kaspersky reaches no such conclusion. That's from an FSB release.
- kornhole 1 year agoIt is true that Kaspersky's policy is not to make attributions without concrete proof. It is the responsibility of intelligence agencies to make the call based on a preponderance of evidence. The video linked above narrows suspicion to a very few options. The attackers left a list of Apple IDs in the code in one place to check against. Kaspersky provided them to Apple, and Apple did not respond with any details about the users of those Apple IDs. One of the main vulnerabilities has been available for over ten years.
- pvg 1 year agoWhat is more true is that the article posted explicitly says the exact opposite of what you suggested upthread - a fact you should acknowledge.
- pvg 1 year ago
- SXX 1 year ago[flagged]
- fortran77 1 year agoThis is a complete lie.
- fortran77 1 year ago
- kornhole 1 year ago
- tuetuopay 1 year ago> leads Kaspersky [..] to the [..] rational conclusion: that Apple cooperated with the NSA on this exploit
Doesn't the article state precisely the opposite? While the FSB accuses Apple of cooperation, Kaspersky does not have any reason to believe so, especially since the attack does not look like the work of any known state actor.
- LargeTomato 1 year agoKaspersky can't prove anything so they opted to present the facts. They didn't state any opinion about who they believe is behind the incident.
- kornhole 1 year agoKaspersky only said they could not prove it. They did not draw a conclusion but laid out the evidence.
- LargeTomato 1 year ago
- lame-robot-hoax 1 year agoHow did sanctions backfire?
- kornhole 1 year agoGermany's economy shrank last year while Russia's grew. Dedollarization has accelerated, which will impact the US not immediately but in the near future.
- exceptione 1 year agoYou are talking about an unsustainable war economy that is overheating. Soaring inflation, brain drain, and a falling ruble are just the short-term symptoms.
--> https://www.reuters.com/breakingviews/russian-war-economy-is...
If you would truly believe what you say, you should convert all your savings from dollar to rubles. No serious economist would think that doing so would be a masterstroke though.
- jacooper 1 year agoTo be fair, other European countries are doing better. It's a Problem specific to Germany and their mishandling of the energy shock.
- pas 1 year agothe dollar as the reserve currency already has a serious impact on the US (ie. the big upside is that it allows the US to borrow for very cheap, but the nasty downside is keeping the purchasing power of the USD artificially high, which is not great for the non-finance sectors of the US, not great for people who work in those sectors, and double-plus-not-great for US exports [which are not the dollar itself]), basically it's the "natural resource curse" again
that said, dedollarization is unlikely even in the mid-term https://www.noahpinion.blog/p/threats-to-the-dollar-are-just...
- exceptione 1 year ago
- kornhole 1 year ago
- dilyevsky 1 year agoThat’s only “rational” for Kaspersky because, in their world, they can’t function without having actual intelligence operatives on staff. I seriously doubt the NSA needed help here
- CharlesW 1 year agoMy adjacent conspiracy theory is that the NSA and other state agencies do both original research and pay hackers for exploits that Apple hasn’t yet discovered.
- doakes 1 year agoThe Darknet Diaries episode "Zero Day Brokers" goes into this. Apparently Argentina hosts a lot of outsourced exploit development. Here's the transcript: https://darknetdiaries.com/transcript/98/
- ironyman 1 year agoThey have the budget to do both easily.
Like how the NRO used to design and launch satellites that cost more than aircraft carriers but is now working closely with private companies like Maxar to find more economical solutions.
https://www.maxar.com/press-releases/nro-awards-maxar-a-10-y...
- ls612 1 year agoThing is the fundamental laws of physics give us a good idea as to the capabilities of the NRO given a certain launch platform. Like how when scientists in the late 70s were figuring out the best telescope they could launch they ended up with almost the exact specs of the Keyhole spy satellites, a spare of which became Hubble.
- ls612 1 year ago
- kornhole 1 year agobut why pay hackers to try to find a backdoor when you can just walk in the front door and use the carrot and stick to get what you want?
- CharlesW 1 year agoHere's my serious answer that still works if you hate Apple.
Your question assumes two things: (1) That Apple intentionally leaves vulnerabilities in the stack, and (2) that Tim Apple is occasionally willing to share this candy with governments.
Having worked at Apple, I don't believe (1) can be true. Not only is it extremely unlikely that it could be kept a secret, but Apple's thing is "obsessive control", a mindset borne of organizational PTSD which originated with its near-death experience in the mid-to-late 90s. The Apple I know would not risk intentionally leaving back doors unlocked for enemies to find and leverage.
As for (2), the existence of a "Binder of Vulns" by nation-states would expose Apple to existential risk. It's possible that it could be kept secret within Apple's walls if it were never used, but once shared with a government it could not be contained. The splash damage of such a discovery could easily kill Apple.
- StayTrue 1 year agoThis happened at a company I worked at so it’s not out of the question. I figured it out by reverse engineering and quit on the spot. They tried to tell me I’d never work again if spying on users was a dealbreaker. They showed me a natsec slide deck that identified other collaborating companies as a way of making their point. Among them was Apple.
- CharlesW 1 year ago
- doakes 1 year ago
- 1 year ago
- hedora 1 year ago
- nothercastle 1 year agoA state actor attacking another state actor. Incredibly sophisticated, and it just goes to show that it basically can't be defended against
- whartung 1 year agoIt can be defended against. The catch is that the only way to harden those defenses is to toss the product out into the world and let folks poke holes in it.
This was an extremely complex exploit. It was complex because of all of the defenses put in place by Apple and others. It required State level resources to pull it off.
We also don't know what, if any, external skullduggery was involved in the exploit. Did someone penetrate Apple/ARM and get internal documentation? Compromise an employee? Did Apple/ARM participate? Maybe they just dissolved a CPU cover, and reverse engineered it.
But now that cat is out of the bag, and it's been patched.
Progress.
As many folks say, when it comes to dealing with security, consider the threat model. Being under the lens of an advanced state is different from keeping your younger brother out of your WoW account.
This exploit wasn't done by a bunch of scammers selling "PC Support". That's the good news.
When stuff like this happens, I always go back to Stuxnet, where not only did they breach an air gap, they went in and did a sneak and peek into some other company to get the private signing keys so that their corrupted payload was trusted. There's a difference between an intelligence operation and a "hack".
Making stuff like this very expensive is part of the defensive posture of the platform.
- ogurechny 1 year ago> Compromise an employee?
An official visits the headquarters and informs them that certain employees need to be hired into certain departments “to help with national security”. End of story.
What even makes people think that executives, whose job is to deal with everyone in order to “do business”, are their long-distance friends, or some kind of punks who'd jump on the table and flip birdies into the faces of people making such an offer?
- hnburnsy 1 year ago>It was complex because of all of the defenses put in place by Apple and others.
I don't know jack about hardware, but it would seem obvious that when one designs a chip, you make sure it does not have 'unknown hardware registers' or unknown anything when you get it back from the manufacturer.
This makes everything written on this page worthless...
>Prevent anyone except you from using your devices and accessing your information. https://www.apple.com/privacy/control/
- tuetuopay 1 year ago> I don't know jack about hardware but it would seem obvious that when one designs a chip, you make sure it does not have 'unknown hardware registers' or unknown anything when you get it back from the manufacture.
well, you are in trouble then. All modern hardware has such hidden parts, which are most of the time referred to as "undocumented" rather than "unknown". I know this seems pedantic, but to the public eye, anything undocumented is unknown. What makes these registers special, however, is that they are not used at all by public software, making them truly unknown: one can only guess at their use, or even their mere existence.
- tedunangst 1 year ago> I don't know jack about hardware
Could have stopped writing right there.
- aidenn0 1 year ago> I don't know jack about hardware but it would seem obvious that when one designs a chip, you make sure it does not have 'unknown hardware registers' or unknown anything when you get it back from the manufacture.
Either Apple or Arm has employees that know what these registers do. They are likely used for debugging and/or testing.
A lot of those registers can do very interesting things, since e.g. fault-injection is an important part of testing. A security-minded implementation will allow these to either be fused off or disabled very early in the boot process. The latter is probably more common, and any disconnect between the hardware and software side can cause this step to get missed.
- dagmx 1 year agoYou’re assuming the registers are unknown to the chip designer.
The article doesn’t state that. It says it’s undocumented for the security researchers.
- tuetuopay 1 year ago
- ogurechny 1 year ago
- kornhole 1 year agoHowever, this one seems to have been coordinated with Apple. A nonprofit, nonaligned, independently managed project could be more immune to pressure from the national security apparatus. I think it is incredibly naïve to think that the largest US corporation does not cooperate. This is why I keep donating to GrapheneOS.
- 1 year ago
- whartung 1 year ago
- haecceity 1 year agoThis wouldn't be a zero-click if iMessage didn't parse attachments without user consent.
- ThinkBeat 1 year agoAttack by CIA/NSA?
They have the best possible insight into the hardware and software at all stages I should think.
- jacooper 1 year agoThis really looks like the NSA just flexing their muscles and their vulnerability arsenal.
- cf1241290841 1 year agoAnd it might motivate state actors to get their supply chains in check. After all, what's the difference between a secure coprocessor and a silicon bug?
- 1 year ago
- cf1241290841 1 year ago
- joanne123 1 year ago[dead]
- fragmede 1 year ago[flagged]
- sampa 1 year ago[flagged]
- chatmasta 1 year agoReading between the lines of TFA, it seems the researchers may also suspect that to be the case:
> Our guess is that this unknown hardware feature was most likely intended to be used for debugging or testing purposes by Apple engineers or the factory, or that it was included by mistake. Because this feature is not used by the firmware, we have no idea how attackers would know how to use it.
However, keep in mind that this level of "bugdooring" is possible without Apple's explicit cooperation. In fact, the attackers don't even need to force a bug into the code. It would probably be sufficient to have someone on staff who is familiar with the Apple hardware development process (and therefore knows about the availability of these tools), or to simply get a copy of the firmware's source code. Sophisticated attackers likely have moles embedded within Apple. But they don't even need that here; they could just hire an ex-Apple employee and get all the intel they need.
- sampa 1 year agowell of course nobody would have NSA_friendly_override() in the source
plausible deniability is essential in such cases, hence the term bugdoor
- halJordan 1 year agoThis is the same conspiracy mindset of flat earthers, and you deserve your own netflix mockumentary over it.
Because a bug is a bug, its very nature means you cannot prove it isn't malicious, so you take it as positive proof of malice and sit pretty because no one can prove a negative.
- halJordan 1 year ago
- xvector 1 year agoI always get weird consultants reaching out to me on LinkedIn asking for deets on my org's layout and - curiously - our tech stack. They offer something like $500+ an hour but I don't want to be complicit in some compromise. Private intelligence is such a fascinating industry.
- jetrink 1 year agoSince they've gone to the trouble of protecting it with an insecure hash, couldn't they also have designed this hardware feature so that it could be completely disabled until the device is rebooted? This vulnerability doesn't persist through reboots, so it would be sufficient to have the firmware lock the feature out during startup outside of development or manufacturing contexts.
- withinboredom 1 year ago> This vulnerability doesn't persist through reboots
I suspect, once you stop receiving data from the device, you just text it the invisible message every few minutes until you start getting data again.
- withinboredom 1 year ago
- sampa 1 year ago
- halJordan 1 year agoI just don't get this mentality. Here is proof positive (if you believe the attribution) that the NSA is using exquisite and exotic techniques to force their way into iPhones, and you look at it and come to the exact opposite conclusion: that Apple is letting them into the iPhone. It's not a backdoor if you're smashing in the window.
- ElMocambo_x4 1 year agoSure, it's not like we've been made aware of a history of backdoors through the Snowden or Shadowbrokers leaks...
- dagmx 1 year agoYeah I don’t understand these conspiracy theories.
If the NSA had partnered with Apple, they sure as hell would have asked for something much more convenient and resilient.
I think it’s just down to a lot of people not understanding hardware and falling back to “magical” thinking
- ElMocambo_x4 1 year ago
- photochemsyn 1 year agoBased on past history, it would be more surprising if Apple wasn't actively cooperating with the NSA, as was the case with PRISM (wiki):
> "The documents identified several technology companies as participants in the PRISM program, including Microsoft in 2007, Yahoo! in 2008, Google in 2009, Facebook in 2009, Paltalk in 2009, YouTube in 2010, AOL in 2011, Skype in 2011 and Apple in 2012. The speaker's notes in the briefing document reviewed by The Washington Post indicated that '98 percent of PRISM production is based on Yahoo, Google, and Microsoft'"
With the rise of end-to-end encryption in the wake of the Snowden revelations, this put large tech corporations in a bind, given the conflict between consumer desire for secure, snoop-proof devices and government desire for backdoor access. Pressure might have been applied through government contracting decisions, so no cooperation == no big government contract. The general rise of end-to-end encryption also meant that things like deep packet inspection along the trunk no longer worked, putting a premium on breaking into devices to install keyloggers etc.
All the fear of China doing this with Huawei (probably well-justified fear) may have arisen in part as projection by politicians and insiders who knew the US government was doing it already with Apple, Android, Intel, ARM, etc. The US government has certainly retained legalistic justification for such behavior, even though the Act expired in 2020[1]. Also, corporations have been given retroactive immunity for similar illegal activities before [2], so Apple has that precedent to go by.
[1] https://www.cjr.org/the_media_today/section_702_renewal_pres...
[2] https://www.aclu.org/news/national-security/retroactive-tele...
- 1 year ago
- WhackyIdeas 1 year agoNSA will have the same special relationship with Apple as they do with AT&T.
- chatmasta 1 year ago
- gustavus 1 year ago[flagged]
- woodruffw 1 year agoThere's a fundamental category error at play here: exploit chains like this one and the one behind FORCEDENTRY[1] cost millions, if not tens of millions, of dollars to discover and weaponize, even before operationalization.
The people finding and building these chains are doing so as part of nation-state intelligence operations; they go well beyond what any reasonable civilian threat model contains.
Put another way: if someone in a competent nation state's IC decides that you're worth $10+ million dollars to compromise, they are going to get you. This is true whether you have an Android, an iPhone, or a Tamagotchi. The only thing that sets Apple apart here is that they've historically beaten Google to the punch on mitigations for these kinds of exploits. But from a threat modeling perspective, this attack is not comparable to the kind that most people have to deal with. Treating it as indicative of an overall security differentiator will not help you make ordinary security decisions, because anybody who gets this kind of attention will be Mossad'ed upon[2].
[1]: https://en.wikipedia.org/wiki/FORCEDENTRY
[2]: https://www.usenix.org/system/files/1401_08-12_mickens.pdf
- stefan_ 1 year agoSure they do, and yet at the bottom of them we keep finding... iMessage. Which is like a funnel that takes untrusted external input and feeds it into various ancient, unmaintained native code blobs that were thrown into iOS for "time to market". This time it's a 90s Apple extension to TrueType in a 90s Apple library that presumably no font on an iPhone actually uses; last time it was the 90s fax-machine image compression algorithm in a never-updated open source library. You see, the full exploit cost many, many millions, but at the bottom there are entirely self-inflicted basic failures.
It would be so great if someone at Apple could get the buy-in to clean out this zoo but try explaining that to a product manager at these places.
- avianlyric 1 year ago> It would be so great if someone at Apple could get the buy-in to clean out this zoo but try explaining that to a product manager at these places.
It’s happening! Admittedly it’s happening slowly, but it is happening. PostScript support recently got stripped out of macOS and iOS explicitly because the security risk was too great, and the effort to make the parsers and renderers safe was greater than any residual benefit from the PostScript format.
It also looks like the “fix” for the TrueType exploit was simply to strip out the ancient extension, because it’s not used anymore. As for why that didn’t happen before now, that’s probably just because nobody knew it still existed.
- woodruffw 1 year agoAbsolutely no disagreement there. iMessage's attack surface is ludicrously large for the actual behavior it delivers on the average user's phone.
- avianlyric 1 year ago
- eviks 1 year agoIt may cost a million, but it doesn't follow that every use(r) costs the same (could even also call this a category error).
Neither is "going to get you" a given; maybe another agency is in charge of the alternative methods of getting you, and they have different priorities that don't include your target (or the alternative ways are much more expensive, or too slow to be worth it)
- woodruffw 1 year agoThe point is that it's incorrect to think of the US (or any other country's) IC as a force of nature, blasting out 0days to random civilians just for kicks. These things are expensive, very expensive, and are carefully orchestrated. They don't look anything like the average civilian's security breach, which is somewhere between "accidentally leaked their own password" and "TSA asks you to unlock your phone."
- woodruffw 1 year ago
- ogurechny 1 year agoYour “threat model analysis” takes for granted that a “civilian” is a billion times less important than a “nation-state”. It makes no sense to waste any time analyzing anything after such a conclusion. Therefore, something is wrong here.
- woodruffw 1 year agoI think you've misunderstood. The point was that there are (to simplify) two different threat models at play here: one where your most powerful adversary is somewhere between your family and domestic law enforcement, and another where you are worth $10+ million to a nation state.
99.99% of the world lives in threat model 1; our goal as security minded people is to protect these people. These people want general purpose networked computers in their pockets.
0.01% of the world lives in threat model 2; our goal is also to protect these people. But these people don't get protected while also having general purpose networked computers in their pockets.
Both groups are civilians, and both deserve security. But they also have different demands; if Apple forced Lockdown Mode's usability restrictions onto a billion people tomorrow, a large percentage of them would switch to materially less secure hardware and software vendors.
- woodruffw 1 year ago
- matheusmoreira 1 year agoWhy is it that everyone balks at including these shadowy government agencies in threat models? It feels like people just don't want the heat. Would people just give up if it was some corrupt narcostate instead?
They've proven numerous times they couldn't care less about the rights of their own citizens. The US agencies in particular can't even muster any respect for their own allies. I don't even want to imagine what they feel justified in doing to foreigners. They're basically a threat to everyone on earth at this point and we all need the ability to defend against people like them.
So it costs millions to compromise someone? We need to find ways to make it cost billions then. Then we make it cost trillions. They should have to commit crimes against humanity in order to get anyone at all.
- woodruffw 1 year agoNobody's balking at it. Apple and Google both dedicate significant engineering efforts towards making these kinds of exploit chains even more expensive and unreliable. See for example Lockdown Mode in iOS 16.
The point is this: good security means being able to intelligibly state your threat model and respond to its specific capabilities. Failing to do this results in all kinds of muddied thinking, making it harder to defend against more quotidian adversaries. If your threat model genuinely involves the US IC, then turning on Lockdown Mode is about the best you can do short of throwing your phone in the ocean. By all appearances, that would have prevented this chain.
- woodruffw 1 year ago
- saagarjha 1 year agoThere haven’t really been all that many hardware exploits for us to judge Apple on this, have there?
- woodruffw 1 year agoNot that I know of. There are other hardware-ish exploits (like checkm8), but I think most have been purely software.
(Hopefully what I said wasn't interpreted as a value judgement about hardware security specifically -- the only point I was trying to make is that ICs spend significant resources discovering exploits on all of these platforms.)
- woodruffw 1 year ago
- stefan_ 1 year ago
- FergusArgyll 1 year ago> “Due to the closed nature of the iOS ecosystem, the discovery process was both challenging and time-consuming, requiring a comprehensive understanding of both hardware and software architectures... " -Kaspersky researcher Boris Larin
supports your point, but it's not an easy argument to win either way. It's "everyone can see it, so the good guys will find it first" vs. "bad guys have a harder time discovering vulns, but once they do, they have gold"
- manuelabeledo 1 year agoTo be fair, that was just Kaspersky taking a jab at Apple, after being absolutely gutted by hackers because of their own poor security posture.
- saagarjha 1 year agoI don’t really see anything wrong with their security posture here.
- saagarjha 1 year ago
- manuelabeledo 1 year ago
- DenisM 1 year ago> it turns out that more people having access to the source code makes it more secure.
The OpenSSL debacle kinda disproved that point, didn’t it?
- dagmx 1 year agoAnd just looking up the Linux CVE list https://www.cvedetails.com/vulnerability-list/vendor_id-33/p...
Imho, at the end of the day, open source vs. closed doesn't matter for the number/severity of security issues and ends up just being ideological posturing. The bugs exist for a variety of other reasons and tend to have the same root causes attached.
OSS has other considerations though around security. Flaws may be easier to identify and either exploit or fix. Flaw fixing is trickier though because you need to do it in such a way as to not advertise it to the world either before it’s sufficiently deployed.
- esafak 1 year agoHow so? You need to quantify it; e.g., something like number of bugs found per year per LOC.
- dagmx 1 year ago
- ogurechny 1 year agoIt has never been said that it's your security. It's their security, of their data, on their devices, against their threats and competitors/partners. The user is just an unprivileged data-input daemon digitizing “unique personal experiences”, or some other corporate-language term.
It's easy to laugh at Juicero users, it's harder to notice the bigger elephant in the room.
- oivey 1 year agoThis involved an extremely low-level hardware exploit and a ton of other insanity. It is really nothing like Microsoft vs open source.
- 1 year ago
- mikestew 1 year ago[flagged]
- rapsey 1 year agoDoes iPhone boast security? Pretty sure Pixel phones were always way ahead of the pack in terms of security.
- aaomidi 1 year agoWake me up when Google Drive gets E2EE.
- aaomidi 1 year ago
- woodruffw 1 year ago
- vGPU 1 year ago[flagged]
- TaylorAlexander 1 year agoYeah people keep talking about reverse engineering but it’s just as real a possibility that this was simply engineered to be there. Apple and the government made a big public show about the San Bernardino iPhone situation[1] but that could have easily been a cover to convince people the government can’t get in to iPhones - because eventually the government dropped the court case, got in anyway, and the whole thing was quickly forgotten.
We can imagine that the government either has ideological capture of Apple (that is, Apple's management agrees to install hard-to-exploit vulnerabilities tailored for US government use) or legal capture through FISA rulings.
I’d be curious if anyone can summarize the latest understanding of FISA court actions in this realm.
[1] https://www.theguardian.com/technology/2016/mar/28/apple-fbi...
- jrexilius 1 year ago"The government" isn't really a single entity. Domestic LE and foreign intelligence have different laws and processes enforced by the Constitution (thankfully). It's certainly reasonable that domestic LE really can't force Apple to hand over US citizens' data, while foreign intelligence services can effect supply-chain attacks, backdooring, and other methods not permitted against US citizens.
- JumpCrisscross 1 year ago> that could have easily been a cover
The problem with conspiracies is everyone involved knows it’s a secret. If you’re the CIA, it’s much less risky to compromise a chip design engineer than have everyone from the CEO down at Apple in on the plant.
- TaylorAlexander 1 year agoMaybe, but then again, what’s another secret when at a high level these firms are already very secretive?
It’s not Apple, but I think a lot about how Eric Schmidt of Google was directly meeting with US military officials and talking about how important US defense was.
You can end up with a situation where the chip designer and some higher-up both know what is happening, and the higher-up is there as a check to provide cover in case the chip designer comes under suspicion. (“No, we asked for this for the manufacturing team.” Kind of thing.)
Of course this is all conjecture with no evidence and I understand why we don’t want to spend much energy on discussions we can’t confirm, but at the same time it is frustrating when the default assumption is that apple had no knowledge about this. The truth is that we don’t know and likely will never know.
- TaylorAlexander 1 year ago
- jrexilius 1 year ago
- OneLeggedCat 1 year ago> several of the MMIO addresses the attackers used to bypass the memory protections weren’t identified in any device tree documentation, which acts as a reference for engineers creating hardware or software for iPhones. Even after the researchers further scoured source codes, kernel images, and firmware, they were still unable to find any mention of the MMIO addresses
It sure quacks like a duck.
- TheCaptain4815 1 year agoI'd disagree with this. Apple execs surely know that if this information gets leaked, they're losing 30% of their market cap in a single day; why would they risk something like that when administrations change every 4-8 years?
- MertsA 1 year agoI think the charitable explanation here is that this was an undocumented debugging interface. Apple knew about it and did not disclose it in any publicly available material. The NSA almost certainly has access to Apple's source code and documentation. Just look at the Snowden leaks, when it was disclosed that the NSA was MITMing Google's DC-to-DC links. They already knew Google wasn't encrypting those links before they surreptitiously dug up the fiber, and they already knew enough about the system architecture to make sense of that firehose of data. Clearly, either through NSLs or by bribing some insiders, they had already exfiltrated a bunch of internal documentation and source code. Why would Apple be any different?
I wouldn't expect them to have HSM keys or anything but a mirror of their VCS? Yeah the NSA probably has that.
- d0mine 1 year agoHighly unlikely. Nobody cares.
What stock was crushed by Snowden revelations?
- cf1241290841 1 year agohttps://finance.yahoo.com/quote/AAPL/ seems you are wrong
Compare with minute 22:40 of the talk
As well as https://imgur.com/a/82JV7I9
- kornhole 1 year agoPower is more important than profit. Those running the national security apparatus have been in power for 60 years. The fact that they still haven't released the documents on the JFK assassination evidences that they are still in power.
- SV_BubbleTime 1 year agoEh.
Let’s say your old boss was embezzling and got away with it. Now you are the boss, and if you go public with it, not only are they out of power and likely nothing will happen, but all the freedom and flexibility you have in the same position is gone, and you or your friends have an island-problem you would rather not get into.
Maybe it’s just better to not rile up the shareholders.
- westhanover 1 year agoPeople will call you a crank or a conspiracy theorist but that is only because they are afraid to think about the answers to those questions themselves. Its easier to pretend it couldn't happen.
- SV_BubbleTime 1 year ago
- MertsA 1 year ago
- TaylorAlexander 1 year ago
- codedokode 1 year agoNow I am thinking Kaspersky should not have published this information. What a wrong decision. Instead they should have sold it to the Russian government, which I am sure could find a lot of interesting uses for these "debugging features" and would offer a good reward.
- Klaster_1 1 year agoKaspersky are already firmly under the full control of the state, selling anything is redundant. The whole video comes off as a massive flex, giving off the same vibes as athletes representing the country at the Olympics.
I was disappointed that nobody in the audience dared to ask the obvious question of how much time passed between disclosing the vulnerability to the state agencies and to Apple. I very much doubt the state didn't seize the opportunity to use the exploit against its enemies first and tactically disclose it later. If anything, the talk demonstrates that if they opted to disclose such a valuable exploit, they could afford to, because they have the capability to discover more and have other exploits that have not yet outlived their usefulness. I bet there is an interesting story behind the talk; hopefully, the details will eventually surface.
- Klaster_1 1 year ago
- hnburnsy 1 year ago>The resulting shellcode, in turn, went on to once again exploit CVE-2023-32434 and CVE-2023-38606 to finally achieve the root access required to install the last spyware payload.
Why isn't Apple detecting the spyware/malware payload? If only apps approved by Apple are allowed on an iPhone, detection should be trivial.
And why has no one bothered to ask Apple or ARM about this 'unknown hardware'?
>If we try to describe this feature and how the attackers took advantage of it, it all comes down to this: they are able to write data to a certain physical address while bypassing the hardware-based memory protection by writing the data, destination address, and data hash to unknown hardware registers of the chip unused by the firmware.
And finally does Lockdown mode mitigate any of this?
- docfort 1 year agoI think Lockdown would help here since it doesn’t decode message attachments. So the original link in the chain (decoding a PDF) would be impossible.
As for detecting unauthorized apps, I would imagine that once you've taken control of the OS kernel, it's game over for such software-based restrictions. The halting problem guarantees such limitations for any software-based restriction, and as long as you can form a Turing-complete mechanism from pieces of the computer, those limitations will apply.
- saagarjha 1 year agoThis chain isn’t delivered via an app, it is sent through iMessage. The checks for “only apps approved by Apple” are not relevant if you exploit your way past them.
- hnburnsy 1 year agoThanks, I did see the researchers posted how the malware gets into memory, but I still feel like, since Apple tightly controls the environment, it should be able to detect anything running there that should not be.
- hnburnsy 1 year ago
- chasil 1 year agoThere is a PNG in the original article with detail of the malware gaining a foothold on a device:
https://cdn.arstechnica.net/wp-content/uploads/2023/12/trian...
As you can see, it starts with a PDF coming in via iMessage, and that PDF contains a malicious font whose exploit uses ROP gadgets.
- twobitshifter 1 year ago>Apple declined to comment for this article.
- hnburnsy 1 year ago> >Apple declined to comment for this article.
Asshats
- hnburnsy 1 year ago
- hulitu 1 year ago> Why isn't Apple detecting the spyware/malware payload? If only apps approved by Apple are allowed on an iPhone, detection should be trivial.
Because Apple is busy fixing exploits discovered by Citizenlab. /s
But hey, Apple is secure.
- docfort 1 year ago
- WhackyIdeas 1 year agoIt’s kind of simple imo. Apple is an American company and after Jobs died, Apple quickly signed up to working with the NSA and enrolled in the Prism programme.
Apple, like any other US company, has to abide by the law and do what it is told. If that means hardware backdoors, software backdoors, or giving the NSA a heads-up about a vulnerability during the time it takes to fix it (so the NSA can make good use of it), then they will.
Only someone with great sway (like Jobs) could have resisted something like this without fear of the US Govt coming after him. His successor either didn’t have that passion for privacy or the courage to resist working with the NSA.
Anyone, anywhere with an iPhone will be vulnerable to NSA being able to break into their phone anytime they please, thanks to Apple. And with Apple now making their own silicon, the hardware itself will be even more of a backdoor.
Almost every single staff member at Apple will be none the wiser about this, obviously, and unable to do anything about it even if they knew, and their phones will be just as fair game to tap whenever the spies want.
I am speculating. But in my mind, it’s really quite obvious. Just like how Prism made me win an argument I had with someone who was a die hard Apple fan and thought they would protect privacy at all costs… 6 months later, Snowden came along and won me that argument.
- onetokeoverthe 1 year ago[dead]
- onetokeoverthe 1 year ago