Low-Level Software Security for Compiler Developers

128 points by struct 2 years ago | 22 comments
  • JonChesterfield 2 years ago
    There's a tension here.

    Emitting 'secure' code almost always means emitting 'slower' code, and one of the few things compilers are assessed on is the performance of the code they generate.

    Compilers are built as a series of transformation passes. Normalisation is a big deal - if you can simplify N different patterns to the same thing, you only have to match that one canonical form later in the pipeline.

    So if one pass makes code slower/secure, later passes are apt to undo that transform and/or to miss other optimisations because the code no longer looks as expected.
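
    A minimal sketch of an analogous, well-known failure mode at the source
    level (made-up names, not taken from the book): a store added "for
    security" that a later dead-store-elimination pass is allowed to undo.

        #include <cstdio>
        #include <cstring>

        void handle_secret() {
            char secret[64];
            std::snprintf(secret, sizeof secret, "hunter2");   // pretend key material
            // ... use `secret` ...
            std::memset(secret, 0, sizeof secret);             // "wipe" before returning
            // A later dead-store-elimination pass may legally delete that memset,
            // because the buffer is never read again -- which is exactly why
            // memset_s / explicit_bzero style functions exist.
        }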

    So while it is useful to know various make-it-secure transforms, which this book seems to cover, it's not at all obvious how to implement them without collateral damage.

    On a final note, compiler transforms are really easy to get wrong, so one should expect the implementation of these guards to be somewhat buggy, and those bugs themselves may introduce vulnerabilities.

    • pjmlp 2 years ago
      1960s systems were already taking a security-first approach, and the industry would have kept going down that route if it weren't for UNIX and C adoption.

      IBM even did its RISC research in PL.8, taking safety into consideration and using a pluggable compiler infrastructure similar to what people nowadays know from the LLVM approach.

      Some would say that security measures in the car industry also slow drivers down and are a nuisance.

      https://en.m.wikipedia.org/wiki/Unsafe_at_Any_Speed

      • loup-vaillant 2 years ago
        > Some would say that security measures in the car industry also slow drivers down and are a nuisance.

        Not sure about that: what are brakes for? Slowing down and stopping, right? But then I ask, how fast would you drive if your car had no brakes? I would guess not very fast at all. Thus, one important role of brakes is to allow you to drive faster.

        In practice, the more safety measures you put in place, the more confident people grow and the faster they drive. To a point, of course.

        Same with programming: I prototype faster with a good static type system than I do with a dynamic one. One reason is that I just write fewer tests (including those one-off verifications in the REPL).

        • MobiusHorizons 2 years ago
          I think you are thinking about the wrong type of security measures. I believe the OP is talking about features like traction control, stability control, and ECU features that prevent engine power and braking at the same time. In performance driving situations (e.g. track driving) it is standard practice to disable these for the best track times. As safety features on the road they make a lot of sense, but they can get in the way during high-performance driving.
          • wizzard0 2 years ago
            A fire alarm at home is important; a fire alarm in the chimney or inside an engine cylinder makes it unusable.

            Exploit mitigations do work, but

            a) the compiler /does not/ know what you are building or what your requirements are

            b) they only protect against /specific, known/ threats, the same way a generic fire alarm won't protect you from a CO leak or an electric shock.

            c) yet they cost time, energy and RAM whether they are relevant or not
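
            To make (c) concrete, here's roughly what a stack-protector style mitigation amounts to when written out by hand. The guard variable and its placement are illustrative only; the real instrumentation is emitted by the compiler (e.g. with -fstack-protector-strong) into every instrumented function, attacked or not:

                #include <cstdint>
                #include <cstdio>
                #include <cstdlib>
                #include <cstring>

                static volatile std::uintptr_t stack_guard = 0x5afec0de;  // real canary is randomized at startup

                void copy_name(const char *src) {
                    std::uintptr_t canary = stack_guard;    // prologue: extra load and store on every call
                    char buf[32];
                    std::strncpy(buf, src, sizeof buf - 1);
                    buf[sizeof buf - 1] = '\0';
                    std::puts(buf);
                    if (canary != stack_guard)              // epilogue: extra load, compare and branch,
                        std::abort();                       // paid on every return, relevant or not
                }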

            The only way to get systems that are secure, performant and easy to maintain is to invest in tools that make it easier for developers and users (!) to analyze what the system actually does. Not trying to make everything "magically secure".

            Pretending a microwave with a Super Safety Cat Detector is a Magic Pasta Heater will only end in lawsuits from owners of dead hamsters - and rightfully so, because it defrauds and dumbs down users instead of educating them.

      • staunton 2 years ago
        The vast majority of businesses choose speed over security and avoid investing in security, since they can offload the cost of incidents to their users. One of the main reasons such "more secure tools" projects are interesting for users is that they provide an easy and cheap avenue towards claiming an effort towards security was made, and so avoiding liability.

        On one hand, such tools actually help make things secure; on the other hand, with speed and ease of use (not security) being the top priorities, the effect is probably limited. People who care much more about security than average would not start a new project in C/C++ to begin with, and where legacy code is involved, dealing with it is hard enough already without trying to "make it secure".

        The only way to really improve the level of security in the industry is to assign responsibility and damages to those who fail to implement it. So far, it seems all market participants are content with 90% of security concerns being addressed by security theater.

        • pjmlp 2 years ago
          This is thankfully changing.

          Returns in digital stores, increasing visibility of how much it actually costs in real money to fix those issues, warranty clauses in consulting gigs (usually free of charge), and the introduction of cyber security laws like those in Germany [0].

          [0] - https://www.bsi.bund.de/EN/Das-BSI/Auftrag/Gesetze-und-Veror...

          • WalterBright 2 years ago
            > The only way to really improve the level of security in the industry is to assign responsibility and damages to those who fail to implement it.

            This is the punishment approach. What it inevitably leads to is denial, coverup, unwillingness to innovate, and not fixing problems because fixing them is an implicit admission of fault.

            The better way is a no-fault approach: encouraging disclosure and openness about bugs, and collaboration in fixing them.

            • staunton 2 years ago
              > What it inevitably leads to is denial, coverup, unwillingness to innovate

              ... and finally adoption of the required methods and reaching the required standards, like countless cases of successful regulation since time immemorial.

              How do you give companies a positive incentive to fix an issue if the issue does not cost them money? Fixing such an issue is a competitive disadvantage.

              > The better way is for no-fault, encouraging disclosure and openness about bugs, and collaboration in fixing them.

              What does that look like? Paying companies per disclosed bug in their software? State-sponsored white-hat hacker teams that find and fix the companies' bugs for them without disclosure? I can't think of anything that sounds realistic.

              • WalterBright 2 years ago
                > What does that look like?

                The D Language Foundation operates that way, for a real world example. The bug list is open, anyone can /view/comment on/submit a fix/ for any bug.

                > Fixing such an issue is a competitive disadvantage.

                Fixing problems is an advantage, not a disadvantage.

            • loup-vaillant 2 years ago
              > The vast majority of businesses choose speed over security

              I would add that the vast majority of businesses also choose features over speed.

              In some cases they pay lip service to speed, for instance by choosing C++, but pay zero attention to actual speed, because they end up writing in a pointer-fest RAII style that destroys memory locality and misses the cache all the time. Compared to that, even Electron doesn’t look too unreasonable.

              • rightbyte 2 years ago
                If they write bad, slow C++ code, surely their Electron code will be even slower?
                • loup-vaillant 2 years ago
                  It depends. The main reasons C++ code can be slow are costly abstractions (don't laugh!), inappropriate data structures, and pointer fests (which lead to cache misses).

                  Writing the same program in JavaScript won't change much of the above. You'll have the JIT overhead for sure, but the basic data structures will remain relatively efficient, and if you're starting out with a pointer fest it won't be any worse in JavaScript. As for the GUI itself, well… I expect it'll be as fast as any browser.
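
                  A minimal sketch of the pointer-fest point, with a made-up struct and the same data laid out both ways: the boxed version chases one pointer per element and tends to miss the cache, in C++ and in JavaScript alike.

                      #include <memory>
                      #include <vector>

                      struct Sample { double value; };

                      double sum_flat(const std::vector<Sample>& xs) {
                          double s = 0;
                          for (const auto& x : xs) s += x.value;   // contiguous, prefetch-friendly
                          return s;
                      }

                      double sum_boxed(const std::vector<std::unique_ptr<Sample>>& xs) {
                          double s = 0;
                          for (const auto& p : xs) s += p->value;  // one dereference (and likely a
                          return s;                                // cache miss) per element
                      }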

              • WalterBright 2 years ago
                > The vast majority of businesses choose speed over security

                The D compiler would be faster if we turned off array bounds checking and assert checking. But we leave those security features turned on for release builds.
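
                Not D itself, just a C++ analogue of the same trade-off to make the cost concrete: the checked access pays a compare-and-branch on every index, the unchecked one doesn't, and turns a bad index into undefined behavior.

                    #include <cstddef>
                    #include <vector>

                    int checked_get(const std::vector<int>& v, std::size_t i) {
                        return v.at(i);   // bounds-checked: throws std::out_of_range on a bad index
                    }

                    int unchecked_get(const std::vector<int>& v, std::size_t i) {
                        return v[i];      // unchecked: a bit faster, out-of-range is UB
                    }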

                • ngneer 2 years ago
                  The security industry has a lot of people looking for shortcuts or claiming to provide them.
                • tptacek 2 years ago
                  This is pretty great; they've done the work to produce useful capsule summaries of a bunch of memory safety topics (like forward- and backward-edge CFI, JOP, and PAC). Looking forward to seeing how far they can take it. The assembler snippets are useful and could be fleshed out more.
                  • vonimo 2 years ago
                    That was an excellent read. I look forward to enjoying their section on JIT compiler vulnerabilities, a whole fascinating topic in itself, when it is completed.
                    • _a_a_a_ 2 years ago
                      Compilers are an interest of mine so I'll read the article later, but I'm curious whether this is talking about C-variety compilers, which are generally unsafe, or compilers for managed languages, which should never emit code that allows attacks (for some definition of 'never'). Which of these is this article/book discussing?
                      • tptacek 2 years ago
                        It's a book (in progress), not an article, and it is talking in large part about compilers for unmanaged languages.
                        • eimrine 2 years ago
                          As far as I've understood the article is about how to fool a CPU.
                          • _a_a_a_ 2 years ago
                            Hardly that; it's about corrupting state to take control. The CPU just does what it's told (not entirely true: halfway down it moves on to covert channels and side channels, which are a different thing entirely).

                            On the first half, I'm uncertain how it relates to compilers.

                          • hummus_bae 2 years ago
                            From skimming a few pages, it seems to be about C-family compilers in general.