The six dumbest ideas in computer security (2005)
267 points by lsb 11 months ago | 284 comments
- tptacek 11 months agoWe're doing this again, I see.
https://hn.algolia.com/?q=six+dumbest+ideas+in+computer+secu...
You can pick this apart, but the thing I always want to call out is the subtext here about vulnerability research, which Ranum opposed. At the time (the late 90s and early aughts) Marcus Ranum and Bruce Schneier were the intellectual champions of the idea that disclosure of vulnerabilities did more harm than good, and that vendors, not outside researchers, should do all of that work.
Needless to say, that perspective didn't prove out.
It's interesting that you could bundle up external full-disclosure vulnerability research under the aegis of "hacking" in 2002, but you couldn't do that at all today: all of the Big Four academic conferences on security (and, obviously, all the cryptography literature, though that was true at the time too) host offensive research today.
- ggm 11 months agoMaybe they were right for their time? I'm not arguing that; I just posit that post-fact rationalisation about decisions made in the past has to be considered against the evidence available at the time.
The network exploded in size, and the number of participants in the field exploded too.
- michaelt 11 months agoIt was 100% a reasonable-sounding theory before we knew any better.
In the real world, if you saw someone going around a car park trying the door of every car, you'd call the cops - not praise them as a security researcher investigating insecure car doors.
And in the imagination of idealists, the idea of a company covering up a security vulnerability or just not bothering to fix it was inconceivable. The problems were instead things like how to distribute the security patches when your customers brought boxed floppy disks from retail stores.
It just turns out that in practice vendors are less diligent and professional than was hoped; the car door handles get jiggled a hundred times a day, the people doing it are untraceable, and the cops can't do anything.
- andrecarini 11 months ago> the Big Four academic conferences on security
Which ones are those?
- ericpauley 11 months agoCompletely agree that offensive research has (for better or for worse) become a mainstay at the major venues.
As a result, we’re continually seeing negative externalities from these disclosures in the form of active exploitation. Unfortunately vendors are often too unskilled or obstinate to properly respond to disclosure from academics.
For their part academics have room to improve as well. Rather than the pendulum swinging back the other way, I anticipate that the majors will eventually have more involved expectations for reducing harm from disclosures, such as by expanding the scope of the “vendor” to other possible mitigating parties, like OS or Firewall vendors.
- bawolff 11 months ago> As a result, we’re continually seeing negative externalities from these disclosures in the form of active exploitation.
That assumes that without these disclosures we wouldn't see active exploits. I'm not sure i agree with that. I think bad actors are perfectly capable of finding exploits by themselves. I suspect the total number of active exploits (and especially targeted exploits) would be much higher without these disclosures.
- ericpauley 11 months agoBoth can be true. It’s intellectually lazy to throw up our hands and say attacks would happen anyway instead of doing our best to mitigate harms.
- tptacek 11 months agoI was going to respond in detail to this, but realized I'd be recapitulating an age-old debate about full- vs. "responsible-" disclosure, and it occurred to me that I haven't been in one of those debates in many years, because I think the issue is dead and buried.
- lobsang 11 months agoMaybe I missed it, but I was surprised there was no mention of passwords.
Mandatory password composition rules (excluding minimum length) and rotating passwords, as well as all attempts at "replacing passwords", are inherently dumb in my opinion.
The first have obvious consequences (people writing passwords down, choosing the same passwords, adding a 1), leading to the second, which have horrible/confusing UX (no, I don't want to have my phone/random token generator on me any time I try to do something) and default to "passwords" anyway.
Please just let me choose a password of greater than X length, containing or not containing any characters I choose. That way I can actually remember it when I'm not using my phone/computer, in a foreign country, etc.
- II2II 11 months ago> Mandatory password composition rules (excluding minimum length) and rotating passwords, as well as all attempts at "replacing passwords", are inherently dumb in my opinion.
I suspect that rotating passwords was a good idea at the time. There were some pretty poor security practices several decades ago, like sending passwords as clear text, which took decades to resolve. There are also people who like to share passwords like candy. I'm not talking about sharing passwords to a streaming service you subscribe to, I'm talking about sharing access to critical resources with colleagues within an organization. I mean, it's still pretty bad, which is why I disagree with them dismissing educating end users. Sure, some stuff can be resolved via technical means; they gave examples of that. Yet the social problems are rarely solvable via technical means (e.g. password sharing).
- marshray 11 months agoMuch of the advice around passwords comes from time-sharing systems and predates the internet.
Rules like "don't write passwords down," "don't show them on the screen", and "change them every N days" all make a lot more sense if you're managing a bank branch open-plan office with hardwired terminals.
- 01HNNWZ0MV43FF 11 months agoIt's funny, writing passwords down is excellent advice on today's Internet.
Physical security is easy, who's gonna see inside your purse? How often does that get stolen? Phones and laptops are high-value targets for thieves, and if they're online there could be a vuln. Paper doesn't have RCE.
(That said, I use KeePass, because it's nice to have it synced and encrypted. These days only my KeePass password is written down.)
- Jerrrrrrry 11 months ago
> I suspect that rotating passwords was a good idea at the time.
Yes, when all password hashes were available to all users, and therefore had an expected bruteforce/expiration date.
It is just another evolutionary artifact from a developing technology complexed with messy humans.
Repeated truisms - especially in compsci, can be dangerous.
NIST has finally understood that complex password requirements decrease security, because nobody is attacking the entropy space - they are attacking the post-it note/notepad text file instead.
This is actually a good example of an opposite case of Chesterton’s Fence
- nmadden 11 months ago> NIST has finally understood that complex password requirements decrease security, because nobody is attacking the entropy space - they are attacking the post-it note/notepad text file instead.
Actually, NIST provides a detailed rationale for their advice [1]. Attackers very much are attacking the entropy space (credential stuffing with cracked passwords is the #1 technique used in breaches). But password change and complexity rules are pointless precisely because they don't increase the entropy of the passwords. From NIST:
> As noted above, composition rules are commonly used in an attempt to increase the difficulty of guessing user-chosen passwords. Research has shown, however, that users respond in very predictable ways to the requirements imposed by composition rules [Policies]. For example, a user that might have chosen “password” as their password would be relatively likely to choose “Password1” if required to include an uppercase letter and a number, or “Password1!” if a symbol is also required.
- n4r9 11 months ago> yes, when all password hashes were available to all users, and therefore had an expected bruteforce/expiration date.
Pretty much anyone I've spoken to candidly about rotating passwords has said that they use a basic change to derive the next password from the old - for example, incrementing a number and/or a letter. If that is as common a practice as I suspect, then rotating passwords doesn't add much security. It just means that hackers have to go through a few common manipulation strategies after breaking the hash.
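That mangling step is cheap to automate. A hypothetical sketch (the function name and the particular tweak rules are illustrative assumptions, not a real tool) of what an attacker might try first after cracking an expired password:

```python
import re

def likely_next_passwords(old: str) -> list[str]:
    """Candidate successors an attacker might try after cracking an
    expired password, based on common rotation habits. Illustrative
    only; real cracking rulesets are far larger."""
    candidates = []
    m = re.search(r"^(.*?)(\d+)$", old)
    if m:
        stem, num = m.group(1), m.group(2)
        # Increment the trailing number, preserving any zero-padding.
        candidates.append(stem + str(int(num) + 1).zfill(len(num)))
    else:
        # No trailing number yet: users often start one.
        candidates.append(old + "1")
    # Another frequent tweak: append a symbol.
    candidates.append(old + "!")
    return candidates

print(likely_next_passwords("Summer2023"))  # ['Summer2024', 'Summer2023!']
```

If rotation habits are this predictable, the rotated password adds only a handful of guesses on top of the cracked one, which is the parent comment's point.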
- marshray 11 months agoIt's not crazy to want a system to be designed such that it tends to converge to a secure state over time. We still have expiration dates on ID and credit cards and https certificates.
The advantages just didn't outweigh the disadvantages in this scenario.
- raesene9 11 months agoTBH, where you see this kind of thing (mandatory periodic password rotation every month or two) being recommended, it's people not keeping up with even regulators' view of good security practice.
Both NIST in the US (https://pages.nist.gov/800-63-FAQ/) and NCSC in the UK (https://www.ncsc.gov.uk/collection/passwords/updating-your-a...) have quite decent guidance that doesn't have that kind of requirement.
- crngefest 11 months agoWell, my experience working in the industry is that almost no company uses good security practices or goes beyond some outdated checklists - a huge number wants to rotate passwords, disallow/require special characters, lock out users after X attempts, or disallow users to choose a password they used previously (never understood that one).
I think the number of orgs that follow best practices from NIST etc is pretty low.
- 8xeh 11 months agoIt's not necessarily the organization's fault. In several companies that I've worked for (including government contractors) we are required to implement "certifications" of one kind or another to handle certain kinds of data, or to get some insurance, or to win some contract.
There's nothing inherently wrong with that, but many of these require dubious "checkbox security" procedures and practices.
Unfortunately, there's no point in arguing with an insurance company or a contract or a certification organization, certainly not when you're "just" the engineer, IT guy, or end user.
There's also little point in arguing with your boss about it either. "Hey boss, this security requirement is pointless because of technical reason X and Y." Boss: "We have to do it to get the million dollar contract. Besides, more security is better, right? What's the problem?"
- ivlad 11 months ago> disallow users to choose a password they used previously (never understood that one)
That’s because you never responded to an incident where a user changed their compromised password because they were forced to, only to change it back the next day because “it’s too hard to remember a new one”.
- jay_kyburz 11 months ago>disallow users to choose a password they used previously
I think Epic Game Store hit me with that one the other day. Had to add a 1 to the end.
A common pattern for me is that I create an account at home, and make a new secure password.
Then one day I log in at work but don't have the password on me, so I reset it.
Then I try and login again at home, don't have the password from work, so try and reset it back to the password I have at home.
- Dalewyn 11 months ago>lock out users after X attempts
Legitimate users usually aren't going to fail more than a couple times. If someone (or something) is repeatedly failing, lock that shit down so a sysadmin can take a look at leisure.
>disallow users to choose a password they used previously (never understood that one)
It's so potentially compromised passwords from before don't come back into cycle now.
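On the lockout point above, a minimal sketch of the idea (class name, thresholds, and in-memory storage are all illustrative assumptions; note, as raised elsewhere in the thread, that lockouts can also be abused to lock legitimate users out):

```python
import time

class LoginThrottle:
    """Lock an account after too many consecutive failures; a timeout
    (or a sysadmin) clears the lock. Purely illustrative thresholds."""

    def __init__(self, max_failures: int = 5, lockout_seconds: int = 900):
        self.max_failures = max_failures
        self.lockout_seconds = lockout_seconds
        self.failures = {}  # username -> (count, last_failure_time)

    def is_locked(self, user: str) -> bool:
        count, last = self.failures.get(user, (0, 0.0))
        if count < self.max_failures:
            return False
        return (time.time() - last) < self.lockout_seconds

    def record_failure(self, user: str) -> None:
        count, _ = self.failures.get(user, (0, 0.0))
        self.failures[user] = (count + 1, time.time())

    def record_success(self, user: str) -> None:
        self.failures.pop(user, None)  # reset the counter on success

t = LoginThrottle(max_failures=3)
for _ in range(3):
    t.record_failure("alice")
print(t.is_locked("alice"))  # True
```

A real deployment would persist the counters and probably throttle by source IP as well as username, precisely because of the denial-of-service concern.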
- temporallobe 11 months agoI’ve been preaching this message for many years now. For example, since password generators basically make keys that can’t be remembered, this has led to the advent of password managers, all protected by a single password, so your single point of failure is now just ONE password, the consequences of which would be that an attacker would have access to all of your passwords.
The n-tries lockout rule is much more effective anyway, as it breaks the brute-force attack vector in most cases. I am not a cybersecurity expert, so perhaps there are cases where high-complexity, long passwords may make a difference.
Not to mention MFA makes most of this moot anyway.
- nouveaux 11 months agoMost of us can't remember more than one password. This means that if one site is compromised, then the attacker now has access to multiple sites. A password manager mitigates this issue.
- cardanome 11 months agoPeople used to memorize the phone numbers of all important family members and close friends without much trouble. Anyone without a serious disability should have no trouble memorizing multiple passwords.
Sure, I do use password managers for random sites and services, but I probably have a lower double-digit number of passwords memorized for the stuff that matters. Especially for stuff that I want to be able to access in an emergency when my phone/laptop gets stolen.
- userbinator 11 months agoVary the password per site based on your own algorithm.
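A hedged sketch of one such algorithm: deterministic derivation from a single memorized master secret (the function name and output format are illustrative, and this approach has a known downside - if the master or one derived password leaks, you can't rotate a single site's password independently):

```python
import base64
import hashlib
import hmac

def site_password(master: str, site: str, length: int = 16) -> str:
    """Derive a per-site password from one memorized master secret.
    Sketch only: real schemes also need site-name canonicalization
    and some way to version/rotate individual passwords."""
    digest = hmac.new(master.encode(), site.encode(), hashlib.sha256).digest()
    # URL-safe base64 keeps the result typeable on most sites.
    return base64.urlsafe_b64encode(digest).decode()[:length]

# Same inputs always yield the same password; different sites differ.
print(site_password("correct horse battery staple", "example.com"))
```

This is roughly what tools in the "stateless password manager" family do; the thread's broader point stands that composition rules on the target site can still break the derived output.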
- soupbowl 11 months agoMost people can surely remember more than one password.
- wruza 11 months agoMy bitwarden plugin locks out after a few minutes of inactivity. New installations are protected by totp. So one has to physically be at one of my devices few minutes after I leave even if they have a password. This reduces the attack source to a few people that I have to trust anyway. Also I can lock / logout manually if situation suggests. Or not log in at all and instead type the password from my phone screen.
I understand the conceptual risk of storing everything behind a single “door”. That’s not ideal. But in practice, circumstances force you to create passwords, expose passwords, reset passwords, so you cannot remember them all. You either write them down (where? how secure?) or resort to having only a few “that you usually use”.
Password managers solve the “where? how secure?” part. They don’t solve security, they help you to not do stupid things under pressure.
- borski 11 months ago> so your single point of failure is now just ONE password, the consequences of which would be that an attacker would have access to all of your passwords.
Most managers have 2FA, or an offline key, to prevent this issue, and encrypt your passwords at rest so that without that key (and the password) the database is useless.
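A sketch of how a vault key can require both factors, so neither the password nor the offline key alone decrypts the database (the KDF parameters and the HMAC mixing step are illustrative assumptions, not any particular manager's actual scheme):

```python
import hashlib
import hmac
import os

def vault_key(master_password: bytes, offline_key: bytes, salt: bytes) -> bytes:
    """Derive the database encryption key from BOTH the master password
    and a high-entropy offline key. Parameters are illustrative."""
    # Memory-hard stretching of the low-entropy human password.
    stretched = hashlib.scrypt(master_password, salt=salt,
                               n=2**14, r=8, p=1, dklen=32)
    # Mix in the offline key; an attacker needs both to derive the key.
    return hmac.new(offline_key, stretched, hashlib.sha256).digest()

salt = os.urandom(16)
offline = os.urandom(32)
k1 = vault_key(b"hunter2", offline, salt)
k2 = vault_key(b"hunter2", offline, salt)
print(k1 == k2)  # True: deterministic given the same inputs
```

The point of the construction is that a stolen encrypted database plus a phished password is still useless without the offline key file.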
- nottorp 11 months ago> and encrypt your passwords at rest
I haven't turned off my desktop this year. How does encryption at rest help?
- unethical_ban 11 months agoThe one password and the app that uses it are more secure than most other applications. Lock out is just another term for DDoS if a bad actor knows usernames.
I love proton pass.
- red_admiral 11 months agoI was going to say passwords too ... but now I think passkeys would be a better candidate for dumbest ideas. For the average user, I expect they will cause no end of confusion.
- cqqxo4zV46cp 11 months agoThat’s just recency bias.
- uconnectlol 11 months agoPassword policies are a joke since you use 5 websites and they will have 5 policies.
1. Banks etc. will not allow special characters, because that's a "hacking attempt". So Firefox's password generator, for example, won't work. The user works around this by typing in suckmyDICK123!! and his password still never gets hacked, because there usually isn't enough bruteforce throughput even with 1000 proxies, or you'll just get your account locked forever once someone attempts to log into it 5 times and those 1000 IPs only get between 0.5-3 tries each with today's snakeoil appliances on the network. There's also the fact that most people already know by now that "bots will try your passwords at superhuman rate". Then there's also the fact that not even one of these password policies stops users from choosing bad passwords. This is simply a case of "responsible" people wasting tons of time trying to solve reality. These people who claim to know better than you have not even thought this out, and have definitely not thought about much at all.
2. For everything that isn't your one or two sensitive things, like the bank, you want to use the same password. For example the 80 games you played for one minute that obnoxiously require making an account (for the bullshit non-game aspects of the game such as in game trading items). Most have custom GUIs too and you can't paste into them. You could use a password manager for these but why bother. You just use the same pass for all of them.
- ivlad 11 months agoDear user with password “password11111111111”, logging in from a random computer with two password stealers active, from a foreign country, and not willing to use MFA: the incident response team will thank you and prepare a warm welcome when you are back in the office.
Honestly, this comment shows that user education does not work.
- belinder 11 months agoMinimum length is dumb too because people just append 1 until it fits
- hunter2_ 11 months agoBut when someone tries to attack such a password, as long as whatever the user devised isn't represented by an entry in the attack dictionary, the attack strategy falls back to brute force, at which point a repetition scheme is irrelevant to attack time. Granted, if I were creating a repetitive password to meet a length requirement without high mental load, I'd repeat a more interesting part over and over, not a single character.
- borski 11 months agoSure. But most people add “111111” or “123456” to the end. That’s why it’s on top of every password list.
- NeoTar 11 months agoUndisclosed minimum length is particularly egregious.
It's very frustrating when you've got a secure system and you spend a few minutes thinking up a great, memorable, secure password; then realize that it's too few (or worse, too many!) characters.
Even worse when the length requirements are incompatible with your password generation tool.
- rekabis 11 months agoI would love to see most drop-in/bolt-on authentication packages (such as DotNet’s Identity system) adopt “bitwise complexity” as the only rule: not based on length or content, only the mathematical complexity of the bits used. KeePass uses this as an estimate of password “goodness”, and it’s altered my entire view of how appropriate any one password can be.
- Terr_ 11 months agoIIRC the key point there is that it's contextual to whatever generation scheme you used--or at least what scheme you told it was used--and it assumes the attacker knows the generation scheme.
So "arugula" will score very badly in the context of a passphrase of English words, but scores better as a (supposedly) random assortment of lowercase letters, etc.
- jamesfinlayson 11 months agoI'm told that at work we're not allowed to have the same character appear three or more times consecutively in a password (I have never tried).
- fragmede 11 months agoOn the other hand, it gave us the password game, so there's that.
- cuu508 11 months ago> That way I can actually remember it when I'm not using my phone/computer, in a foreign country, etc.
I'd be very wary of logging into accounts on any computer/phone other than my own.
- izacus 11 months agoBased on the type of this rant - all security focused, with little thought about the usability of the systems they're talking about - the author would probably be one of those people who mandate password rotation every week with a minimum of 18 characters to "design systems safely by default". Oh, and prevent keyboards from working because they can infect computers via USB or something.
(Yes, I'm commenting on the weird idea about not allowing processes to run without asking - we're now learning from mobile OSes that this isn't practically feasible for building the kind of universally useful OS that drove most of computer growth in the last 30 years.)
- hot_gril 11 months agoI don't get how it took until present day for randomly-generated asymmetric keys to become somewhat commonly used on the Web etc in the form of "passkeys" (confusing name btw). Password rotation and other rules never worked. Some sites still require a capital letter, number, and symbol, as if 99% of people aren't going to transform "cheese" -> "Cheese1!".
- JJMcJ 11 months ago> people writing passwords down
Which is better: a strong password written down (or better yet, stored in a secure password manager), or a weak password committed to memory?
As usual, XKCD has something to say about it: https://xkcd.com/936/
- kstrauser 11 months agoHacking is cool. Well, gaining access to someone else's data and systems is not. Learning a system you own so thoroughly that you can find ways to make it misbehave to benefit you is. Picking your neighbor's door lock is uncool. Picking your own is cool. Manipulating a remote computer to give yourself access you shouldn't have is uncool. Manipulating your own to let you do things you're not supposed to be able to is cool.
That exploration of the edges of possibility is what moves the world ahead. I doubt there's ever been a successful human society that praised staying inside the box.
- janalsncm 11 months agoWe can say that committing crimes is uncool but there’s definitely something appealing about knowing how to do subversive things like pick a lock, hotwire a car, create weapons, or run John the Ripper.
It effectively turns you into a kind of wizard, unconstrained by the rules everyone else believes are there.
- kstrauser 11 months agoWell put. There’s something inherently cool in knowledge you’re not supposed to have.
- jay_kyburz 11 months agoYou have to know how to subvert existing security in order to build better secure systems. You _are_ supposed to know this stuff.
- Sohcahtoa82 11 months agoAnd sometimes, knowing that information is useful for legit scenarios.
When my grandma was moving across the country to move in with my mom, she got one of those portable on-demand storage things, but she put the key in a box that got loaded inside and didn't realize it until the POD got delivered to my mom's place.
I came over with my lock picks and had it open in a couple minutes.
- account42 11 months ago> We can say that committing crimes is uncool
Disagree in general. Laws != morals. Often enough laws are unjust and ignoring them is the cool thing to do.
- tinycombinator 11 months agoManipulating a remote computer to give yourself access you shouldn't have can be cool if that computer was used in phone scam centers, holding the private data of countless elderly victims. Using that access to disrupt said scam business could be incredibly cool (and funny).
It could be technically illegal, and would fall under vigilante justice. But we're not talking about legality here, we're talking about "cool": vigilantes are usually seen as "cool" especially when done from a sense of personal justice. Again, not talking about legal or societal justice.
- Hendrikto 11 months agoThis is full of very bad takes.
> I know other networks that it is, literally, pointless to "penetration test" because they were designed from the ground up to be permeable only in certain directions and only to certain traffic destined to carefully configured servers running carefully secured software.
"I don't need to test, because I designed, implemented, and configured my system carefully" might be the actual worst security take I've ever heard.
> […] hacking is a social problem. It's not a technology problem, at all.
This is security by obscurity. Also it‘s not always social. Take corporate espionage and nation states for example.
- CM30 11 months agoI think the main problem is that there's usually an unfortunate trade off between usability and security, and most of the issues mentioned as dumb ideas here come from trying to make the system less frustrating for your average user at the expense of security.
For example, default allow is terrible for security, and the cause of many issues in Windows... but many users don't like the idea of having to explicitly permit every new program they install. Heck, when Microsoft added that confirmation, many considered it terrible design that made the software way more annoying to use.
'Default Permit', 'Enumerating Badness' and 'Penetrate and Patch ' are all unfortunately defaults because of this. Because people would rather make it easier/more convenient to use their computer/write software than do what would be best for security.
Personally I'd say that passwords in general are probably one of the dumbest ideas in security though. Like, the very definition of a good password likely means something that's hard to remember, hard to enter on devices without a proper keyboard, and generally inconvenient for the user in almost every way. Is it any wonder that most people pick extremely weak passwords, reuse them for most sites and apps, etc?
But there's no real alternative sadly. Sending links to email means that anyone with access to that compromises everything, though password resets usually mean the same thing anyway. Physical devices for authentication mean the user can't log in from places outside of home that they might want to login from, or they have to carry another trinket around everywhere. And virtually everything requires good opsec, which 99.9% of the population don't really give a toss about...
- freeone3000 11 months agoIt’s that insight that brought forward passkeys, which have elements of SSO and 2FA-only logins. Apple has fully integrated, allowing cloud-sync’d passkeys: on-device for apple devices, 2FA-only if you’ve got an apple device on you. Chrome is also happy to act as a passkey. So’s BitWarden. It can’t be spoofed, can’t be subverted, you choose your provider, and you don’t even have to remember anything because the site can give you the name of the provider you registered with.
- thyrsus 11 months agoI recommend using a well respected browser based password manager, protected by a strong password, and having it generate strong passwords that you never think of memorizing. Web sites that disable that with JavaScript on the password field should be liable for damages with added penalties - I'm looking at you, banks.
- bpfrh 11 months agoMeh, passwords were a good idea for a long time.
For the first 10 (20?) years there were no devices without a good keyboard.
The big problem, imho, was the idea that passwords had to be complicated and long - e.g. random, alphanumeric, some special chars, and at least 12 characters - while a better solution would have been a few words.
Edit: To be clear, I agree with most of your points about passwords; I just wanted to point out that we often don't appreciate how much tech changed after the introduction of the smartphone, and that for the environment before that (computers/laptops), passwords were a good choice.
- CM30 11 months agoThat's a fair point. Originally devices generally had decent keyboards, or didn't need passwords.
The rise of not just smartphones, but tablets, online games consoles, smart TVs, smart appliances, etc had a pretty big impact on their usefulness.
- moring 11 months ago> Think about it for a couple of minutes: teaching yourself a bunch of exploits and how to use them means you're investing your time in learning a bunch of tools and techniques that are going to go stale as soon as everyone has patched that particular hole.
No, it means that you learn practical aspects alongside theory, and that's very useful.
- move-on-by 11 months agoI also took issue with this point. One does not become an author without first learning how to read. The usefulness of reading has not diminished once you publish a book.
You must learn how known exploits work to be able to discover unknown exploits. When the known exploits are patched, your knowledge of how they occurred has not diminished. You may not be able to use them anymore, but surely that was not the goal in learning them.
- Sohcahtoa82 11 months agoNot necessarily.
There are a lot of script kiddies that don't know a damn thing about what TCP is or what an HTTP request looks like, but know how to use LOIC to take down a site.
- Zak 11 months agoI'd drop "hacking is cool" from this list and add "trusting the client".
I've seen an increase in attempts to trust the client lately, from mobile apps demanding proof the OS is unmodified to Google's recent attempt to add similar DRM to the web. If your network security model relies on trusting client software, it is broken.
- strangecharm2 11 months agoIt's not about security, it's about control. Modified systems can be used for nefarious purposes, like blocking ads. And Google wouldn't like that.
- Zak 11 months agoIt's about control for Google and friends. If your bank's app uses SafetyNet, it's probably about some manager's very confused concept of security.
- account42 11 months ago> If your bank's app uses SafetyNet, it's probably about some manager's very confused concept of security.
Or about making the auditor for the government-imposed security certification happy with the least amount of effort. It's always more work to come up with good answers why you are not doing the industry standard thing.
- munchausen42 11 months agoAbout 'Default Deny': 'It's not much harder to do than 'Default Permit,' but you'll sleep much better at night.'
Great that you, the IT security person, sleeps much better at night. Meanwhile, the rest of the company is super annoyed because nothing ever works without three extra rounds with the IT department. And, btw., the more annoyed people are, the more likely they are to use workarounds that undermine your IT security concept (e.g., think of the typical 'password1', 'password2', 'password3' passwords when you force users to change their password every month).
So no, good IT security does not just mean unplugging the network cable. Good IT security is invisible and unobtrusive for your users, like magic :)
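For what it's worth, the mechanical half of 'Default Deny' really is simple; a minimal inbound default-deny policy in nftables might look like the sketch below (table/chain names and the allowed ports are illustrative assumptions). The hard, ongoing part is the exception process this comment describes.

```shell
# Illustrative default-deny inbound firewall (nftables).
# Everything not explicitly allowed below is dropped.
nft add table inet filter
nft add chain inet filter input '{ type filter hook input priority 0 ; policy drop ; }'
# Keep established connections working and allow loopback.
nft add rule inet filter input ct state established,related accept
nft add rule inet filter input iif lo accept
# Explicitly permitted services only (example: SSH and HTTPS).
nft add rule inet filter input tcp dport '{ 22, 443 }' accept
```

The five lines above take minutes; discovering which legitimate applications those five lines silently break, as the story below illustrates, can take years.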
- TheRealDunkirk 11 months agoA friend of mine has trouble running a very important vendor application for his department. It stopped working some time ago, so he opened a ticket with IT. It was so confusing to them that it got to the point that they allowed him to run Microsoft's packet capture on his machine. He followed their instructions and captured what was going on. Despite the capture, they were unable to get it working, so out of frustration, he sent the capture to me. Even though our laptops are really locked down, as a dev I get admin on my machine, and I have MSDN, so I downloaded Microsoft's tool, looked over the capture, and discovered that the application was a client/server implementation ON THE LOCAL MACHINE. The front end was working over networking ports to talk to the back end, which then talked to the vendor's servers. I only recognized it because I had just undergone a lot of pain with my own development workflow: the company had started doing "default deny," and it was f*king with me in several ways. Ways that, as you say, I found workarounds for, that they probably aren't aware of. I told him what to tell IT, and how they could whitelist this application, but he's still having problems. Why am I being vague about the details here? It's not because of confidentiality, though that would apply. No, it's because my friend had been "working with IT" for over a year to get to this point, and THIS WAS TWO YEARS AGO, and I've forgotten a lot of the details. So, to say that it will take "3 extra rounds" is a bit of an understatement when IT starts doing "default deny," at least in legacy manufacturing companies.
- pif 11 months ago> Good IT security is invisible and unobtrusive for your users
I wish more IT administrators would use seat belts and airbags as models of security: they impose a tiny, minor annoyance in everyday use of your car, but their presence is gold when an accident happens.
Instead, most of them consider it normal to prevent you from working in order to hide their ignorance and lack of professionalism.
- thyrsus 11 months agoWise IT admins >know< they are ignorant and design for that. Before an application gets deployed, its requirements need to be learned - and the users rarely know what those requirements are, so cycles of information gathering and specification of permitted behavior ensue. You do not declare the application ready until that process converges, and the business knows and accepts the risks required to operate the application. Few end users know what a CVE is, much less have mitigated them.
I also note that seatbelts and airbags have undergone decades of engineering refinement; give that time to your admins, and your experience will be equally frictionless. Don't expect it to be done as soon as the download finishes.
- pif 11 months agoI think you are missing the main point of my analogy: seatbelts and airbags work on damage mitigation, while the kind of security that bothers users so much is the one focused on prevention.
Especially in IT, where lives are not at stake, having a good enough mitigation strategy would help enormously in relaxing on the prevention side.
- horsawlarway 11 months agoSo much this.
There is a default and unremovable contention between usability and security.
If you are "totally safe" then you are also "utterly useless". Period.
I really, really wish most security folks understood and respected the following idea:
"A ship in harbor is safe, but that is not what ships are built for".
Good security is a trade. Always. You must understand when and where you settle based on what you're trying to do.
- trey-jones 11 months agoReally well put and I always tell people this when talking about security. It's a sliding scale, and if you want your software to be "good" it can't be at either extreme.
- clwg 11 months agoGood IT security isn't invisible; it's there to prevent people from deploying poorly designed applications that require unfettered open outbound access to the internet. It's there to champion MFA and work with stakeholders from the start of the process to ensure security from the outset.
Mostly, it's there to identify and mitigate risks for the business. Have you considered that all your applications are considered a liability and new ones that deviate from the norm need to be dealt with on a case by case basis?
- RHSeeger 11 months agoBut it needs to be a balance. IT policy that costs tremendous amounts of time and resources just isn't viable. Decisions need to be made such that it's possible for people to do their work AND safety concerns are addressed; and _both_ sides need to compromise some.
As a simplified example
- You have a client database that has confidential information
- You have some employees that _must_ be able to interact with the data in that database
- You don't want random programs installed on a computer <that has access to that database> to leak the information
You could lock down every computer in the company to not allow application installation. This would likely cause all kinds of problems getting work done.
You could lock down access to the database so nobody has access to it. This also causes all kinds of problems.
You could lock down access to the database to a very specific set of computers and lock down _those_ computers so additional applications cannot be installed on them. This provides something close to a complete lockdown, but with far less impact on the rest of the work.
Sure it's stupidly simple example, but it just demonstrates the idea that compromises are necessary (for all participants)
- darby_nine 11 months agoI think the idea is that if you don't work with engineering or product, people will perceive you as friction rather than protection. Agreeing on processes to deploy new applications should satisfy both parties without restrictions being perceived as an unexpected problem.
- cmiles74 11 months agoI believe a "default deny" policy for security infrastructure around workstations is a good idea. When some new tool that uses a new port or whatever comes into use, the hassle of getting IT to change the security profile is far less expensive than leaking the contents of any particular workstation.
That being said, in my opinion, application servers and other public facing infrastructure should definitely be working under a "default deny" policy. I'm having trouble thinking of situations where this wouldn't be the case.
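The "default deny" idea reduces to a simple rule: traffic passes only if an explicit allow rule matches, otherwise it is dropped. A minimal sketch (all addresses, ports, and names here are invented for illustration):

```python
from ipaddress import ip_address, ip_network

# Hypothetical egress policy: everything is denied unless a rule allows it.
ALLOW_RULES = [
    (ip_network("10.0.5.0/24"), 5432),  # app subnet -> database port
    (ip_network("0.0.0.0/0"), 443),     # HTTPS allowed from anywhere
]

def is_allowed(src: str, port: int) -> bool:
    # Default deny: only traffic matching an allow rule gets through.
    return any(ip_address(src) in net and port == p for net, p in ALLOW_RULES)

assert is_allowed("10.0.5.17", 5432)     # app server reaching the DB
assert not is_allowed("10.0.9.3", 5432)  # workstation hits the default deny
assert is_allowed("10.0.9.3", 443)       # but the web still works
```

The flip side, raised elsewhere in the thread, is that once 443 is the only open door, everything learns to walk through it, malware included.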
- hulitu 11 months ago> When some new tool that uses a new port or whatever comes into use, the hassle of getting IT to change the security profile is far less expensive than leaking the contents of any particular workstation.
Many years ago, our company's billing system had a "Waiting for IT" status. They weren't happy.
Some things took _days_ to get fixed.
- eadmund 11 months agoCompany IT exists to serve the company. It should not cost more than it benefits.
There’s a balancing act. On the one hand, you don’t want a one-week turnaround to open a port; on the other you don’t want people running webservers on their company desktops with proprietary plans coincidentally sitting on them.
- causal 11 months agoThe problem is that security making things difficult results in employees resorting to workarounds like running rogue webservers to get their jobs done.
If IT security's KPIs are only things like "number of breaches" without any KPIs like "employee satisfaction", security will deteriorate.
- graemep 11 months agoThe biggest problem I can see with default deny is that it makes it far harder to get uptake for new protocols once you get "we only allow ports 80 and 443 through the firewall".
- account42 11 months agoWhich also makes the security benefit moot, as now all malware also knows to use ports 80 and 443.
- cjalmeida 11 months agoOne-week turnaround to open a port would be a dream in most large companies.
- kabouseng 11 months agoThat's because IT security reports to the C level, and their KPI's are concerned with security and vulnerabilities, but not the performance or effectiveness of the personnel.
So every time, if there is a choice, security will be prioritized at the cost of personnel performance / effectiveness. And this is how big corporations become less and less effective to the point where the average employee rarely has a productive day.
- 7bit 11 months ago> Meanwhile, the rest of the company is super annoyed because nothing ever works without three extra rounds with the IT department
This is such an uninformed and ignorant opinion.
1. Permission concepts don't always involve IT. In fact, they can be designed by IT without ever involving IT again - such is the case in our company.
2. The privacy department sleeps much better knowing that GDPR violations require an extra, deliberate action, rather than being the default. Management sleeps better knowing that confidential projects need to be explicitly shared, instead of someone forgetting to deny access for everybody first. Compliance sleeps better because of all of the above. And users know that data they create is private until explicitly shared.
3. Good IT security is not invisible. Entering a password is a visible step. Approving MFA requests is a visible step. Granting access to resources is a visible step. Teaching users how to identify spam and phishing is a visible step. Or teaching them about good passwords.
- munchausen42 11 months agohm I don't think that passwords are an example of good IT security. There are much better options like physical tokens, biometric features, passkeys etc. that are less obtrusive and don't require the users to follow certain learned rules and behaviors.
If the security concept is based on educating and teaching people how to behave it's prone to fail anyway, as there will always be that one uninformed and ignorant person like me that doesn't get the message. As soon as there is one big gaping hole in the wall, the whole fortress becomes useless (Case in point: haveibeenpwned.com) Also, good luck teaching everyone in the company how to identify a personalized phishing message crafted by ChatGPT.
For the other two arguments: I don't see how "But we solved it in my company" and "Some other departments also have safety/security-related primary KPIs" justifies that IT security should be allowed to just air-gap the company if it serves these goals.
- delusional 11 months ago> Meanwhile, the rest of the company is super annoyed because nothing ever works
Who even cares if they're annoyed. The IT security gets to sleep at night, but the entire corporation might be operating illegally because they can't file the important compliance report because somebody fiddled with the firewall rules again.
There is so much more to enterprise security than IT security. Sometimes you don't open a port because "it's the right thing to do" as identified by some process. Sometimes you do it because the alternative RIGHT NOW is failing an audit.
- spogbiper 11 months ago> Good IT security is invisible and unobtrusive for your users, like magic
Why is this a standard for "good" IT security but not any other security domain? Would you say good airport security must be invisible and magic? Are you troubled by having to use a keycard or fingerprint to enter secure areas of a building?
Security is always a balance between usability and safety. Expecting the user to be completely unaffected through some magic is unrealistic.
- graemep 11 months ago> Would you say good airport security must be invisible and magic?
Very possibly. IMO a lot of the intrusive airport security is security theatre. Things like intelligence do a lot more. Other things we do not notice too, I suspect.
The thing about intrusive security is that attackers know about it and can plan around it.
> Are you troubled by having to use a keycard or fingerprint to enter secure areas of a building?
No, but they are simple and easy to use, and have rarely stopped me from doing anything I needed to.
> Security is always a balance between usability and safety. Expecting the user to be completely unaffected through some magic is unrealistic.
Agree entirely.
- Jcowell 11 months agoI never quite understood the security theater thing. Isn't the fact that at each airport you will be scanned and possibly frisked a deterrent? You can't measure what doesn't occur, so the only way to know if it works is to observe a timeline where it doesn't exist.
- pc86 11 months agoIf you have two security models that provide identical actual security, and one of them is invisible to the user and the other one is outright user-hostile like the TSA, yes of course the invisible one is better.
- w10-1 11 months agoIt is the standard for all security domains - police, army, etc.
I would reword it to say that security should work for the oblivious user, and we should not depend on good user behavior (or fail to defend against malicious or negligent behavior).
I would still say the ideal is for the security interface to prevent problems - like having doors so we don't fall out of cars, or ABS to correct brake inputs.
- lencastre 11 months agoThat’s what I did with my firewall: all outbound traffic is default deny, then as the screaming began, I started opening the necessary ports to designated IPs here and there. Now the screaming is not so frequent. A minor hassle… the tricky one is DNS over HTTPS… that is a whack-a-mole if I ever saw one.
- michaelcampbell 11 months ago"If you're able to do your job, security/it/infosec/etc isn't doing theirs." Perhaps necessary at times, but true all too often.
- cowboylowrez 11 months agothe article is great, but reading some of the anti security comments are really triggering for me.
- manvillej 11 months agogood IT security is invisible, allows me to do everything I need, protects us from every threat, costs nothing, and scales to every possible technology the business buys. /s
- trey-jones 11 months agoMost security-oriented articles are written by extremely security-minded people. These people in my experience ignore the difficulties that a purely security-oriented approach imposes on users of the secure software. I always present security as a sliding scale. On one end "Secure", and on the other "Convenient". Purely Secure software design will almost never have any users (because it's too inconvenient), and purely Convenient software design will ultimately end up the same (because it's not secure enough).
That said, this is a good read for the most part. I heavily, heavily disagree with the notion that trying to write exploits or learn to exploit certain systems as a security professional is dumb (under "Hacking is C00L"). I learned more about security by studying vulnerabilities and exploits and trying to implement my own (white hat!) than I ever did by "studying secure design". As they say, "It takes one to know one." or something.
- delusional 11 months ago> These people in my experience ignore the difficulties that a purely security-oriented approach imposes on users
Your scale analogy is probably more approachable, but I'm a little more combative. I usually start out my arguments with security weenies with something along the lines of "the most secure thing would be to close up shop tomorrow, but we're probably not going to get sign-off on that." After we've had a little chuckle at that, we can discuss the compromise we're going to make.
I've also had some luck with changing the default position on them by asserting that if they tell me no, then I'll just do it without them, and it'll have whatever security I happen to give it. I can always find a way, but they're welcome to guide me to something secure. I try to avoid that though, because it tends to create animosity.
- billy99k 11 months agoThis is a mostly terrible 19-year old list.
Here is an example:
"Your software and systems should be secure by design and should have been designed with flaw-handling in mind"
Translation: If we lived in a perfect world, everything would be secure from the start.
This will never happen, so we need to utilize the find and patch technique, which has worked well for the companies that actually patch the vulnerabilities that were found and learn from their mistakes for future coding practices.
The other problem is that most systems are not static. It's not release a secure system and never update it again. Most applications/systems are updated frequently, which means new vulnerabilities will be introduced.
- duskwuff 11 months agoSteelmanning for a moment: I think what the author is trying to address is overly targeted "patches" to security vulnerabilities which fail to address the faulty design practices which led to the vulnerabilities. An example might be "fixing" cross-site scripting vulnerabilities in a web application by blocking requests containing keywords like "script" or "onclick".
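That kind of keyword "patch" can be shown failing in a few lines. This is a deliberately naive sketch (function and variable names are mine): the blocklist misses a payload using a different tag and event handler, while output encoding neutralizes the whole class of bug.

```python
import html

def keyword_patch(request_body: str) -> bool:
    """The 'penetrate and patch' fix: reject requests containing
    known-bad keywords. Deliberately naive."""
    return any(w in request_body.lower() for w in ("script", "onclick"))

# Trivially bypassed: a different tag and a different event handler.
bypass = '<img src=x onerror=alert(1)>'
assert not keyword_patch(bypass)  # the blocklist never sees it coming

# The design-level fix: encode untrusted data when writing it into HTML.
assert html.escape(bypass) == '&lt;img src=x onerror=alert(1)&gt;'
```

The blocklist has to enumerate every bad input; the encoder only has to get one rule right at the output boundary.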
- CookieCrisp 11 months agoI agree, an example that if you say something dumb with enough confidence a lot of people will think it's smart.
- dartos 11 months agoThe CTO at my last company was like this.
In the same breath he talked about how he wanted to build this “pristine” system with safety and fault tolerance as priority and how he wanted to use raw pointers to shared memory to communicate between processes which both use multiple threads to read/write to this block of shared memory because he didn’t like how chatty message queues are.
He also didn’t want to use a ring buffer since he saw it as a kind of lock
- SoftTalker 11 months agoThat sounds pretty deep in the weeds for a CTO. Was it a small company?
- marshray 11 months agoI've had the CTO who was also a frustrated lock-free data structure kernel driver developer too.
Fun times.
- msla 11 months agoIt's also outright stupid. For example, from the section about hacking:
> "Timid people could become criminals."
This fully misunderstands hacking, criminality, and human nature, in that criminals go where the money is, you don't need to be a Big Burly Wrestler to point a gun at someone and get all of their money at the nearest ATM, and you don't need to be Snerd The Nerd to Know Computers. It's a mix of idiocy straight out of the stupidest 1980s comedy films.
Also:
> "Remote computing freed criminals from the historic requirement of proximity to their crimes."
This is so blatantly stupid it barely bears refutation. What does this idiot think mail enables? We have Spanish Prisoner scams going back centuries, and that's the same scam as the one the 419 mugus are running.
Plus:
> Anonymity and freedom from personal victim confrontation increased the emotional ease of crime, i.e., the victim was only an inanimate computer, not a real person or enterprise.
Yeah, criminals will defraud you (or, you know, beat the shit out of you and threaten to kill you if you don't empty your bank accounts) just as easily if they can see your great, big round face. It doesn't matter. They're criminals.
Finally, this:
> Your software and systems should be secure by design and should have been designed with flaw-handling in mind.
"Just do it completely right the first time, idiot!" fails to be an actionable plan.
- sulandor 11 months agothough, frequent updates mainly serve to hide unfit engineering practices and encourage unfit products.
the world is not static, but most things have patterns that need to be identified and handled, which takes time that you don't have if you sprint from quick-fix to quick-fix of your mvp.
- chefandy 11 months agoThere's definitely a few worthwhile nuggets in there, but at least half of this reads like a cringey tirade you'd overhear at the tail end of the company holiday party from the toasted new helpdesk intern. I'm surprised to see it from a subject matter expert, that he kept it on his website for 20 years, and also that it was so heavily upvoted.
- notagainlol 11 months agoI really didn't think "write secure software" would be controversial, but here we are. How is the nihilist defeatism going? I'll get back to you after I clean up the fallout from having my data leaked yet again this week.
- Pannoniae 11 months agoTranslation: If we didn't just pile on dependencies upon dependencies, everything would be secure from the start.
Come on. The piss-poor security situation might have something to do with the fact that the vast majority of software is built upon dependencies the authors didn't even look at...
Making quality software seems to be a lost art now.
- worik 11 months ago> Making quality software seems to be a lost art now
No, it is not. Lost, that is.
Not utilised enough....
- ang_cire 11 months agoNo one is objecting to writing secure software, but saying "just do it" is big "draw the rest of the owl" energy. It's hard to do even for small-medium programs, nevermind enterprise-scale ones with 100+ different components all interacting with each other.
- ongy 11 months agoIt's not controversial to write secure software.
Saying that it's what should be done is useless, since it's not instructive.
Don't fix implementation issues because that just papers over design issues? Great. Now we just need a team that never makes mistakes in design. And then a language that doesn't allow security issues outside business logic.
- izacus 11 months agoTrying to do security wrong often leads to much worse outcomes for data leakage than not doing it optimally. It's counter intuitive, but a lot of things in security are such.
- worik 11 months ago> This is a mostly terrible 19-year old list.
This is an excellent list that is two decades over due, for some
> software and systems should be secure by design
That should be obvious. But nobody gets rich except by adding features, so this needs to be said over and over again
> This will never happen, so we need to utilize the find and patch technique,
Oh my giddy GAD! It is up to *us* to make this happen. Us. The find and patch technique does not work. Secure by design does work. The article had some good examples
> Most applications/systems are updated frequently, which means new vulnerabilities will be introduced.
That is only true when we are not allowed to do our jobs. When we are able to act like responsible professionals we can build secure software.
The flaw in the professional approach is how to get over the fact that features sell now, for cash, and building securely adds (a small amount of) cost for no visual benefit
I do not have a magic wand for that one. But we could look to the practices of civil engineers. Bridges do collapse, but they are not as unreliable as software
- ang_cire 11 months ago> The flaw in the professional approach is how to get over the fact that features sell now, for cash, and building securely adds (a small amount of) cost for no visual benefit
Because Capitalism means management and shareholders only care about stuff that does sell now, for cash.
> But we could look to the practices of civil engineers
If bridge-building projects were expected to produce profit, and indeed increasing profit over time, with civil engineers making new additions to the bridges to make them more exciting and profitable, they'd be in the same boat we are.
- teleforce 11 months agoThis article is quite old and has been submitted probably every year since it was published, with past submissions filling multiple pages.
For modern version and systematic treatment of the subject check out this book by Spafford:
Cybersecurity Myths and Misconceptions: Avoiding the Hazards and Pitfalls that Derail:
https://www.pearson.com/en-us/subject-catalog/p/cybersecurit...
- dale_glass 11 months ago"We're Not a Target" isn't a minor dumb. It's the standpoint of every non-technical person I've ever met. "All I do with my computer is to read cooking recipes and upload cat photos. Who'd want to break in? I'm boring."
The best way I found to change their mind is to make a car analogy. Who'd want to steal your car? Any criminal with a use for it. Why? Because any car is valuable in itself. It can be sold for money. It can be used as a getaway vehicle. It can be used to crash into a jewelry shop. It can be used for a joy ride. It can be used to transport drugs. It can be used to kill somebody.
A criminal stealing a car isn't hoping that there are Pentagon secrets in the glove box. They have a use for the car itself. In the same way, somebody breaking into your computer has uses for the computer itself. They won't say no to finding something valuable, but it's by no means a requirement.
- jampekka 11 months agoA major dumb is that security people think breaking in is the end of the world. For vast majority of users it's not, and it's a balance between usability and security.
I know it's rather easy to break through a glass window, but I still prefer to see outside. I know I could faff with multiple locks for my bike, but I rather accept some risk for it to be stolen for the convenience.
If there's something I really don't want to risk stolen, I can take it into a bank's vault. But I don't want to live in a vault.
- dspillett 11 months ago> I know it's rather easy to break through a glass window, but I still prefer to see outside.
Bad analogy. It is not that easy to break modern multi-layer glazing, and it is also a lot easier to get away with breaking into a computer or account than breaking a window, undetected, until it is time to let the user know (for a ransom attempt or other such). Locking your doors is a much better analogy. You don't leave them unlocked in case you forget your keys do you? That would be a much better analogy for choosing convenience over security in computing.
> I know I could faff with multiple locks for my bike, but I rather accept some risk for it to be stolen for the convenience.
Someone breaking into a computer or account isn't the same as them taking a single object. It is more akin to them getting into your home or office, or on a smaller scale a briefcase. They don't take an object, but that can collect information that will help in future phishing attacks against you and people you care about.
The intruder could also operate from the hacked resource to continue their attack on the wider Internet.
> A major dumb is that security people think breaking in is the end of the world.
The major dumb of thinking like this is that breaking in is often not the end of anything, it can be the start or continuation of a larger problem. Security people know this and state it all the time, but others often don't listen.
- jampekka 11 months ago> The major dumb of thinking like this is that breaking in is often not the end of anything, it can be the start or continuation of a larger problem. Security people know this and state it all the time, but others often don't listen.
This is exactly the counter productive attitude I criticized. I told you why others don't often listen, but you don't seem to listen to that.
Also, people do listen. They just don't agree.
- dale_glass 11 months ago> A major dumb is that security people think breaking in is the end of the world. For vast majority of users it's not, and it's a balance between usability and security.
End of the world? No. But it's really, really bad.
When you get your stolen car back, problem over.
But your broken into system should in most cases be considered forever tainted until fully reinstalled. You can't enumerate badness. That the antivirus got rid of one thing doesn't mean they didn't sneak in something it didn't find. You could be still a DoS node, a CSAM distributor, or a spam sender.
- jampekka 11 months ago> But your broken into system should in most cases be considered forever tainted until fully reinstalled.
Reinstalling an OS is not really, really bad. It's an inconvenience. Less so than e.g. having to get new cards after a lost wallet or getting a new car.
Security people don't seem to really assess what the actual consequences of breaches are. Just that they are "really really bad" and have to be protected against at all costs. Often the cost is literally an unusable system.
- SoftTalker 11 months ago> When you get your stolen car back, problem over.
> But your broken into system should in most cases be considered forever tainted
Actually this is exactly how stolen cars work. A stolen car that is recovered will have a branded title from then on (at least it will if an insurance company wrote it off).
- kazinator 11 months ago> When you get your stolen car back, problem over.
Not if it contains computers; either the original ones it had before being stolen, or some new, gifted ones you don't know about.
- smokel 11 months agoI'm afraid this is a false dichotomy.
People can use HTTPS now instead of HTTP, without degrading usability. This has taken a lot of people a lot of work, but everyone gets to enjoy better security. No need to lock and unlock every REST call as if it were a bicycle.
Also, a hacker will replace the broken glass within milliseconds, and you won't find out it was ever broken.
- izacus 11 months agoYou're ignoring that HTTPS took decades to become the default thanks to the massive work of a lot of security engineers who UNDERSTOOD that the work and process around certificates was too onerous and hard for users. It took them literally decades of work to get HTTPS cert issuance to such a low-cost process that everyone does it. It *really* cannot be overstated how much important work that was.
Meanwhile, other security zealots were just happy to scream at users for not sending 20 forms and thousands of dollars to cert authorities.
Usability matters - and the author of this original rant seems to be one of those security people who don't understand why the systems they're guarding are useful, used and how are they used. That's the core security cancer still in the wild - security experts not understanding just how transparent the security has to be and that it's sometimes ok to have a less secure system if that means users won't do something worse.
- jampekka 11 months agoIt shouldn't be a dichotomy, but security zealots not caring about usability or putting the risks in context makes it such.
HTTPS by default is good, especially after Let's Encrypt. Before that it was not worth the hassle/cost most of the time.
E.g. forced MFA everywhere is not good.
> Also, a hacker will replace the broken glass within milliseconds, and you won't find out it was ever broken.
This is very rare in practice for normal users. Again, risks in context please.
- kazinator 11 months ago> but I still prefer to see outside
Steel bars?
- dijit 11 months agoI think this is why the analogy holds up.
Some situations definitely call for steel bars, some for having no windows at all.
But for you and me, windows are fine, because the value of being inside my apartment is not the same value as being in a jewellers or in a building with good sight-lines to something even more valuable -- and the value of having unrestricted windows is high for us.
- gmuslera 11 months agoThe act of breaking in is not even the end of it. It is not a broken glass that you clearly see and just replace to forget about it. It may be the start of a process, and you don’t know what will happen down the road. But it won’t be something limited to the affected computer or phone.
- strangecharm2 11 months ago[flagged]
- Arch-TK 11 months ago> #3) Penetrate and Patch
This is one of the reasons why I feel my job in security is so unfulfilling.
Almost nobody I work with really cares about getting it right to begin with, designing comprehensive test suites to fuzz or outright prove that things are secure, using designs which rule out the possibility of error.
You get asked: please look at this gigantic piece of software, maybe you get the source code, maybe it's written in Java or C#. Either way, you look at <1% of it, you either find something seriously wrong or you don't[0], you report your findings, maybe the vendor fixes it. Or the vendor doesn't care and the business soliciting the test after purchasing the software from the vendor just accepts the risk, maybe puts in a tissue paper mitigation.
This approach seems so pointless that it's difficult to bother sometimes.
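The "fuzz or outright prove" alternative doesn't have to be heavyweight. Here's a stdlib-only round-trip property test over a toy codec (all names mine, purely a stand-in); real work would use a coverage-guided fuzzer (AFL, libFuzzer) or property-based testing (Hypothesis), but even this shape catches whole classes of bugs that hand-picked test cases miss:

```python
import random
import string

def encode(s: str) -> str:
    # Toy codec under test: escape backslashes and commas.
    return s.replace("\\", "\\\\").replace(",", "\\,")

def decode(s: str) -> str:
    out, i = [], 0
    while i < len(s):
        if s[i] == "\\":
            i += 1  # skip the escape character, take the next char literally
        out.append(s[i])
        i += 1
    return "".join(out)

# Property: decode(encode(s)) == s for *any* s, not just hand-picked cases.
random.seed(0)
for _ in range(10_000):
    s = "".join(random.choices(string.printable, k=random.randrange(20)))
    assert decode(encode(s)) == s, repr(s)
```

The point is the property, not the inputs: instead of asserting a few known answers, you assert an invariant and throw thousands of adversarial inputs at it.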
edit:
> #4) Hacking is Cool
I think it's good to split unlawful access from security consultancy.
You don't learn nearly as much about how to secure a system if you work solely from the point of view of an engineer designing a system to be secure. You can get much better insight into how to design a secure system if you try to break in. Thinking like a bad actor, learning how exploitation works, etc. These are all things which strictly help.
[0]: It's crazy how often I find bare PKCS#7 padded AES in CBC mode. Bonus points if you either use a "passphrase" directly, or hash it with some bare hash algorithm before using various lengths of the hash for both the key and IV. Extra bonus points if you hard code a "default" password/key and then never override this in the codebase.
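The footnote's anti-pattern is easy to demonstrate. A sketch (stdlib only, names mine) of the hash-the-passphrase-into-key-and-IV construction, next to a sounder derivation with a random salt and a slow KDF:

```python
import hashlib
import os

def weak_key_iv(passphrase: str):
    # Anti-pattern from the footnote: hash the passphrase once and slice
    # the digest into both the AES key and the IV. With no salt and no
    # per-message randomness, the same passphrase always yields the same
    # key AND IV, so identical plaintexts encrypt to identical ciphertexts.
    d = hashlib.sha256(passphrase.encode()).digest()
    return d[:16], d[16:]

assert weak_key_iv("hunter2") == weak_key_iv("hunter2")  # fully deterministic

# Sounder sketch: random salt + slow KDF for the key, fresh random IV per
# message (or better, an AEAD mode like AES-GCM instead of bare CBC + PKCS#7).
salt = os.urandom(16)
key = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 200_000)
iv = os.urandom(16)  # never derived from the passphrase
```

The determinism is the whole problem: an eavesdropper can detect repeated messages without breaking AES at all.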
- regularfry 11 months agoIt's very easy for a big organisation with a leadership that needs to show it is doing something to pass down a mandate that relies on throwing money at the problem for little tangible benefit. "Everything needs to be pen tested" is the sort of thing that sounds like the right thing to do if you don't understand the problem space; it's exactly as wrong as using lines of code as a productivity metric.
All it does is to say, very expensively, "there are no obvious vulnerabilities". If it even manages that. What you want to say is "there are obviously no vulnerabilities" but if you're having to strap that onto a pre-existing bucket of bugs then it's a complete rebuild. And nobody has time for that when there's an angry exec breathing down your neck asking why the product keeps getting hacked.
The fundamental problem is the feature factory model of software development. Treating software design and engineering as a cost to be minimised means that anything in the way of getting New Shiny Feature out of the door is Bad. And that approach, where you separate product design from software implementation, where you control what happens in the organisation with budgetary controls and the software delivery organisation is treated as subordinate because it is framed as pure cost, drives the behaviour you see.
- cwbrandsma 11 months agoPenetrate and Patch: because if it doesn’t work the first time then throw everything away, fire the developers, hire new ones, and start over completely.
- oschvr 11 months ago> "If you're a security practitioner, teaching yourself how to hack is also part of the "Hacking is Cool" dumb idea."
lol'd at the irony of the fact that this was posted here, Hacker News...
- smokel 11 months agoAt the risk of stating the obvious, the word "hacker" has (at least) two distinct meanings. The article talks about people who try to break into systems, and Hacker News is about people who like to hack away at their keyboard to program interesting things.
The world might have been a better place if we used the terms "cracker" and "tinkerer" instead.
- jrm4 11 months agoDoubt it; it's utterly naive and wishful thinking to think that those two things are easily separable; it's never as simple as Good Guys wear White, Bad Guys wear Black, which is the level of intelligence this idea operates at.
- smokel 11 months agoIt might be naive, but I'm not the only one using this distinction. See the "Definitions" section of https://en.wikipedia.org/wiki/Hacker
- nottorp 11 months ago> #4) Hacking is Cool
Hacking is cool. Why the security theater industry has appropriated "hacking" to mean accessing other people's systems without authorization, I don't know.
- kstrauser 11 months agoFrom Britannica: https://www.britannica.com/topic/hacker
> Indeed, the first recorded use of the word hacker in print appeared in a 1963 article in MIT’s The Tech detailing how hackers managed to illegally access the university’s telephone network.
I get what you’re saying, but I think we’re tilting at windmills. If “hacker” has carried a connotation of “breaking in” for 61 years now, then the descriptivist answer is to let it be.
- nottorp 11 months agoOhh here it is:
- Kamq 11 months ago> Why the security theater industry has appropriated "hacking" to mean accessing other people's systems without authorization, I don't know.
A lot of early remote access was done by hackers. Same with exploiting vulnerabilities.
One of my favorites is the Robin Hood worm: https://users.cs.utah.edu/~elb/folklore/xerox.txt
TL;DR: Engineers from Motorola exploited a vulnerability to illustrate it, and they did so in a humorous way. Within the tribe of hackers, this is pretty normal; the only difference between that and stealing everything once the vulnerability has been exploited is intent.
Normies only hear about the ones where people steal things. They don't care about the funny kind.
- supertrope 11 months agoIf owners are able to tweak or upgrade the machine themselves, it will hurt sales of next year’s model. If “hacking” helped corporations make money they would spend billions promoting it. The old meaning of hacking has been replaced with “maker.”
- dingody 11 months agoI don’t entirely agree with the author’s viewpoint on “Hacking is Cool.” There was a time when I thought similarly, believing that “finding some system vulnerabilities is just like helping those system programmers find bugs, and I can never be better than those programmers.” However, I gradually rejected this idea. The appeal of cybersecurity lies in its “breadth” rather than its “depth.” In a specific area, a hacker might never be as proficient as a programmer, but hackers often possess a broad knowledge base across various fields.
A web security researcher might simultaneously discover vulnerabilities in “PHP, JSP, Servlet, ASP.NET, IIS, and Tomcat.” A binary researcher might have knowledge of “Windows, Android, and iOS.” A network protocol researcher might be well-versed in “TCP/IP, HTTP, and FTP” protocols. More likely, a hacker often masters all these basic knowledge areas.
So, what I want to say is that the key is not the technology itself, nor the nitty-gritty of offense and defense in some vulnerabilities, but rather the ability to use the wide range of knowledge and the hacker’s reverse thinking to challenge seemingly sound systems. This is what our society needs and it is immensely enjoyable.
- esjeon 11 months ago> Educating Users
isn't dumb, because "users" have proven to be the weakest link in the whole security chain. Users must be made aware of workplace security, just as they are trained in workplace safety.
Also, there's no security to deal with if the system is unusable by the users. The trade-off between usability and security is simply unsolvable, and education is a patch for that problem.
- iandanforth 11 months agoAnd yet the consequence of letting people like this run your security org is that it takes a JIRA ticket and multiple days, weeks, or forever to be able to install 'unapproved' software on your laptop.
Then if you've got the software you need to do your job you're stuck in endless cycles of "pause and think" trying to create the mythical "secure by design" software which does not exist. And then you get hacked anyway because someone got an email (with no attachments) telling them to call the CISO right away, who then helpfully walks them through a security "upgrade" on their machine.
Caveats: Yes there is a balance and log anomaly detection followed by actual human inspection is a good idea!
- tonnydourado 11 months agoI've seen "Penetrate and Patch" play out a lot on software development in general. When a new requirement shows up, or technical debt starts to grow, or performance issues, the first instinct of a lot of people is to try and find the smallest, easiest possible change to achieve the immediate goal, and just move to the next user story.
That's not a bad instinct by itself, but when it's your only approach, it leads to a snowball of problems. Sometimes you have to question the assumptions, to take a step back and try to redesign things, or the new addition just won't fit, and the system just becomes wonkier and wonkier.
- woodruffw 11 months agoSome of this has aged pretty poorly -- "hacking is cool" has, in fact, largely worked out for the US's security community.
- hot_gril 11 months agoYeah, we're not gonna convince people in other countries that hacking is uncool. Better to have the advantage.
- gnabgib 11 months ago(2015) Previous discussions:
2015 (114 points, 56 comments) https://news.ycombinator.com/item?id=8827985
2023 (265 points, 202 comments) https://news.ycombinator.com/item?id=34513806
- philipwhiuk 11 months agoFrom the 2015 submission:
> Perhaps another 10 years from now, rogue AI will be the primary opponent, making the pro hackers of today look like the script kiddies.
Step on it OpenAI, you've only got 1 year left ;)
- AtlasBarfed 11 months ago#1) Great, least privilege, oh wait, HOW LONG DOES IT TAKE TO OPEN A PORT? HOW MANY APPROVALS? HOW MANY FORMS?
Least-privilege advocates never talk about how the security people in charge of granting permissions back do their jobs at an absolute sloth pace.
#2) Ok, what happens when goodness becomes badness, via exploits or internal attacks? How do you know when a good guy becomes corrupted without some enumeration of the behavior of infections?
#3) Is he arguing to NOT patch?
#4) Hacking will continue to be cool as long as modern corporations and governments are oppressive, controlling, invasive, and exploitative. It's why Hollywood loves the Mafia.
#5) ok, correct, people are hard to train at things they really don't care about. But "educating users", if you squint is "organizational compliance". You know who LOVES compliance checklists? Security folks.
#6) Apparently, there are good solutions in corporate IT, and all new ones are bad.
I'll state it once again: my #1 recommendation to "security" people is PROVIDE SOLUTIONS. Parachuting in with compliance checklists is stupid. PROVIDE THE SOLUTION.
But security people don't want to provide solutions, because they are then REALLY screwed when inevitably the provided solution gets hacked. It's way better to have some endless checklist and shrug if the "other" engineers mess up the security aspects.
And by PROVIDE SOLUTIONS I don't mean "offer the one solution for problem x (keystore, password management), and say fuck you if someone has a legitimate issue with the system you picked". If you can't provide solutions to various needs of people in the org, you are failing.
Corporate Security people don't want to ENGINEER things, again they just want to make compliance powerpoints to C-suite execs and hang out in their offices.
- dasil003 11 months agoI was 5 years into a professional software career when this was written, at this point I suspect I'm about the age of the author at the time of its writing. It's fascinating to read this now and recognize the wisdom coming from experience honed in the 90s and the explosion of the internet, but also the cultural gap from the web/mobile generation, and how experience doesn't always translate to new contexts.
For instance, the first bad idea, Default Permit, is clearly bad in the realm of networking. I might quibble a bit and suggest Default Permit isn't so much an idea as the natural state of affairs when one invents computer networking. But clearly Default Deny was a very good idea, critical to the internet's growth. It makes a lot of sense in the context of global networking, but it's not quite as powerful in other security contexts. For instance, SELinux has never really taken off, largely because it's a colossal pain in the ass and the threat models don't typically justify the overhead.
The other bad idea that stands out is "Action is Better Than Inaction". I think this one shows a very strong big company / enterprise bias more than anything else—of course when you are big you have more to lose and should value prudence. And yeah, good security in general is not based on shiny things, so I don't totally fault the author. That said though, there's a reason that modern software companies tout principles like "bias for action" or "move fast and break things"—because software is malleable and as the entire world population shifted to carrying a smartphone on their person at all times, there was a huge land grab opportunity that was won by those who could move quickly enough to capitalize on it. Granted, this created a lot of security risk and problems along the way, but in that type of environment, adopting a "wait-and-see" attitude can also be an existential threat to a company. At the end of the day though, I don't think there's any rule of thumb for whether action vs inaction is better, each decision must be made in context, and security is only one consideration of any given choice.
- worik 11 months agoThis is true in a very small part of our business
Most of us are not involved in a "land grab"; in that metaphorical world most of us are past "homesteading" and are paving roads and filling in infrastructure
Even small companies should take care when building infrastructure
"Move fast and break things" was, and is, an irresponsible idea. It made Zuck rich, but that same hubris and arrogance is bringing down the things he created
- dvfjsdhgfv 11 months ago> The cure for "Enumerating Badness" is, of course, "Enumerating Goodness." Amazingly, there is virtually no support in operating systems for such software-level controls.
Really? SELinux and AppArmor have existed since, I don't know, the late nineties? The problem is not that these controls don't exist, it's that they make using your system much, much harder. You will probably spend some time "teaching" them first, then actually enable them, and still fight with them every time you install something or make other changes to your system.
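For illustration, "enumerating goodness" at the software level amounts to an allowlist profile; this sketch uses AppArmor-style syntax with hypothetical paths, not a profile from any real distribution:

```text
# Hypothetical AppArmor profile: everything not listed is denied.
/usr/bin/myapp {
  #include <abstractions/base>

  /usr/bin/myapp        mr,   # read + memory-map its own binary
  /etc/myapp/*.conf     r,    # read its config
  /var/lib/myapp/**     rw,   # read/write its state directory
  network inet stream,        # TCP only; no raw sockets, no UDP
  # No capability lines: no setuid, no ptrace, no mount.
}
```

The "teaching" phase the comment describes is exactly the work of discovering which lines a real application needs (AppArmor's complain mode logs would-be denials for this purpose).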
- ivlad 11 months ago> You will probably spent some time "teaching" them first
SELinux has worked well out of the box in RHEL and its derivatives for many years. Your comment shows you did not actually try it.
> fight with them every time you install something or make other changes in your system
If you install anything that does not take permissions into account, it will break. Try running nginx with nginx.conf permissions set to 000; you will not be surprised that it does not work.
- dvfjsdhgfv 11 months agoI'm glad SELinux works better than in the past and at the same time I'm sorry it didn't from the start as many people were frustrated by it at that time (e.g. [0]). On the other hand, it looks like some people still get upset by it[1].
- jeffrallen 11 months agomjr (as I always knew him from mailing lists and whatnot) seems to have given up on security and enjoys forging metal instead now.
> Somewhere in there, security became a suppurating chest wound and put us all on the treadmill of infinite patches and massive downloads. I fought in those trenches for 30 years – as often against my customers (“no you should not put a fork in a light socket. Oh, ow, that looks painful. Here, let me put some boo boo cream on it so you can do it again as soon as I leave.”) as for them. It was interesting and lucrative and I hope I helped a little bit, but I’m afraid I accomplished relatively nothing.
Smart guy, hope he enjoys his retirement.
- ricktdotorg 11 months agoit's 2024! if you run your own infrastructure in your own DC and your defaults are NOT:
- to heavily VLAN via load type/department/importance/whatever your org prefers
- default denying everything except common infra like DNS/NTP/maybe ICMP/maybe proxy arp/etc between those VLANs
- every proto/port hole poked through is a security-reviewed request
then you are doing it wrong.
"ahh but these ACL request reviews take too long and slow down our devs" -- fix the review process, it can be done.
spend the time on speeding up the security review, not on increasing your infrastructure's attack surface.
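As a sketch of that default-deny posture between VLANs, in nftables-style syntax (addresses and VLAN layout hypothetical):

```text
# Hypothetical nftables sketch: default-deny forwarding between VLANs,
# with only common infrastructure explicitly allowed.
table inet vlan_filter {
  chain forward {
    type filter hook forward priority 0; policy drop;

    ct state established,related accept
    ip daddr 10.0.0.53 udp dport 53 accept    # DNS
    ip daddr 10.0.0.123 udp dport 123 accept  # NTP
    icmp type echo-request accept             # ping, if you allow it
    # Every further proto/port hole is its own security-reviewed rule.
  }
}
```

The `policy drop;` on the forward hook is what makes this Default Deny: anything not explicitly enumerated above it never crosses VLAN boundaries.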
- voidUpdate 11 months agoI'm not sure if I'm completely misunderstanding #4 or if it's wrong. Pentesting an application is absolutely a good idea, and it's not about "teaching yourself a bunch of exploits and how to use them" in the same way programming isn't just "learning a bunch of sorting algorithms and how to use them". It's about knowing why an exploit works, and how it can be adapted to attack something else and find a new vulnerability, and then it goes back to the programming side of working out why that vulnerability works and how to fix it
- bawolff 11 months ago> If you're a security practitioner, teaching yourself how to hack is also part of the "Hacking is Cool" dumb idea. Think about it for a couple of minutes: teaching yourself a bunch of exploits and how to use them means you're investing your time in learning a bunch of tools and techniques that are going to go stale as soon as everyone has patched that particular hole.
I would strongly disagree with that.
You can't defend against something you don't understand.
You definitely shouldn't spend time learning some script-kiddie tool, that is pointless. You should understand how exploits work from first principles. The principles mostly won't change or at least not very fast, and you need to understand how they work to make systems resistant to them.
One of the worst ideas in computer security in my mind is cargo culting - where people just mindlessly repeat practices thinking it will improve security. Sometimes they don't work because they have been taken out of their original context. Other times they never made sense in the first place. Understanding how exploits work stops this.
- strangecharm2 11 months agoTrue security can only come from understanding how your system works. Otherwise, you're just inventing a religion, and doing everything on faith. "We're fine, we update our dependencies." Except you have no idea what's in those dependencies, or how they work. This is, apparently, a controversial opinion now.
- mrbluecoat 11 months ago> sometime around 1992 the amount of Badness in the Internet began to vastly outweigh the amount of Goodness
I'd be interested in seeing a source for this. Feels a bit like anecdotal hyperbole.
- julesallen 11 months agoIt's a little anecdotal as nobody was really writing history down at that point but it feels about the right timing.
The first time the FTP server I ran got broken into was about then; it was a shock, as why would some a-hole want to do that? I wasn't aware until one of my users tipped me off a couple of days after the breach. At least they were sharing warez rather than porn; with the bandwidth back then, downloading even crappy postage-stamp 8-bit color videos would take you hours.
When this happened I built the company's first router a few days later and put everything behind it. Before that all the machines that needed Internet access would turn on TCP/IP and we'd give them a static IP from the public IP range we'd secured. Our pipe was only 56k so if you needed it you had to have a really good reason. No firewall on the machines. Crazy, right?
Very different times for sure.
- motohagiography 11 months agowhat has changed since 2005 is that these ideas are no longer dumb, but describe the factors of the dynamic security teams have to manage now. previously, security was an engineering and gating problem when systems were less interdependent and complex, but now it's a policing and management problem where there is a level of pervasive risk that you find ways to extract value from.
I would be interested in whether he still thinks these are true, as if you are doing security today, you are doing exactly these things.
- educating users: absolutely the most effective and highest return tool available.
- default permit: there are almost no problems you can't grow your way out of. there are zero startups, or even companies, that have been killed by breaches.
- enumerating badness: when managing a dynamic, you need measurements. there is never zero badness, that's what makes it valuable. the change in badness over time is a proxy for the performance of your organization.
- penetrate and patch: having that talent on your team yields irreplaceable experience. the only reason most programmers know about stacks and heaps today is from smashing them.
- hacking is cool: 30 years later, what is cooler, hacking or marcus?
- amelius 11 months ago> "Let's go production with it now and we can secure it later" - no, you won't. A better question to ask yourself is "If we don't have time to do it correctly now, will we have time to do it over once it's broken?"
I guess the idea is generally that if we go in production now we will make profits, and with those profits we can scale and hire real security folks (which may or may not happen).
- mikewarot 11 months agoMy "security flavor of the month" is almost universally ignored... Capability Based Security/Multilevel Secure Computing. If it's not ignored, it's misunderstood.
It's NOT the UAC we all grew to hate with Windows 8, et al. It's NOT the horrible mode toggles present on our smartphones. It's NOT AppArmor.
I'm still hoping that Genode (or HURD) makes something I can run as my daily driver before I die.
- zzo38computer 11 months agoI also think that capability based security is a good idea, and that proxy capabilities should also be possible. (This would include all I/O, including measuring time.)
But, how the UI is working with capability based security, is a separate issue (although I have some ideas).
(Furthermore, I also think that capability based security with proxy capabilities can solve some other problems as well (if the system is designed well), including some that are not directly related to security features. It can be used if you want to use programs to do some things that it was not directly designed to do; e.g. if a program is designed to receive audio directly from a microphone, you can instead add special effects in between by using other programs before a program receives the audio data, or use a file on the computer instead (which can be useful in case you do not have a microphone), etc. It can also be used for testing; e.g. to test that a program works correctly on February 29 even if the current date is not February 29, or if the program does have such a bug, to bypass it by telling that program (and only that program) that the date is not February 29; and you can make fault simulations, etc.)
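The leap-day testing example above can be sketched with a proxied clock capability; this is a minimal Python illustration of the idea, not any particular capability system:

```python
from datetime import date

# A "capability" here is just an object the program must be handed in
# order to read the clock -- it cannot reach the real clock on its own.
class SystemClock:
    def today(self) -> date:
        return date.today()

# A proxy capability stands in for a real one and can reinterpret it,
# e.g. to test leap-day handling without waiting for Feb 29.
class FixedClock:
    def __init__(self, fixed: date):
        self._fixed = fixed
    def today(self) -> date:
        return self._fixed

def greeting(clock) -> str:
    d = clock.today()
    return "leap day!" if (d.month, d.day) == (2, 29) else "ordinary day"

assert greeting(FixedClock(date(2024, 2, 29))) == "leap day!"
```

Only the program handed the `FixedClock` sees the altered date; every other program's view of time is untouched, which is the property the comment is describing.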
- cratermoon 11 months agoThe real dumbest idea in computer security is "we'll add security later, after the basic functionality is complete"
- jrm4 11 months agoMissed the most important.
"You can have meaningful security without skin-in-the-game."
This is literally the beginning and end of the problem.
- lsb 11 months agoDefault permit, enumerating badness, penetrate and patch, hacking is cool, educating users, action is better than inaction
- Etheryte 11 months agoI think commenting short summaries like this is not beneficial on HN. It destroys all nuance, squashes depth out of the discussion, and invites people to comment solely on the subtitles of a full article. That's not the kind of discussion I would like HN to degrade into — if I wanted that, I'd go to Buzzfeed. Instead I hope everyone takes the time to read the article, or at least some of the article, before commenting. Short tldrs don't facilitate that.
- smokel 11 months agoAs much as I agree with this, I must admit that it did trigger me to read the actual article :)
I assume that in a not-so-distant future, we get AI powered summaries of the target page for free, similar to how Wikipedia shows a preview of the target page when hovering over a link.
- chuckadams 11 months agoOn one hand I agree; on the other, the article itself is pretty much a bunch of grumpy and insubstantial hot takes …
- arcbyte 11 months agoWhile i agree with all the potential downsides you mentioned, i still lean heavily on the side of short summaries being extremely helpful.
This was an interesting title, but having seen the summary and discussion, I'm not particularly keen to read it. In fact I would never have commented on this post except to support yours.
- omoikane 11 months agoLooks like lsb was the one who submitted the article, and this comment appears to be submitted at the same time (based on same timestamp and consecutive id in the URL), possibly to encourage people to read the article in case the title sounded like clickbait.
- kazinator 11 months agoPenetrate and Patch is a useful exercise, because it lets the IT security team deliver some result and show they have value, in times when nothing bad is happening and everyone forgets they exist.
- jibe 11 months ago"We're Not a Target" deserves promotion to major.
- kuharich 11 months agoPast comments: https://news.ycombinator.com/item?id=28068725
- Jean-Papoulos 11 months ago>As a younger generation of workers moves into the workforce, they will come pre-installed with a healthy skepticism about phishing and social engineering.
hahahahahaha
- al2o3cr 11 months ago> My guess is that this will extend to knowing not to open weird attachments from strangers.
I've got bad news for ya, 2005... :P
- jojobas 11 months ago>My prediction is that the "Hacking is Cool" dumb idea will be a dead idea in the next 10 years.
19 years later, hacking is still cool.
- janalsncm 11 months ago> hacking is cool
Hacking will always be cool now that there’s an entire aesthetic around it. The Matrix, Mr. Robot, even the Social Network.
- rkagerer 11 months ago#7 Authentication via SMS
- uconnectlol 11 months ago> Please wait while your request is being verified...
Speaking of snakeoil
- michaelmrose 11 months ago> Educating Users
This actually DOES work, it just doesn't make you immune to trouble any more than a flu vaccine means nobody gets the flu. If you drill shit into people's heads and coach people who make mistakes, you can decrease the number of people who do dumb company-destroying behaviors by specifically enumerating the exact things they shouldn't do.
It just can't be your sole line of defense. For instance if Fred gives his creds out for a candy bar and 2FA keeps those creds from working and you educate and or fire Fred not only did your second line of defense succeed your first one is now stronger without Fred.
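The reason the second line of defense holds is that a one-time code expires with its time step, so Fred's phished code is worthless minutes later. A minimal RFC 6238-style TOTP sketch in Python:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, at: float, step: int = 30, digits: int = 6) -> str:
    # RFC 6238: HMAC-SHA1 over the big-endian time-step counter,
    # then RFC 4226 dynamic truncation to a short decimal code.
    counter = int(at // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

secret = b"12345678901234567890"  # RFC test-vector secret
# The code changes every time step, so a stolen one goes stale:
assert totp(secret, 59) != totp(secret, 61)
```

Verifiers typically accept a window of one adjacent step for clock skew, which is why a code stays valid for about a minute rather than exactly 30 seconds.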
- tracerbulletx 11 months agoHacking is cool.
- amelius 11 months agoAlso needs mention:
- Having an OS that treats users as "suspicious" but applications as "safe".
(I.e., all Unix-like systems)
- vrighter 11 months agoxdg-desktop-portal was created to allow applications running in a sandbox to access system resources. Nowadays, more and more, regular applications have to pass through this piece of crap to do their usual work - a mechanism which by design was never intended for them, but for flatpakked applications.
Oh the joy of going through the bluetooth pairing process for my controller, or physically getting up and connecting a physical wire to it, only for the system to wait until I'm in the game and touch the controller and immediately the game hangs because a pop up appears asking me if i want to give permission to my damn controller to control stuff. Or having to manually reposition my windows every single time, because a window knowing where it is is somehow "insecure"
I'm the one putting the software on there and deciding what to run. If I run it, then it's because I wanted the application to do what it does. If someone else is in the position of running software on my machine, they're already on the other side of the airtight hatchway. They can already give themselves the permissions they need. They can just click yes on any pop up that appears. Yes, the applications should be considered safe. Because the OS cannot possibly make any informed assumptions about what's legitimate and what's malicious.
To me it feels like I can't do certain stuff on my PC because someone else might misuse something on theirs. How is that my problem?
- zzo38computer 11 months ago> xdg-desktop-portal was created to allow applications running in a sandbox to access system resources.
There are many problems with it; I do not use it on my computer. A better sandbox system would be possible, but xdg-desktop-portal is not designed very well.
> Oh the joy of going through the bluetooth pairing process for my controller, or physically getting up and connecting a physical wire to it, only for the system to wait until I'm in the game and touch the controller and immediately the game hangs
That is also a problem of a bad design. If a permission is required, it should be possible to set up the permissions ahead of time (and to configure it to automatically grant permission if you do not want to restrict it; possibly could even be the default setting), instead of waiting for that to ask you and to hang like that.
> Or having to manually reposition my windows every single time, because a window knowing where it is is somehow "insecure"
I would think that the window manager should know where the windows are and automatically position them if you have configured it to remember where they are. (The windows themselves should not usually need to know where they are, since the window manager would handle it instead, and the applications should not need to know what window manager is in use, since different window managers will work in different ways and if the application program assumes it knows how it works then that can be a problem.)
> I'm the one putting the software on there and deciding what to run. If I run it, then it's because I wanted the application to do what it does.
Yes, although sometimes you do not want it to do what it does, which is why it should be possible to configure the security, preferably with proxy capabilities.
> Because the OS cannot possibly make any informed assumptions about what's legitimate and what's malicious.
I agree, although that is why it must be possible for the operator to specify such things. I think that proxy capabilities would be the way to do it (which, in addition to improving security, also allows more control over the interaction between the programs and other parts of the system).
> To me it feels like I can't do certain stuff on my PC because someone else might misuse something on theirs.
Yes, it seems like that, because of the bad design of some programs, protocols, etc.
- grahar64 11 months ago"Educating users ... If it worked, it would have worked by now"
- ang_cire 11 months agoIt does work, but user training is something that- whether for security or otherwise- is a continuous process. New hires. Training on new technologies. New operating models. etc etc etc...
IT is not static; there is no such thing as a problem that the entire field solves at once, and is forever afterward gone.
When you're training an athlete, you teach them about fitness and diet, which underpins their other skills. And you keep teaching and updating that training, even though "being fit" is ancillary to their actual job (i.e. playing football, gymnastics, etc). Pro athletes have full-time trainers, even though a layman might think, "well haven't they learned how to keep themselves fit by now?"
- chha 11 months ago> If you're a security practitioner, teaching yourself how to hack is also part of the "Hacking is Cool" dumb idea. Think about it for a couple of minutes: teaching yourself a bunch of exploits and how to use them means you're investing your time in learning a bunch of tools and techniques that are going to go stale as soon as everyone has patched that particular hole.
If only this was true... Injection has been on Owasp Top 10 since its inception, and is unlikely to go away anytime soon. Learning some techniques can be useful just to do quick assessments of basic attack vectors, and to really understand how you can protect yourself.
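A minimal sqlite3 sketch of why injection hasn't gone stale, and the equally old fix (table and data are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

# The decades-old bug: interpolating user input into SQL.
evil = "' OR '1'='1"
rows = conn.execute(
    f"SELECT * FROM users WHERE name = '{evil}'"
).fetchall()
assert rows  # the injected tautology matches every row

# The fix, unchanged since the 90s: parameterized queries.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (evil,)
).fetchall()
assert rows == []  # treated as a literal string, no match
```

The technique never went stale because the vulnerable pattern keeps being rewritten into new codebases; patching one hole does not retire the class.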
- umanghere 11 months ago> 4) Hacking is Cool
Pardon my French, but this is the dumbest thing I have read all week. You simply cannot work on defensive techniques without understanding offensive techniques - plainly put, good luck developing exploit mitigations without having ever written or understood an exploit yourself. That’s how you get a slew of mitigations and security strategy that have questionable, if not negative value.
- klabb3 11 months agoAgreed, my eyebrows were raised at this point in the article. If you want to build a good lock, you definitely want to consult the LockPickingLawyer. And it's not just a poor choice of title either:
> teaching yourself a bunch of exploits and how to use them means you're investing your time in learning a bunch of tools and techniques that are going to go stale as soon as everyone has patched that particular hole
Ah yes, I too remember when buffer overflows, XSS and SQL injection became stale when the world learned about them and they were removed from all code bases, never to be seen again.
> Remote computing freed criminals from the historic requirement of proximity to their crimes. Anonymity and freedom from personal victim confrontation increased the emotional ease of crime […] hacking is a social problem. It's not a technology problem, at all. "Timid people could become criminals."
Like any white collar crime then? Anyway, there's some truth in this, but the analysis is completely off. Remote hacking has lower risk, is easier to conceal, and you can mount many automated attacks in a short period of time. Also, feelings of guilt are often eased by the victim being an (often rich) organization. Nobody would glorify, justify or brag about deploying ransomware on some grandma. Those crimes happen, but you won't find them on tech blogs.
- blablabla123 11 months agoThat. Also, not educating users is a bad idea, but it becomes quite clear that the article was written in 2005, when the IT/security landscape was a much different one.
- crngefest 11 months agoI concur with his views on educating users.
It’s so much better to prevent them from doing unsafe things in the first place, education is a long and hard undertaking and I see little practical evidence that it works on the majority of people.
>But, but, but I really really need to do $unsafething
No in almost all cases you don’t - it’s just taking shortcuts and cutting corners that is the problem here
- blablabla123 11 months agoThe attacks with the biggest impact are usually social engineering attacks, though. It can be as simple as shoulder surfing or tailgating, or as advanced as an AI voice scam. These have been widely publicized since the early '90s by people like Kevin Mitnick.
- watwut 11 months agoYou do not have to be able to build an actual SQL injection yourself in order to have properly secured queries. Same with XSS injection. Having a rough idea about attacks is probably necessary, but beyond that you primarily need the discipline and correct frameworks that won't facilitate you shooting yourself in the foot.
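The difference between the concatenated and the parameterized query is easy to show with a minimal sketch, using Python's bundled sqlite3 module (the table and payload here are made up for illustration):

```python
import sqlite3

# Toy database: one user, purely illustrative
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

payload = "alice' OR '1'='1"  # classic injection string

# Vulnerable: string concatenation lets the payload rewrite the WHERE clause
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + payload + "'"
).fetchall()
print(len(rows))  # 1 - the OR '1'='1' matched every row

# Safe: a parameterized query treats the payload as a plain string value
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (payload,)
).fetchall()
print(len(rows))  # 0 - nobody is literally named "alice' OR '1'='1"
```

Which is the comment's point: you don't need to craft payloads for a living to know the second form is the one your framework should make the default.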
- TacticalCoder 11 months agoI don't think the argument is that dumb. For a start, there's a difference between white hat hackers and black hat hackers. And here he's talking specifically about people who pentest known exploits on broken systems.
Think about it this way: do you think Theo de Raadt (of OpenBSD and OpenSSH fame) spends his time trying to see if Acme corp is vulnerable to OpenSSH exploit x.y.z, which was patched 3 months ago?
I don't care about attacking systems: it is of very little interest to me. I've done it in the past: it's all too easy, because we live in a mediocre world full of insecure crap. However, I love spending some time making life harder for black hat hackers.
We know what creates exploits and yet people everywhere are going to repeat the same mistakes over and over again.
My favorite example is Bruce Schneier writing, when Unicode came out, that "Unicode is too complex to ever be secure". That is the mindset we need. But it didn't stop people from using Unicode in places where we should never have used it, like in domain names for example. Then when you pull off a homoglyph attack on an IDN, it's not "cool". It's lame. It's pathetic. Of course you can do homoglyph attacks and trick people: an actual security expert (not a pentester testing known exploits on broken configs) warned about that 30 years ago.
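For the record, the homoglyph problem needs no exploit tooling at all; a few lines of Python with the standard unicodedata module show it (the spoofed domain below is hypothetical):

```python
import unicodedata

legit = "apple.com"
spoof = "\u0430pple.com"  # U+0430, CYRILLIC SMALL LETTER A, renders like 'a'

# Visually identical in many fonts, but entirely different strings
print(legit == spoof)  # False

# A crude mixed-script check - illustrative only, not a real defense
def scripts(label: str) -> set:
    return {unicodedata.name(ch).split()[0] for ch in label if ch.isalpha()}

print(scripts(legit.split(".")[0]))  # {'LATIN'}
print(scripts(spoof.split(".")[0]))  # {'LATIN', 'CYRILLIC'} -> suspicious
```

Browsers today mitigate this with punycode display rules and script-mixing heuristics, which is exactly the kind of design-level fix being argued for here.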
There's nothing to "understand" by abusing such an exploit yourself besides "people who don't understand security have made stupid decisions".
OpenBSD and OpenSSH are among the most secure software ever written (even if OpenSSH had a few issues lately). I don't think Theo de Raadt spends his time pentesting so that he can then write secure software.
What strikes me the most is the mediocrity of most exploits. Exploits that, had the software been written with the mindset of the person who wrote TFA, would for the most part not have been possible.
He is spot on when he says that default permit and enumerate badness are dumb ideas. I think it's worth trying to understand what he means when he says "hacking is not cool".
- SoftTalker 11 months ago> My favorite example is Bruce Schneier writing, when Unicode came out, that "Unicode is too complex to ever be secure".
The same is true of containers, VMs, sandboxes, etc.
The idea that we all willingly run applications that continuously download and execute code from all over the internet is quite remarkable.