TL;DR: When someone suggests a specific countermeasure (e.g. a piece of software, a service, an encryption method, a new device), ask which threat model it applies to. Ask what the risk is for yourself and your specific situation. Let’s raise the level of conversation in the privacy community by not supporting blanket countermeasures and paranoia-derived decision making.

Assess your own threat model for your choices and your life. Learn the opsec thought process and apply it for yourself. Say no to the countermeasure-first fallacy.


Lots of bad advice

The vast majority of security- or privacy-related posts and comments on Reddit ask for help or advice on specific countermeasures, the way a cook might ask for help with a specific ingredient. That implies the cook understands the recipe. The problem is that most people don’t understand the recipe of security or privacy for themselves (i.e. their own opsec threat model) and end up asking the equivalent of “how to properly fry an egg” for a cake recipe because “eggs” were listed.

Efforts to educate the community to a proper opsec mindset are often met with resistance through these common pseudo-intellectual arguments:

“a proper threat model isn’t necessary because everyone can benefit from extreme protections from everything no matter their situation”

This is patently false, and easily demonstrated by trying to log into your bank account over Tor or a VPN. Once your account is locked for suspicious activity, you’ll need to question whether you should in fact “use Tor or a VPN all the time for everything”.

“we shouldn’t share any information with governments”

This is mostly false. Try having a driver’s license and not sharing any data to get it. Try being a citizen of a country and asking for help overseas without proving your citizenship. Try running a legitimate legal business and hiring an employee and paying them anonymously.

There is simply no getting around sharing information with governments as a part of life. We can limit and push back on the types of information we share, but the idea that all information is equal and should be private is an anti-opportunity position, often based on paranoia, that introduces complexity and inconvenience for no real benefit.

As for corporations, you can often limit the amount of information they collect through isolation, but even when you can't, the type of information they request and how they use it may be completely acceptable in some limited cases (troubleshooting software, fraud and cheating prevention, providing access credentials, etc.). It will always come down to a personal threat model whether the information shared is an acceptable risk or not.

“if it’s free, you’re the product and you shouldn’t use it”

This is mostly false. You can be the product even when you pay, and while many free (donation-, tax-, or confused-investor-funded) services exist, the takeaway is that all users of a product or system are a product in one way or another; what matters is whether the benefits outweigh the risks.


If you have nothing to lose…

When someone says “don’t use _____”, ask them what you stand to lose. This forces them to define the risk — a critical component to a threat model. In most cases, you’ll deduce from their response that they are assuming a threat model of a spy, a victim of state persecution, a terrorist, or other very high value target which makes their advice akin to "you should never drive a car because people can die from car accidents".

Let’s raise the level of conversation in the privacy community by not supporting blanket countermeasures and paranoia-derived decision making.

When someone suggests a specific countermeasure (e.g. ingredient), ask which threat model (e.g. recipe) it applies to. Ask what the risk is for yourself and your specific situation. If there isn’t any, you’re probably wasting your energy at best, or at worst adding additional potential liability and vulnerabilities to yourself.

Instead, assess your own threat model for your choices and your life. Learn the opsec thought process and apply it for yourself.

Let's say no to the countermeasure-first fallacy in the privacy community.

Comments (65)

Well, for me personally, increasing my privacy is not something I do solely for my own benefit, but rather because I don't want certain companies to make money off me. Especially when, for instance, newspapers make it really hard to opt out of something and use maliciously designed cookie banners to milk customers as much as possible, my motivation to take my time for it quickly goes up (or I use reading mode in Firefox, or Outline; if they have measures against this and also an unbearable cookie banner, then I just don't use the site).

I hate in particular how these companies act as close to the border of what is legal in terms of data collection as possible (or often just do it blatantly illegally, because the potential income is higher than the fine they'll get), and even though it makes my browsing experience slightly worse, it is a tradeoff I'm willing to take, just so that I'm not a product of any of these companies.

So basically, the more privately you surf around, the less of a product you become, and that is good if you want companies like Google or Facebook to be just a tiny bit poorer at the end of the day.

Furthermore, data very often gets leaked, so trying to minimize the number of pages where I have to log in is my goal too, as well as preventing any form of blackmail through the data collected about me.

Lastly, data is insanely valuable, especially for authorities and government actors. The Cambridge Analytica scandal has shown what kind of power you can get with the data of millions and how strongly you can influence public opinion with it. Again, this means that if you surf more privately and prevent tracking, you take away power from those companies and authorities.

In the end it is impossible to prevent any and all tracking. I do have a Reddit and a Google account, use online banking, etc. It is stuff you basically need these days, and a certain amount of comfort is something I desire too, but I feel like minimizing the amount of data collected about you, which doesn't really take too much effort, is something worth doing.

That’s a popular activist philosophy that I share to some degree. I will of course apply it to my preferences, but I start with a sound threat model.

[deleted]

That is arguably more complicated to do in infosec, I agree, compared to, say, physec. Protect your person today and you’ve succeeded. Protect your digital diary today and it doesn’t guarantee tomorrow.

I think this is more suited for r/opsec. I understand the importance of discussing your threat model, but I think r/privacy is something else. We discuss how corporations spy on us and get us to consent to data collection using dark patterns. We discuss that things should be privacy by default, since most of us never asked to have our data collected.

These actions by corporations are immoral and thus we discuss ways to prevent them from doing it and who does it.

Things like encrypted text chat are nice to have. Unencrypted text chat could allow humans or machines (AI) to read your messages, which isn't really a good thing, and with the accelerating development of AI, much more data could be derived about you.

This is also why there are so many discussions about encrypted chatting.

I also believe this is a place for discussion of where the privacy of all individuals is heading, not necessarily what the single individual's preference and threat model is.

Hi, moderator of r/opsec and r/privacy here. I think opsec threat modeling should be practiced by everyone who is interested in privacy, just like diet needs to be practiced when exercising.

It’s fine to say “Google is a bad company”, as that’s a personal opinion. It’s not okay to say “don’t use the Go programming language because it’s made by Google and therefore probably spies on you”. It’s also not okay to say “don’t use anything based on Chromium as it’s not private”, nor “Google sponsored that open source project, therefore it should be considered compromised”, as that’s not true and just mixes personal philosophical beliefs into privacy advice.

It’s also not okay to give blanket privacy advice when it can adversely affect people’s lives without understanding (or teaching them to understand) their own threat model. Telling someone to always use Tor, or worse, that “Tor is not illegal anywhere so it’s fine to use” (I had to submit a pull request to get Tor’s website to change that ridiculous claim), is not good advice. It’s careless, and you can usually tell the careless advice from the careful advice by which one considers the parties’ threat model.

I agree with you that r/privacy is where threat modeling and social philosophy come together to work out a better future, but that’s only possible when you understand both. Most posts here are about the philosophy; this one is about the threat modeling.

I think it is very reasonable not to trust Google

not to trust Google

.. to do/not do what?

This approach is deceptive. A few examples:

“we shouldn’t share any information with governments or corporations”

Governments and corporations are not the same thing at all; in fact, all your legitimate examples refer to governments. I have no problem having my data stored somewhere for health services or road regulation. I also vote for my government; I don't vote for a data company's board, nor do they provide me any essential service, or services where knowing detailed data about me is necessary. Most online services just need my ID, password, and eventually payment info. So putting "governments and corporations" in the same sentence is extremely deceptive.

“if its free, you’re the product and you shouldn’t use it”

If I pay, I create a different power dynamic with the business owner: I'm the customer, and the business owner will look into my needs (as a collective entity) and rely on my money to keep doing business (take Netflix). That's not true if "I'm the product"; in the latter case, I'm just part of a herd that must be maintained on the platform while money is made elsewhere. Then sure, Netflix could sell my data, but why on earth should they take the chance of being seen as "the evil guy", with the risk of pushing people towards Amazon Prime or Disney?

Your arguments are sketchy to say the least

Chromium != Chrome.

Chromium is open source. Chrome is closed source spyware.

Corporations are not governments, you’re right of course. You can limit sharing whatever data you like with them. You can also share whatever data you like with them. It’s your choice, and your threat model will only tell you what you can do and the consequences for it. You will be the one making the decisions based on that.

Removed that part since I read Chrome instead of Chromium; the other arguments stand.

I like your comment and I've modified the OP to reflect it. I suspect the changes won't change your opinion, but it helps me to be clearer so thank you for that.

[removed]

I personally feel like we (all internet users) need to challenge the notion that remaining private or anonymous via VPN or Tor automatically means you are a bad actor doing something malicious.

When I wear my sysadmin hat, I completely agree that Tor and VPN users are usually "the bad guys". When I put on my "Tor/VPN user" hat, I am outraged by being blocked or judged. Perspective is reality.

If you want the world to stop thinking "anonymity = bad" then you need to develop nice things that anonymous people cannot ruin I suppose. Reputation systems can work, even when they are pseudonymous. I like ZKAPS by LeastAuthority (based on PrivacyPass) and "sharetokens" by Wireleap for that reason. They work to disassociate the user from the activity really well.

Expecting governments to do anything when they are filled with corrupt men and women who know nothing about technology is not viable. Therefore, the only weapon we have is total non-compliance.

Or join the government and change it perhaps? This is why I write open source software and help build new technologies, to be part of the change. "non-compliance" sounds edgy, but it doesn't move anyone forward.

Advocating for people to give up their digital privacy (whatever their threat model may be) because some institution (e.g. a bank) wants it leaves me with the instinct that you work for some three-letter agency. The fact that you know so much about opsec (which is taught from day one at those agencies) also adds to my suspicion.

Your instincts are ignorant and your accusations paranoid. I'm not advocating for people to give up their digital privacy anywhere. I'm advocating for people to understand rational threat modeling. The decision of what to do is always up to people. As for knowing opsec, I am the educational program director at OSPA (the OPSEC Professionals Association, a non-profit), the group behind Operation: Safe Escape. I'm also the developer of CanaryTail, the machine-and-human-readable warrant canary generator and tracker.

For someone who is anti-government and pro-privacy, you sure are quick to ascribe guilt to someone based on their metadata.

Not the user you're responding to, but I like the conversation here, so hopefully it's okay to participate.

When I wear my sysadmin hat, I completely agree that Tor and VPN users are usually "the bad guys". When I put on my "Tor/VPN user" hat, I am outraged by being blocked or judged. Perspective is reality.

If you want the world to stop thinking "anonymity = bad" then you need to develop nice things that anonymous people cannot ruin I suppose. Reputation systems can work, even when they are pseudonymous. I like ZKAPS by LeastAuthority (based on PrivacyPass) and "sharetokens" by Wireleap for that reason. They work to disassociate the user from the activity really well.

Context is key, and I don't think this is as simple as whether it's possible to abuse a system or not. For example, think of the current Covid-19 situation. Wearing a mask that covers your face used to be illegal in many places, and generally, anyone who did that was assumed to be a criminal who wanted to avoid identification by the police.

Yet now that wearing masks is not only common but even mandatory in some places, they're just seen as normal and even the right thing to do, and the fact that they could be abused is seen as a lesser evil than the benefits they're thought to provide.

While a global pandemic is definitely a net negative and not really worth this minor boost in privacy, I'm wondering if some kind of catalyst would be required for digital privacy to become more normalized the same way. Yet... having seen what kind of horrible things have already occurred (Cambridge Analytica, Vastaamo blackmailing case, totalitarian governments, etc), I also really really don't like the idea of things getting any worse before they get better.

I guess that to me, the threat of societal problems enabled by things like mass surveillance is far worse than the threat of potentially looking suspicious enough to have some government agent crack my communications or something, because I'm not really hiding from anyone as much as I'm just hoping to make it harder for anyone to keep track of everyone all the time in general, due to how easy it is to abuse that kind of power. Is that a valid concern?

How can a person define their threat model when they don't know what the threats are?

Example: some communities are targeted with certain ads while others are not shown them (like Facebook showing fewer job ads to some communities). Such threats are not known at all, or not known to a person in advance, unless research or whistleblowing reveals them.

And by not being able to define that threat in advance, they will then not take care of that aspect of their privacy, and will be vulnerable to such threats.

How can a person define their threat model when they don't know what the threats are?

Did you visit the link in the OP? opsec101.org explains it in an arguably approachable, obvious way. If it fails to do so for you, let me know.

some communities are targeted with some ads and others are not shown some ads

What's the perceived issue with that exactly? How is that violating their privacy? What are they risking? Why is that a threat?

My whole comment was about only one thing: unknown threats. By mentioning 'some communities being targeted with some ads and others not being shown some ads (job ads)', I didn't mean a violation of the user's privacy, but a threat the user couldn't have defined, namely 'the risk of losing some job opportunities' that they otherwise wouldn't lose if they kept their community private.

I can understand that we can go for convenience before privacy. But what I dislike about this defining-threat-model approach is that we can't foresee all threats (like targeted misinformation, targeted reduced opportunities, or other unknown threats).

There’s a slight issue with your argument. You are rightfully pointing out that not knowing all possible threats makes threat modeling more difficult in some situations (not all, as I don’t need to worry about something that has no chance of happening), but I gather you’re then saying the right approach would be to apply a countermeasure against all possible potential and even unrealistic threats at least to the point of inconvenience or permitted expense? That itself is a threat model, just based on unknowns.

It’s called the “Opsec thought process” for a reason — the outcome isn’t the focus. The process is. Understanding and practicing the process will deliver the best possible outcomes regardless of what you eventually decide as a countermeasure.

Right now I refuse to travel because of how poorly the world is handling COVID-19. Truthfully I’d probably be fine, and based on my threat model it’s a completely acceptable risk, but there are connected unknowns (economic, travel, etc.) that complicate it to the point that I feel the strongest countermeasure of simply barring my own travel is acceptable, even if in reality the risk might not be there.

If it suddenly became absolutely critical to travel though, I’d already understand the actual risks having practiced the opsec threat model and understanding the data, and could easily plan appropriate countermeasures.

My stance is not a rigid one that forces adopting countermeasures based only on available data, it’s one of understanding how to process it to make the best decision so that you’re not making those decisions based solely on something besides data — chiefly, paranoia.

It's not just you. It's all of your relationships. What if your friends desire privacy, and talking to you alone defeats all of their purpose? In fact, reading the metadata of your friends reveals everything about you, and vice versa.

What if your friends desire privacy

They need to ask themselves what their threat model is.

reading the metadata of your friends reveals everything about you

You need to ask yourself what your threat model is.

It's not possible to answer without knowing that. For my own threat model, your scenario would not affect me negatively. I use Signal, Telegram, Whatsapp, LINE, Skype, Linkedin, Hangouts, etc. and all of them in their own isolated way.

In order to answer for your specific case, I'd need to know your threat model. Feel free to read opsec101.org for how to define it.

[deleted]

Not today, Sherlock. Revealing my threat model to a random Redditor just defeats the purpose of it. Good luck with your opsec101 and OSPA thing anyway.

Discussing your threat model does not defeat the purpose of it.

You can discuss a threat model without providing sensitive information. For example, if you tell me "You can't use Whatsapp because they can read your messages", I know it doesn't matter to me because my threat model doesn't include a risk from having someone read those messages, nor does it allow the application itself access to any sensitive information or other applications.

This is moot, however, as you don't need to discuss your threat model with me; you just have to understand it for yourself when discussing with others, and teach others to understand their own before suggesting countermeasures to them.

This is a fine opinion piece. But it is not so good to say 'countermeasures come last', as it may be interpreted as "one has to be fully finished with a perfectly thorough threat analysis before starting with any countermeasures".

It is fine to start out with: "I'm a bit worried, let me just stop browsing now, change my password, and stop giving out more data". You can then steadily conduct more analysis, while also adding/removing countermeasures. And while neither the amount and intensity of countermeasures nor the analysis is ever really complete - you may have gone in the right direction.

If you're worried about getting attacked - and you lock all your doors, you can remove some of the locks after you've done more analysis. If you wait to do all your threat analysis first, and fail to even lock the outside door while you analyse - that is the worst possible outcome.

In a time sensitive situation I completely agree with disconnecting liabilities (turning off computer, completely closing firewall, etc) and being functionally inactive. Once you’re ready to be active though, you need a plan.

I'm a grown man who's been through a ton of shit, but when I hear the "I have nothing to hide" line, it makes me want to cry.

Idiots, extroverts and narcissists rule the world, fucking it up for everyone else.

I love this post and I feel like it should be pinned on my collection of saved posts

edit: This is coming from a mod of r/privacy as well, that's a surprise.

Usually I assume that the consensus of an internet group somewhat represents the team behind managing it (why? no reason, I guess). The consensus to me has always been "no one, not even the people in charge, will listen to you when you say that data still has some value or that some of these arguments are invalid; your comment might even get deleted". I always avoid saying something like this. (edit: like the post)

edit2: I guess that if I was looking for a way to improve this post (if I wrote it myself), the flaw I see is that there's no example or acknowledgement of legitimate reasons to protect your privacy (as a short side note at least). Maybe if there was one, people would not think this was written by an FBI agent or something like that. I personally interpreted this post as "be smart about what your beliefs are when it comes to privacy and what your threat model is"; others probably interpreted it as "you are all paranoid and all your arguments are stupid".

But that's just me. I know I have the tendency of giving bad and unnecessary advice.

Edit3: The edits can be confusing, I guess. I just want to clarify that I don't disagree with the post at all. Edit2 is basically writing advice, not a counterargument or something like that. Edit1 is expressing surprise at who wrote this and explaining why (how it doesn't fit my perception of Reddit).

Remember, only a Sith deals in absolutes.

How is conducting an accurate, honest threat model for yourself before deciding which risks you want to address, to protect which assets, and what amount of resources you want to invest to do this, irrational or bad advice?

The linked article clearly says it’s the first of a series, and is solely about what operational security (threat modeling) is. It promises to explore more advanced situations in the follow-up, presumably with specific examples.

For folks considering skipping this important initial step who want to dive into the really kewl Jason Bourne cosplay first, know that you’re in opposition to what the Electronic Frontier Foundation and every other knowledgeable group in the OpSec/NetSec space recommends.

I am confused by your comment

irrational or bad advice?

I didn't say making a threat model is irrational or bad advice, or I didn't mean to say that at least

The linked article

I didn't go to the link, I don't want to read an article right now and it didn't feel necessary

You should check it out later. It’s pretty good. :)

Alright, I will eventually, maybe in a few hours. It does sound good because I never followed a standard for threat modelling and it seems important for my career and studies

Edit: Oh, the link OP cited was a page written by OP, and it's basically a longer version of this post.

I mostly agree but:

How does this

You can be the product even when you pay

contradict this?

if its free, you’re the product and you shouldn’t use it

That phrase's meaning is basically "don't let corporations data-mine you", not "only use paid services because they are safe"

It’s just an example in context of how people use it, which is typically “you can only trust that which you pay for”, which is a fallacy. You can’t trust anything, but you have to trust something, which is why you assess its usage through your threat model.

People do in fact argue “only use paid services because they are not data mining you”. There are even people who argue that VPNs outside of the 14 eyes are trustworthy because of that. Lots of false equivalencies that shouldn’t be supported, but the only way to properly dismantle those poor arguments is by addressing the risk and applying opsec through a threat model.

By your logic, Linux is data mining people. Linux is free, but you're not the product. It doesn't data mine you either.

Reddit is data mining us right now as we type too. The question is “what do I risk?”. Only my threat model knows.

Are you referring to me or OP?

to you

I am not sure where this is coming from; where in my logic does it look like I think Linux is data mining people?

I thought you supported the phrase "if its free, you’re the product and you shouldn’t use it", so I brought up the Linux example

If you don't think that way then I misunderstood you

I don't think that way; I just provided a better phrasing for the same concept, which is "don't let corporations datamine you"

It removes your argument (Linux doesn't datamine) and OP's argument (if you pay you might still be the product)

Also, people do say “but open source software is free”.

Christ, people need to learn the terms “free as in beer” and “free as in speech”.

the prob with Reddit is we pretend like we're all here for the same reason

if I was a govvy I would be on here too!!!

Great post!!!

[deleted]

I disagree with accepting sinister personal data exploitation because it's just how things are and that it would make your life more difficult to reject them.

So do I. I'm glad we're in agreement. Who are we disagreeing with?

I see this threat model argument come up on the privacy subs all the time. Always using the far-fetched analogy of not being a spy or enemy of the state to dismiss wanting to eliminate as much spying and exploitation as possible.

Paranoia has no place in opsec. You can be paranoid with your decisions after developing your realistic, practical, rational threat model. If your threat model presents you with 10 practical options for countermeasures and you ignore those options and instead go for overkill, avoid a publisher or platform completely for political or philosophical reasons, etc, that's completely fine so long as it's not confused with being part of the threat model.

When someone asks for advice on how to stay private because they're afraid that Google might find out their location, you have to ask first -- "what would happen if they found out your location?", not only because it's the logical question to ask, but also because they probably already have it, and that person is living a delusion of a false sense of security.

Why should regular joes be okay with having no control over their personal information and habits?

Regular joes should always have agency over their data and the ultimate decision of whether to share it or not. Opsec helps you understand what you stand to lose (or not) from it, and the ultimate decision is always yours. There is software I refuse to use because, by merely existing, it pushes the internet into a place I refuse to support. I will not misunderstand or misrepresent it as being "bad for privacy", though; I just won't support it, and if a user of said software asks me for help being "private" while using it, I'll help in useful ways, not just tell them to "stop using it".

As for corporations, you can often limit the amount of information they collect through isolation in most cases,

Except when you opt out from Facebook and they track you anyway.

You can be the product even when you pay,

True. Though usually the company you bought the product from is using the data internally. The problem arises when you later find out the company was funded by the CIA, or when the company is simply bought by another company that doesn't care about your privacy.

This reads a bit self-centered, if well-meaning.

Often, protecting your privacy goes hand-in-hand with supporting your fellow human beings and a civil society.

Example: not using Facebook. I do not use Facebook to protect not only my personal privacy and that of my family, but even more importantly, democracy. Facebook has been repeatedly proven to be a convenient and powerful tool of deception and coercion for powerful corporations and those who wish to destroy democracy.

I think the same applies to all large, mostly U.S. tech corporations.

Like others, I certainly do not want to enrich these corporations and their shareholders. I consider this an important moral and ethical stance.

In a totalitarian state there is no privacy or free agency. I am not going to help the surveillance capitalists speed us along to that destination.

[deleted]

I think you're missing his point(s). He's not saying we should not care about privacy; he's essentially saying "don't be over-paranoid about everything", emphasis on the everything. Using your credentials to log in to a government site isn't the end of your privacy. If your workplace uses Windows as their OS, it doesn't mean you should refuse to work there, etc.

More precisely, I'm saying don't be paranoid period when threat modeling. Paranoia has no place in opsec. You can be paranoid with your decisions after developing your realistic, practical, rational threat model. If your threat model presents you with 10 practical options for countermeasures and you ignore those options and instead go for overkill, avoid a publisher or platform completely for political or philosophical reasons, etc, that's completely fine so long as it's not confused with being supported by the threat model.

I always wear a seatbelt in the car when driving, but I never wear one when sitting parked in a parking lot. I could if I felt extra worried for some reason, but I don’t personally see the need for it. If someone else said they’re more comfortable wearing it always, I wouldn’t try to convince them otherwise, but I would wonder how they walked outside their house that day with the chance of an asteroid landing on them.

Your post is an incomplete thought followed by a strange accusation. What are you responding to?

What you wrote sounds like something the folks in Congress who are trying to outlaw encryption would write. A "glowie" is an FBI/CIA agent or similar. They don't like security measures very much and would be very happy to see us all stop. I'm calling your post astroturfing.

Additionally, your "countermeasure-first fallacy" link explicitly recommends security by obscurity (the part just around the brachistochrone-style graph of security vs. convenience), which is horrendous advice. If someone wants to ratchet it up all the way, that's their choice, and a valid one at that. It is not a fallacy to put countermeasures in place; calling it a fallacy is pure lunacy.

"Just leave port 22 open with root login enabled so attackers won't suspect you of hiding anything valuable" is about the level of argument each of your points makes. I concede your point about being the product (often even when paying, you still are), yet you conveniently neglect to mention the obvious counter-argument: Free, Libre, Open-Source Software (FLOSS). One of the only times "communism" actually worked, and it continues to do so. It is free and nobody is the product; that's the entire point.

I work in InfoSec. The accepted method of protection is to put as many layers in place as possible without completely halting operations, and slowly change the workplace culture to allow for more of them. It's basically law that we do so (between GDPR, PCI compliance, CCPA, HIPAA, and so on), and many InfoSec principles apply to privacy as well (though it's important to note they are not the same thing; some InfoSec methods are at odds with privacy protection).

I think a lot of this has been misinterpreted because it was designed for a consumer audience while you're reading it as a developer. Most of our members in r/privacy are consumers looking for answers to perceived problems, and much of the time the problem either doesn't exist or the blanket-advice responses leave them believing in a silver-bullet solution instead of educating them to think for themselves.

They don't like security measures very much and would be very happy to see us all stop

I agree. Many of them would love to completely limit our options for countermeasures. How does that apply to opsec or this post, though? I'm not sure how you got "OPSEC and this post are all about not using countermeasures" when both are actually about using the correct countermeasures by educating ourselves on what the risks really are.

I'm calling your post astroturfing.

Insulting and patently false, as nowhere do I say we should stop discussing, educating about, or using countermeasures, only that countermeasures should match the risk assessment. I'm not going to encrypt this public reddit post to you "for the sake of privacy" because the threat model assumes it's fine to discuss publicly, or I wouldn't be doing it. If you actually work in infosec, you're very familiar with this concept.

link explicitly recommends security by obscurity

While some situations can call for obscurity, security by obscurity (especially for software and systems) is something I'm against. Let me know which section or citation you're referring to, as it's probably just a misunderstanding.

If someone wants to ratchet it up all the way, that's their choice and a valid one at that.

No, it's not "valid". Ratcheting up security at great personal cost and sacrifice for a threat that doesn't exist and a risk that is essentially nothing is "pure lunacy". Do you keep a single $1 bill in a $500 safe by itself?
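The safe analogy maps onto the standard annualized-loss calculation from risk management (ALE = SLE × ARO): a countermeasure is only justified when its cost is below the loss it is expected to prevent. A minimal sketch; the dollar figures and the second scenario are illustrative, not from the thread:

```python
def annualized_loss(asset_value, incidents_per_year):
    """Annualized Loss Expectancy: single-loss value times annual rate of occurrence."""
    return asset_value * incidents_per_year

def countermeasure_justified(asset_value, incidents_per_year, annual_cost):
    """A countermeasure pays off only if it costs less than the loss it prevents."""
    return annual_cost < annualized_loss(asset_value, incidents_per_year)

# A $500 safe guarding a single $1 bill: never justified.
print(countermeasure_justified(asset_value=1, incidents_per_year=1, annual_cost=500))
# The same safe guarding $10,000 in documents at a 10% annual theft risk: justified.
print(countermeasure_justified(asset_value=10_000, incidents_per_year=0.1, annual_cost=500))
```

The point isn't the exact arithmetic; it's that the countermeasure decision falls out of the asset value and the threat likelihood, not the other way around.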

It is not a fallacy to put countermeasures in place

As countermeasures are a critical component of opsec, I challenge you to find any reference to not putting countermeasures in place. Quite the opposite: everything in this thread and the page you mention is educating on how to decide which countermeasures make the most sense. There are of course nuances across infosec, appsec, netsec, and physec that aren't completely interchangeable, but the same concept applies: if there is no risk, there is little need for a countermeasure. Your completely isolated island hut made from straw doesn't really need a door lock at all, because the people who could get to you won't be stopped by a lock either way. That doesn't mean you can't have a lock, but when you recommend someone put a lock on their door, you need to understand which door they're referring to first.

"Just leave port 22 open with root login enabled so you don't have to worry about attackers suspecting you of hiding anything valuable"

Ludicrous. Having no root password on a network-blocked VM used for constant testing, however, makes perfect sense. No threat, no risk, no countermeasure.

It is free and nobody is the product, as that's the entire point.

Tor is a product; the project takes in substantial donations to fund a workforce. They recommend everyone support it by running all their traffic through it to further obfuscate the traffic of others, making you the product too. Being open source doesn't solve this, but I imagine my specific statement that "all users of a product or system are a product in one way or another" wasn't taken as abstractly as I had intended.

Infosec

Writing software and setting up systems is a bit different from being a consumer adopting security solutions, as you can probably agree. When my open source company writes software, we do not ask "can we get away with this line of code being somewhat insecure?"; we ask "how can we make it as clean and secure as possible?", because, as you rightly note, you cannot possibly know future threats or vulnerabilities, so you may take some extremes. I recall an article by Adam Shostack on this particular topic, in which (paraphrasing poorly) he argues that threat modeling for software development cannot be "adversary oriented". I agree.

When it comes to adopting consumer software and services, the user is not really in control at all. They did not design it; they are just trusting that it "solves their problem". Oftentimes the problems it solves will never actually happen to them, and simply using the service or software complicates their lives at best and introduces additional vulnerabilities at worst.

An extreme example of this is forcing your grandma to use GPG in order to send you a Christmas greeting email. What exactly is the point? All it does is complicate the process for no benefit.

Your identity, the trust people have in you, your bank details, health information that can be used against you, LGBT status in backwards locations (especially those that physically harm or kill those folks), and so on.

These should all be taken into consideration when understanding your own threat model.

I hope this helps you understand the nature of the message better as you completely mischaracterized it as “set[ting] out to do […] harm” and astroturfing.

You don't happen to glow in the dark do you?

I’m stepping in with my Mod hat on to ask: what are you thinking when accusing a fellow r/Privacy subscriber of being a government plant who is part of a conspiracy to mislead anyone reading this? And, undoubtedly, to steal your precious bodily fluids.

Do you get out much? Have you traveled far? To another country? Lived in another country? What kind of social circles do you have where you’d think this is something thinking adults possessing adequate levels of mental health do?

My impression is that you’re spending way too much time online, in weird conspiracy-embracing, fetid chat rooms where irrational and unsocialized people gather together. And you’ve sadly fooled yourself into thinking your group represents a normal baseline.

This is not normal. It’s not normal to accuse strangers of things like this with no foundation. We’re a community here, and if you can’t behave like it, then unsubscribe. Then seek help.

If you do this again here, you’ll be banned, Rule #5. Final warning.

I’m stepping in with my Mod hat on to ask: what are you thinking when accusing a fellow r/Privacy subscriber of being a government plant who is part of a conspiracy to mislead anyone reading this? And, undoubtedly, to steal your precious bodily fluids.

My impression is that you’re spending way too much time online, in weird conspiracy-embracing, fetid chat rooms where irrational and unsocialized people gather together. And you’ve sadly fooled yourself into thinking your group represents a normal baseline.

Some mocking reply in a time of governments curbing civil rights and detaining innocent people. There is legitimate concern that (constitutional) rights we take for granted now may be gone within the next year.

I repeat:

It’s not normal to accuse strangers of things like this with no foundation.

It's beyond the pale to suggest this, for no reason other than someone disagreeing with an opinion expressed in good faith: that part of developing a good operational security outlook is to first conduct an honest threat model for yourself.

For operational reasons, it's a wise choice too. Cranking your privacy shields to "11" is exhausting, and exhausted people make mistakes, leaving them more vulnerable than if they had used a model appropriate to themselves and the given situation.

I'll take this soapbox opportunity to hype, as we always do, the EFF's Surveillance Self-Defense how-to. For those who haven't checked it out yet, please do! :)

That the source of the "Glowie" meme is a risible Far Right conspiracy board is icing on the cake:

Glownigger, or Glowie, is a slang term popular on 4chan's /pol/ board used in reference to CIA or other governmental agents posing as ordinary 4chan users in order to bait others into sharing incriminating information. It is also used to call out potential bait posts made by agents in an insulting way. The term originates from schizophrenic computer programmer Terry Davis, who claimed in 2017 that "CIA niggers glow in the dark" in a video. Glowposting is also used to refer to agents who post bait content.

So, uhh, yeah. Blatant violations of our sidebar rules #5 and #12. Inimical to fostering good-faith, productive debate, a cornerstone of r/Privacy's existence.

Kids, don't rely on schizophrenic computer programmers as part of your daily media diet. You'll live happier, more productive lives filled with lots more real friends who will genuinely enrich your life!

There is an entirely different framework for approaching privacy, effectively a boycott of surveillance capitalism, that is every bit as legitimate as the security framework you've proposed of eliminating attack vectors from nefarious actors.

While I agree with you that blanket privacy countermeasures aren’t productive, it’s a great way for non-technical people to experiment for themselves with what works and what doesn’t.

Furthermore, it’s a hell of a lot easier for a lot of people to take blanket countermeasures and prune as they learn than it is for them to fit each privacy decision into a more sophisticated security-based threat analysis framework.

We want people to start to care. Not to overwhelm them.

I'm giving you a gold award for this. You're 100% right. Privacy is a personal journey toward your own standards. I love this sub and have learned so much, but I hate the tin-foil crew saying the only way is to install Linux and live in Tor. I hope to see more posts like these, as some of us tend to forget about the relationship we have with privacy.

It's not debunked, in the sense that scared sheeple will keep on using it for years, as things get worse & worse, to keep the wool over their eyes. So it's bullshit, but prepare to hear it over & over.

Exactly, but how do I build my threat model? I mean, I don't know how many ways they can attack me.

You don't always need to know how you can be attacked or even who they are in order to know what you can't afford to risk.

I don't always know how my passport will be stolen, I just know I can't afford to have it stolen while traveling (for example). For the times I can guess, I will deploy the appropriate countermeasures. For the times I can't, I will either take a risk or take the overkill route depending on the situation.
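The reasoning above, score what you can't afford to lose rather than enumerating every possible attack, can be sketched as a simple likelihood × impact calculation. The asset names, numbers, and threshold below are illustrative assumptions, not taken from opsec101.org:

```python
# Hypothetical sketch of risk-based countermeasure selection:
# risk = likelihood x impact; only risks above a threshold get a countermeasure.
assets = {
    # asset: (likelihood of compromise, 0-1; impact if compromised, 0-10)
    "passport while traveling": (0.2, 9),
    "public reddit post":       (0.9, 0),   # meant to be public: zero impact
    "bank credentials":         (0.1, 10),
}

THRESHOLD = 0.5  # below this, accept the risk rather than add a countermeasure

for asset, (likelihood, impact) in assets.items():
    risk = likelihood * impact
    action = "deploy countermeasure" if risk >= THRESHOLD else "accept risk"
    print(f"{asset}: risk={risk:.1f} -> {action}")
```

Note that the high-impact assets dominate even at low likelihood, which is exactly the "I don't know how my passport will be stolen, I just know I can't afford it" reasoning.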

See opsec101.org for help.

You say the "nothing to hide" argument has been thoroughly debunked.

Can you lay out the argument? Because I still face difficulties when people bring it up, and many do.

https://www.huffpost.com/entry/glenn-greenwald-privacy_n_5509704

[removed]

We appreciate you wanting to contribute to /r/privacy and taking the time to post but we had to remove it due to:

Posts containing claims or issues that are anecdotal are impossible to verify, and there is a concern about people larping to generate sympathy or karma, or just trolling the community. You may also be revealing too much personally identifiable information, which we want to protect you from doing.

While r/privacy is a community that cares about its own, certain topics are best handled by professionals or are better suited for other subreddits.

You might also want to try using Reddit’s search function to see if there were posts on topics similar to yours.

Best of luck!

If you have questions or believe that there has been an error, contact the moderators.

Why people seek privacy should also be private, if they want it to be. Privacy is a human right, and a right is not an obligation. If a person feels good with privacy, it is their business, not mine. It has nothing to do with supporting illicit things.

Privacy is more fundamental today than ever. Your personal records can be used against you, out of context, at any time, and also against people you love. Cancel culture is out there. Someone can lose their job, their family, their reputation over a leak taken out of context, and the mob doesn't care whether it is true or what the context was. Privacy is linked directly to all human rights.

And I'm not talking just about tyrant governments or crazy apocalyptic scenarios like Soviet Russia or Nazi Germany. Imagine the Jews in Nazi Germany with our tech: no one would have survived. But cancel culture is out there today; times change, and we never know what the mob will cancel in 10, 20, or 30 years. We never know what kind of collage of your writings, voice, images, etc. can be made up to put you in trouble. You never know how or when some hacker might sell your personal information to someone who, for whatever reason, wants to destroy your life. You also need to protect those around you. Privacy is fundamental to a free world.