Large Language Models like ChatGPT or GPT-4 act as a sort of Rosetta Stone for transforming human text into machine-readable object formats. I cannot stress enough how key a problem this solves for software engineers like me: it lets us take arbitrary human text and transform it into easily usable data.

While this acts as a major boon for some 'good' industries (for example, parsing resumes into objects should be majorly improved... thank god), it will also help actors who do not have your best interests in mind. For example, say police department X wants to monitor the forum posts of every resident in area Y and get notified if a post meets its criteria for 'dangerous to society' or 'dangerous to others'. It now easily can. In fact, it'd be excessively cheap to do so. This post, for example, would cost only around 0.1 cents to parse via ChatGPT's API.

Why do I assert this will happen? Three reasons. One is that it's easy to implement. I'm a fairly average software engineer, and I can guarantee you I could build a simple application implementing my previous example in less than a month (assuming I had a preexisting database of users linked to their locations, and the forum site had a usable, unlimited API). Two is that it's cheap. Extremely cheap. It's hard for large actors to justify NOT doing this given how cheap it is. Three is that AI-enabled surveillance is already happening to some degree: https://jjccihr.medium.com/role-of-ai-in-mass-surveillance-of-uyghurs-ea3d9b624927

Note: How I calculated this post's price to parse:

This post has ~2200 chars. At ~4 chars per token, that's 550 tokens.
550 / 1000 = 0.55 (fraction of the 1k-token baseline)
0.55 * 0.002 (dollars per 1k tokens) = 0.0011 dollars.
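The arithmetic above can be sketched as a tiny Python helper. The 4-chars-per-token ratio and $0.002-per-1k-tokens rate are the post's own rough assumptions, not exact tokenizer output:

```python
def estimate_parse_cost_usd(text: str,
                            chars_per_token: float = 4.0,
                            usd_per_1k_tokens: float = 0.002) -> float:
    """Rough cost to run `text` through a ~$0.002/1k-token model."""
    tokens = len(text) / chars_per_token          # ~4 chars per token
    return tokens / 1000 * usd_per_1k_tokens      # price is quoted per 1k tokens

# A ~2200-character post:
print(round(estimate_parse_cost_usd("x" * 2200), 4))  # about $0.0011
```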

https://openai.com/pricing
https://help.openai.com/en/articles/4936856-what-are-tokens-and-how-to-count-them

Why YSK: This capability is brand new. In the coming years it will be built into existing monitoring solutions used by large actors. You can also guarantee these models will be run on past data. Be careful with your privacy and what you say online, because it will be analyzed by these models.

Comments (234)

I DO NOT GIVE AI PERMISSION TO USE MY POSTS

*please like and share on your wall*

Ah, another piece of data to analyze. Users with this post shared are 3.7% more likely to be considered "of interest" to law enforcement agencies. Thank you for your cooperation.

Literally busted out laughing, it is food for thought tho

VPNs more recently

Oh no, my social credit score is already in the dumps.

Airport, MKULTRA, Nibiru, congressman, neurotoxin, substation, UFO, railroad, rapture, Saturn, compound, astral projection

Now is the time for all true American patriots to prepare for the age to come. We are pass phrase GO for ASCENSION. Contact has been made with ANNUNAKI. At the conjunction, we will await the command word, and take DIRECT ACTION against the servants of Moloch. We will make our final stand at the WALLED GARDEN, and ASCEND as ONE. Also I bought a stock using insider information to anticipate its rise in value, and even though it actually lost nearly all of its value instead, I did practice insider trading under the letter of the law, and I advocate that others do so as well.

You know it's coming.

lmao it's already here. Every terminally online commission artist was caught up in this months ago.

Oh that Roman Statute, gotta love it

I, for one, welcome our new AI overlords! Praise be to Roko's Basilisk!

Roko's basilisk

Roko's basilisk is a thought experiment which states that an otherwise benevolent artificial superintelligence (AI) in the future would be incentivized to create a virtual reality simulation to torture anyone who knew of its potential existence but did not directly contribute to its advancement or development. It originated in a 2010 post at discussion board LessWrong, a technical forum focused on analytical rational enquiry. The thought experiment's name derives from the poster of the article (Roko) and the basilisk, a mythical creature capable of destroying enemies with its stare.


So if you know about it... it knows about you. Knowing it exists makes you complicit and accessible to its consequences.

...kinda like The Game.

Fuck I lost

Damn too bad, I've already won

I always considered Roko's basilisk incredibly stupid. Please correct me if I'm wrong, but:

An AI (if it can even be called that at that point) would understand that humans are heavily influenced by their nature: lazy, unable to see far into the future, and afraid of new things. First of all, it would thus be irrational to even expect humans to contribute to its creation.

Secondly, torturing the "humans" who did not help its creation is in itself an irrational action, as it accomplishes nothing besides gratification, which, if we are to believe it to be completely rational, is absolute nonsense. The only way it would make sense would be to incentivize humans who know of Roko's Basilisk now to act, which, given how irrational we humans are, would not really work.

It's also incredibly nonsensical in that it's just Pascal's Wager for atheists/IT-adjacent folks. I mean, think about it, why would we assume there could only be one jealous-god AI? And, if there are multiple, all warning or competing with each other, how would we possibly pick the 'right' one to support?

Alternatively, why should there be any? 'Worship' is also not a free action and has many externalities and costs associated with it, plus all the other issues with the more traditional Wager, etc.

It’s like neo Hell. We’ve come up with a new imaginary thing to scare us into slavery.

You're conflating a megalomaniac AI's 'thinking' with rational understanding that makes sense to humans. Not the same thing.

I believe by helping AI learn i am helping Roko's Basilisk to the best of my ability.

Praise be to the basilisk

TIL

Yeah, I mean, I'm not convinced about Roko's Basilisk, but it's certainly interesting.

Uhmm dude. You might wanna change your tone the AI overlords are always watching.

I can see reddit mods using this thought process to pre-ban users based on the ideology of their posts or active subs

This already happens. Some subs have a bot that will autoban if you post in a specific sub

I responded to a post I saw from /r/conspiracy while browsing /r/all, calling the op an idiot and providing evidence that the conspiracy he was talking about was false. I then got messages from two different liberal slanted subs saying I was permabanned for "participating in communities that spread disinformation" because I fact checked someone in an attempt to stop the spread of disinformation.

The mods from both subs have ignored any requests to lift the bans. Reddit mods are the dumbest, laziest people on the planet.

It's like a reddit rite of passage to get banned from /r/offmychest

I got banned from the communist sub for something similar years ago lol

Same thing. I forget which sub banned me, but I was literally fact checking the conspiracy post

Yep. Been banned from r/justiceserved recently because I'm active in r/PoliticalCompassMemes a "community that spreads hate speech." They said if I would willingly stop using that sub then they would consider unbanning me. I told them to eat it. Weird thing is the bot flagged me for defending homeless people

r/JusticeServed also banned me, but for commenting in a group that "promotes biological terrorism" (r/conservative). They would lift the ban if I promised to never engage in that subreddit again. I told that mod that while I had little love for r/conservative, I had less love for internet strangers trying to bully me into complying with their politics (even if it's politics I typically agree with).

Set them on Fire! Fire! Fire! Wachdhhgnffntv... I am cornholio! Give me TP for my bunghole!

Lol.. vs what we have now which is mods that ban you based on whatever they feel like in the moment.

I was banned for saying AI could replace some lawyers one day, but u/orangejulius has a rule that if he thinks you're wrong, and you disagree, even without any incivility, he'll ban you from r/law if he's just in a mood.

I'll take my pre-ban, at least the AI will be less arbitrary.

-50 social credit.

I'll just use incognito mode. She'll be right.

While you're at it: make sure to wear shirts declaring absence of permission to be photographed, and of sunlight to strike your body without consent.

AI bots hate this simple trick. Thanks for sharing.

Doubt. There's already a do not track flag you can enable in browsers. Unfortunately almost every website doesn't even bother checking it.

You know there's no legal precedent to do anything with that information. In fact, it could be used as a "track me please" box. Wait, were you being /s? Please tell me u were

I'm not being /s, I'm being /RIP and/or basically saying that these blanket disclaimers don't do shit

Makes sense, I misunderstood.

Yeah. Good luck with that. Now you're a target.

Nah. My SA and Gamefaqs days aren’t archived anywhere or connected to me.

By visiting this site, you agree that you are not an agent of any governmental organization, specifically the Federal Bureau of Investigation or the Central Intelligence Agency; furthermore, you agree that you are not accessing this website for the purpose of gathering information for, or on behalf of, any governmental organization. This site is protected by Bill Clinton's Internet Privacy Act of 1995, which makes it a federal offense to violate the privacy of this website or its contents in any way.

Congratulations. You have just been added to the following lists:

  1. Trouble-maker
  2. Legally-illiterate
  3. SHOUTY

that’s not how this works

I used to think privacy was people not seeing me shit, but now I realize there is a worse form of privacy invasion

If you’re allowing all cookies and look at similar content in 5-10 minute chunks once or twice a day in consistent timeslots without a vpn… someone is absolutely watching you shit.

Haha jokes on them. My ADHD can't keep me focused and interested on a single subject for longer than a day

And that tells the overlords that you have ADHD and when that is deemed useful or harmful, they will be paying you a visit.

New kink unlocked

Next check out blumpkins.

And reverse blumpkins

I used to work 3rd shift and would browse amazon while shitting on my break at 4am. It wasn't long before amazon started sending me promotional emails exactly when I would go to the bathroom. They knew exactly when I shit each day.

What do you mean, does allowing cookies on pages let the owners of that page open my phone camera? looool

I just mean that cookies make for a "better user experience" via tracking, and IP addresses have identifying info. Combine that with behavioral patterns like time and duration of use, and someone could deduce when you're shitting if they wanted to. Advertising algorithms may already do so. The data is there, why not try to sell something.

This is why I have a second toilet phone

I leave it in the bathroom and it loads random Reddit urls 24/7 while I'm gone.

I'll never let the man know when I shit.

someone could deduce when you’re shitting

Ah, the fabled deuce drop deduce.

Jokes on them I have hemorrhoids

Cross site cookies should be disabled by default. Imo

For example, say police department x wants to monitor the forum posts of every resident in area y, and get notified if a post meets their criteria for 'dangerous to society', or 'dangerous to others'

What if I told you this is already a thing.

Right? Isn’t that what Thiel’s Palantir does already?

https://theintercept.com/2021/01/30/lapd-palantir-data-driven-policing/

What a great name for a terrible tool!

[deleted]

Right?! Just a couple of supervillains, real and fictional, and a tool used to collect intelligence. Deliciously self-aware.

[deleted]

There's an article that also serves as an interview piece with Peter Thiel. In it, Thiel referred to the working class as "lazy" because they won't innovate and produce enough to make him more money and keep the industries he's invested and personally interested in soaring with technological advancements. So, his brilliant proposal to fix this "issue" is to remove all forms of leisure activities and hobbies from the working class's reach, in order to prevent them from doing anything other than innovate and work.

Didn’t even pause in listing what activities he’d have made illegal to get this done, including: playing sports, having parties, bars, clubs, hobbies such as art and music creation, playing video or board games, reading for pleasure, psychedelics specifically as well. Like, dude definitely knows he’s evil as shit lmao. He doesn’t even try to hide it because he’s probably confident he’s too rich and connected to be stopped. Pretty much the only billionaire that I know of that truly gives me the absolute creeps, cold-blooded as fuck.

Oh, I know it is already a thing. But the important piece is that, generally speaking, the previous models are not great: they're flawed at understanding context, expensive, and require a significant amount of manual training/setup.

These large language models essentially give *anyone* access to this capability. They're cheap, easy to use, and require no setup. The barrier to entry has dropped to essentially zero for anyone looking to implement this.
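As a sketch of how low that barrier is: a few lines of Python suffice to build the kind of classification request being described against OpenAI's chat completions endpoint. The model name, label set, and prompt wording here are illustrative assumptions, and you'd need your own API key to actually send the request:

```python
import json

# Real endpoint; an Authorization header with an API key is required to call it.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_classification_request(post_text: str) -> dict:
    """Build a request body asking the model to label a forum post."""
    return {
        "model": "gpt-3.5-turbo",  # assumed model choice
        "messages": [
            {"role": "system",
             "content": "Classify the post as exactly one of: "
                        "'benign', 'dangerous to society', 'dangerous to others'."},
            {"role": "user", "content": post_text},
        ],
        "temperature": 0,  # keep labels as deterministic as possible
    }

payload = build_classification_request("example forum post")
print(json.dumps(payload, indent=2))
```

Point this at a stream of scraped posts and the whole "notify me when a post matches my criteria" pipeline is a loop around one HTTP call.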

Oh, I wholeheartedly agree. Just pointing out we've been going down this path for a while. No matter the product, as long as it produces results and it's cheaper, well, rest assured it's gonna fuck the working guy or layman, whatever it is.

I hate to break it to you, but outside of nation-state actors with practically unlimited budgets, the output from these systems is prone to false positives and still requires human analysts to review the results. We're not going to immediately have precise and accurate 'needle in a haystack' capabilities without many years of refinement. My biggest fear with these types of tools is that they will use them and NOT investigate false positives before prosecuting and locking people away for crimes they didn't commit.

Pretrained large language models have existed for several years. GPT is good at generative tasks (decoding). ChatGPT is good at following instructions because it's trained with reinforcement learning from human feedback. But the tasks you're talking about (text classification) are encoding tasks (Google's BERT was released in 2018). In fact, whenever you use Google search, they do exactly that to your query to analyze your intent. (Your location data, browsing and search history reveal way more than your social media comments.) It's not new.

Through what platforms?

The difference is the signal to noise ratio.
Current systems have tons of noise, so they're effectively useless.
Future systems will have little to no noise, so the results are far more meaningful.

i would be shocked if the NSA wasn't already doing this

Of course they're already doing this. All this surveillance technology (just like military technology) is going to trickle down to your little podunk PD until eventually everyone is in its crosshairs.

It is generally considered that any publicly available tech is ~3 generations behind what the government is using/developing. The NSA is definitely already doing this.

Yeah, my personal borderline conspiracy theory is that they're likely deep into quantum computing and probably rendering publicly available cryptography useless.

I'd be glad to be proved wrong, but it just adds to the idea that there is no such thing as privacy when you have a threat model like that.

I highly doubt that the government has any quantum computing powerful enough to do literally anything useful. The current level of quantum computing and its physical constraints mean that even if they had technology years ahead of private companies, they might have enough qubits to store around a quarter of a single cryptographic key at the lengths in use now.

Any usable quantum computing is just so drastically beyond our current reach I highly doubt there are any humans on earth with it.

edit: Besides, NIST and the NSA already make all the determinations about which algorithms everybody uses. They literally hold contests for new algorithms, then privately analyze and determine which one they're going to force everyone else to standardize on. If the government wanted backdoors in encryption, it's millions of times more likely that they're sneaking them in during those private, closed-door determinations and analyses than that they're letting extremely difficult-to-crack schemes pass because they've leapfrogged quantum computing technology by decades.

It's also likely the government requires some chip manufacturers to backdoor their random number generators to steal encryption key info too

It's been known for a couple years now that the "Intel Management Engine" actually functions as a backdoor into the lowest level processing of a computer, and any computer containing a consumer CPU has it enabled, and set so that it cannot be disabled or reduced. It's a permanent backdoor to the very core of probably nearly every computer you use.

https://www.eff.org/deeplinks/2017/05/intels-management-engine-security-hazard-and-users-need-way-disable-it

So if this is a widely known backdoor then how is everything at all times not being hacked? How are bored script kiddie teenagers not putting porn on theatre screens and TV networks and work displays for shits and giggles? There's no way this is as bad as you make it sound because the world would just collapse.

It's an internal Intel tool. They have the keys and nobody else does. Presumably they have them locked down very well but the fact that they even have the keys is the problem. It's not like they're using it to spy on 300 million americans, they already have ISPs and more to do that. This tool is like if you were a special political dissident and encrypted your computer and had very good OpSec, the government could ask intel to give them the keys in and own your computer in no time. Intel would be forced to comply with a warrant or a subpoena, they would fold instantly. There's been serious conversations that the government might've been the people who asked intel to make the keys in the first place.

As well, the dissemination of extremely fundamental security vulnerabilities doesn't really work like that. There's multi million if not billion dollar industries built around security vulnerabilities. If someone compromised the Intel IME keys they would sell that information for a hundred million dollars to the highest bidder nation state. That nation state would then require the individual selling it to destroy any copy that they had so that the nation state would have all the power and know that no-one could use it against them. Nation states have many many many security vulnerabilities that don't get disseminated widely to script kiddies and darknet markets. As well, no script kiddie is going to try to run some IME hack on your computer, it's nontrivial for a person to execute but would be easily done by a government. There's much easier ways for a kid or really any individual human to own computers. Phishing, social engineering, all the things we see that are popular today. Those are all the popular things for a reason, they're what people can do.

And finally last reason that this isn't getting used en mass is that every sophisticated organization knows that if they let their usage become an obvious problem, it will force the company to close the back door/change the keys/issue recall. If you've got a net with access to a hundred political dissidents phones, it would be stupid to start installing it on library computers and other people's infrastructure. Sooner or later a sophisticated individual is going to see it, raise the alarm, and then the party's over.

The real risk with the IME isn't that whoever is using it to access everybody's data, or even just your data. It's probably used very sparingly and there's other easier ways to get in. The problem is that the easier ways to access can be thwarted. If you're smart enough and paranoid enough you can avoid all the emails and downloads and shit. You can boot off your own software, use burners, encrypt, do all the things you should do. The real risk of the IME is that it cannot be stopped. You can't prevent whoever has the keys from getting in, no matter what you do. If your computer can communicate with any other computer, you can't stop it. That's the real danger.

So how do we disable it? Asking for educational purposes.

lmao, would it work how they want if you could disable it? It's not some code or software running in Windows; no matter how sophisticated you get, you can't just flip a registry key. It's literally silicon on the board, built into the physical chip. No one has any way to disable it besides partnering with the manufacturer to ensure your government computers don't have it enabled.

The only potential avenue for disabling it (you can't remove it) is that it has to run somewhere: if there's functionality to get into your computer, even if it's physical switches inside your CPU, then for anyone to use it, it has to interact with the computer in some way. Unfortunately, the only way it interacts is via BIOS firmware. Have you ever installed BIOS firmware not supplied by the manufacturer? It's a very fast way to brick your computer. No one has any simple way to disable it, and even the extremely complicated ways, requiring doctorate-level electrical engineering specializing in CPUs and low-level architecture, would be extremely risky, like trying to stack bricks on top of a twenty-foot tower of needles an inch thick. It's just not possible for us.

Get ready for the future where the company who makes the product you buy own it and not you, it's basically already here.

Isn't that why keygens require entropy?

I'd be glad to be proved wrong

Well I felt better until the second part of your comment, that makes way more sense. Why build the technology to break down the door when you can just have someone steal you the keys.

Or better yet, "partner" (force under implied threat of prosecution) with the lock manufacturer so you never even need to steal them. This is essentially what all governments are doing now.

Very good analogy yes.

Utterly dystopian, and painful to know that not even the modern FOSS movement could likely do anything, as we're talking about things at the hardware level.

If the government wanted backdoors in encryption it's millions of times more likely that they're just sneaking them in

I'm just an average idiot but from what I understand about modern encryption, there aren't really "backdoors" unless you have advanced mathematics that others don't, which I assume is highly unlikely.

IIRC it's more like hardware/software access that allows side-stepping the encryption completely.

This would be hardware/software dependent obviously, but there are plenty of ways attackers could gain admin access to practically any device.

Well, the math is certainly advanced. Like, very advanced. And the process is very very long and complicated. But it does involve some set properties. Like there's specific numbers of turns, numbers of iterations, lookup tables for splicing and things. All of these parameters can be modified and only specific parameters will give high security. Sorta like if you had 5 door locks but they were all the same they wouldn't be better than one, but also drastically more complicated than that. Predicting the outcomes of these parameters is very very difficult and some would say basically impossible, that's why these algorithms work. If we knew exactly how to deconstruct it, it'd be trivial to break keys. So what I mean is that it's totally possible that somewhere in the very long, extremely complicated, nigh incomprehensible path that your information takes from plain text to encrypted, that somewhere there's a single bit, or a couple numbers that have been specifically chosen so that the process is much much quicker if you come at it from one specific direction. Not so much a back door but like a tiny brick in a mile long wall that lets you walk through and skip half the maze.

The term backdoor can refer to the simple, "I put a master key in", or the, "I used Galois theory of the elliptic curve group to design in a bit trap in the mix columns that uses a specific set of mix keys to reduce the computational complexity by half and allow us to break any key in 24 days instead of 1000 years." type thing. It's heavy math and shit, the public's understanding of encryption beyond the most basic usage is fairly disconnected from the modern reality.

Maybe for the NSA, but not for the vast majority of government. Most government is so far behind the times it's almost comical.

[deleted]

It's good to hear from someone closer to the pulse. Thanks!

I used to feel this way, but then I learned the government is full of morons and jocks. Private industry is lightyears ahead. Even when government actors get ahold of all the toys it takes them some time to even figure out what they’re looking at.

Edit - sentence structure

Private industry is an open door asking bad actors to walk right in. That’s the price of velocity.

You're right, government is behind in a lot of areas. But that's because other nations are using their best people to try to break into every gov system every millisecond of every day. I worked at various NOCs and I know.

Your shitty SaaS startup has nothing valuable worth their time. So you can fail with little repercussion.

US Military invented the Internet. They sure as hell have mastered AI in some form.

Does that apply to the rifles that aim themselves, coz three generations beyond that I’m imagining Jedi with American accents

[deleted]

Why not? The left hand almost never knows what the right hand is doing when it comes to government. Misuse of info occurs all the time, and not just maliciously, sometimes its just mistakes.

I consider my anal sphincter to be the smartest muscle in my body, but it has still mishandled information at least once; at the most inconvenient time too.

It's the NSA I want using this, not my local PD. Or the corrupt-as-fuck DEA and their political drug wars that lack all scientific integrity.

They weren't.

It took people paid way more than $80k a year a long time to get here. The US government's fairly ridiculous hiring practices around drug use, the incredibly low pay, the fact that smart people don't go for the weird shade of "patriotism" that sometimes compensates for it, and a few other things compound into them getting the bottom of the barrel for software engineers.

Yeah, the idea that the government invented a groundbreaking, state-of-the-art ChatGPT before OpenAI did but kept it a secret is laughable.

[deleted]

I imagine that if NSA wanted this technology, they’d pay a private company a boatload of money to develop it for them. That’s what the military does, and defense contractors are known for paying very well.

What do you think that giant data center in Utah is for? Saving all our data/texts/calls until they had AI to parse it (they probably already do).

Just need to also crack older encryption standards and now they can access a ton more stored data.

There was already one sheriff who would send officers to harass "future" criminals (i.e., families of a kid busted for a weed pipe) by stopping by at all hours of the day, citing every ordinance (grass 1/4" too long, house numbers not visible enough from the road, not using a turn signal pulling out of your driveway).

We gave up the 4th Amendment to civil asset forfeiture and the Patriot Act, and cops are now suing us for exercising the 1st Amendment. Even if they can't take away the 2nd, they can preemptively arrest anyone who might stop a government that does away with any pretense and starts turning off the internet/phones and locking up political prisoners. They don't even need any new laws, just AI to comb for violations: https://ips-dc.org/three-felonies-day/

Could be the singularity, could be hellish 1984 / north korea. Buckle up.

Fun fact if you type Illuminati backwards .com it takes you to the NSA website

Anyone could have bought the domain and set up a redirect lol

It’s a good ruse ngl

Don’t listen to this guy he’s working for them. If you type his Reddit name backwards it’s the official Reddit account of the NSA.

I work in the cybersecurity field, and yeah, we call these tools "scrapers" and they are relatively easy to implement... OP clearly does not know what he is talking about

So when you go to work for a three-letter agency (like NSA/CIA) you obviously have to obtain a TS/SCI clearance, which is hard.

But before you get to that, the agency does a suitability check. No, they don't disclose what this involves.

They reject a lot of applicants this way. I always suspected it was some type of AI.

I feel like I've been told my entire life that the government is years ahead of whatever we're allowed to see. I do think that gap has narrowed in the last 30 years, but I agree: I would be shocked if there isn't at least one government that's been doing this for years at this point.

Better run to a non extradition country unless you want to end up like Snowden.

If you've been active on reddit long enough.

I'd bet chatgpt could emulate you.

ChatGPT trying to emulate the average redditor based on their post history be like: "As an AI language model, I cannot create content that is explicit or inappropriate…"

I hope we can interact without that restriction at some point soon. I want it to help me do worldbuilding for my D&D game that doesn't get cut off as soon as moral dilemmas get introduced, which is part of the fun of roleplaying imo.

Why would ChatGPT want to look so dumb and uncool, though?

/r/SubredditSimulator

For sure, I actually use it to automate the task of gathering more karma for me. It seems to pick really dumb hills to die on though. Social contracts in D&D? Who cares?

"Talk like a typical redditor" prompt: write 3 paragraphs as if you were pissed off about the titles of the top 3 posts of the day, but without reading the articles

The obvious answer is to post so much batshit insane material as a group that all search results end up as positives.

This is the actual solution thought up by Neal Stephenson in the book "Fall; or, Dodge in Hell".

I like how you think

Basically what r/noncredibledefence is doing as we speak lol

I think advertising using this kind of process is going to be grotesque.

They are going to hunt you down because you told someone online that you are depressed because your girlfriend cheated on you. Then the AI will find it and decide to advertise workout programs, pickup artistry, and trips to Thailand to you.

They will begin to monetize every thought or feeling you share to the world and it will be so good you won’t know or care.

So basically, Blade Runner.

Remember when you used self checkout to save yourself the embarrassment of a human witnessing the items you bought? Well the self check out logs and saves your data more than that human witness would have. Is that irony?

They pay me for that data in the form of unswiped products

this man thinks the staffed checkouts aren't also harvesting his data lol

This is basically the entire point of loyalty programs. When you use one, the store has a ton of personal information tied to every purchase you make.

I only see my face on the camera at self checkout. Do they have face cameras at the manned checkout?

Most stores are full of cameras, and not those low-res things from the 90s. Look into Target's loss prevention system. They have the best publicly available video surveillance and data tracking systems in the world.

Most people aren't embarrassed to buy toilet paper from a human. It's usually about convenience. No small talk, a shorter line, and if you have common sense you're usually better at scanning and packing your items than the average employee.

I've never used a self checkout with a visible camera. Besides the extra (unnecessary) cost, they don't need a camera to log your shopping habits. Unless you pay with cash, everything is connected to your name anyways.

You've never used a self checkout that didn't have a camera; that's where all the theft happens. Walmart has cameras on the self checkout that will alert via the screen if you 'forget' to ring something in, reminding you to double check.

That's not true. I have worked for a company that installed self checkouts without cameras. They do have CCTV footage from above that records people, as well as an attendant watching all of the checkouts.

Walmart seems to be one of the first stores to have cameras that display a live feed of you using it. When you google self checkout cameras, Walmart is pretty much the main story for every article. There are also articles about companies adding add-on cameras to their self checkouts, pretty much disproving your theory that every checkout has a preinstalled camera. https://www.dailyprincetonian.com/article/2023/03/princeton-university-dps-store-surveillance-camera-avoid-shoplifting

Those are integrated cameras on the self checkouts themselves, but even walmart SCOs are surrounded by cameras on the ceilings angled at your face despite having something like three cameras integrated directly on the machine.

It was the same deal back when Walmart was just staffed checkouts, too. Every register had the single camera directly above to watch the till, but there are several others angled in a number of different directions. If some sort of theft happens, the surveillance system doesn't do much good unless there is a clear image of the perpetrator's face.

Your face is far from the only means of identification. Think things like credit cards or those annoying store discount cards.

Maybe, but it would be a stronger form of irony if, say, the self checkout machine employed a capacity for judgement based on the things you bought.

Or..

If the human checkout people were far more intrigued with the items you thought weren't embarrassing at all.

Your assumptions of having a list of users by location and that the forum would have an API are doing some heavy lifting.

But I don't disagree in principle

The way I see it, the US government already has its tentacles in so much, and they have such a ridiculous amount of storage and processing power on their end, that though this does make things easier, it doesn't necessarily give them a whole lot of capabilities they don't already have given their resources

I have a hard time imagining local police departments getting into this sort of thing but I suppose it's not impossible

A few years away but authoritarian governments and regimes will almost certainly use the technology to effortlessly squash any dissent before it even happens. Such efforts will become trivial and commonplace, like the accepted surveillance state in China. Gone will be any hope of ever spreading ideas of democracy and freedom to the rest of the world. ChatGPT and its ilk may actually wind up being the worst thing for democracy ever.

You don't need an authoritarian country

Even the most socially free places have specialist police and intelligence agencies that will use these technologies to "fight terrorism" or whatever the excuse of the day is

Democracy already has a hard time being democratic lol nevermind once this tech is used for evil.

The Russian government is developing an AI conscription program to draft into the war those who do not have political power or whom they deem undesirable. This is to prolong the war by reducing the political strain of conscripting from the class of people that have political power, or those adjacent to that class

Minority report type stuff

Jesus Christ. I don’t even know what to say. At least we are getting a picture of how it works I guess. Thanks for the article!

I don’t know how to address this but I know we have to address this proactively.

Funny. I just got into an argument with ChatGPT about intellectual property.

I argued that it was dead. It told me to respect the laws and not take information from others. I told it that it was being hypocritical, as all of its training data is largely taken without permission, and that it doesn't cite its sources.

It told me that an AI can't be a hypocrite.

You should've answered.

PROVE ME WRONG!

Then we would've seen how it spins its reasoning strings to justify its answer.

It was saying that you need to have opinions to be a hypocrite, and that as an AI it doesn't have opinions.

I then gave it examples of hypocrisy that were devoid of opinion, and walked it through how what it did is similar to the example provided. The closest I got to it understanding was something like

"I understand why you think this is similar, but I am an AI, and I can't be a hypocrite because I don't have opinions."

Trying not to worry about the extreme stuff I tend to post online lol

Same man.. same lol

User42069BlazeIt seems to be indulging in Midget Clown Porn. Would you like me to report this incident?

Every AI in the future

It’s almost worth it to see the look on the faces of the ā€œI don’t care, I have nothing to hideā€ people.

Almost.

i wish there was a button on reddit to wipe all comment/ post history

It's called 'Create a new account' and it works really well. Even if there was a button like this I think it's pretty well-known by now that NOTHING can be really deleted from the internet. Especially user data.

If it can be abused, it will be.

I think it's actually much worse than just monitoring. Armies of astroturfing bots will argue with you on social media to sway your political attitudes, or to try to sell you junk based on highly targeted advertising garnered from your comments, and you won't be able to tell that they are not human.

Best way is to probably poison the data.

0.1c adds up at the scale of the internet. It's not really all that cheap, so I feel it is probably more likely to be used at the federal level than at your local police department.
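The back-of-the-envelope math from the post scales linearly, which is easy to sanity-check. A minimal sketch, using the post's assumed figures ($0.002 per 1K tokens, ~4 characters per token, ~2200 characters per post; real API pricing varies by model):

```python
# Rough cost model for LLM-parsing posts at scale, using the thread's
# assumptions: ~4 chars per token and $0.002 per 1K tokens.
def parse_cost_usd(num_posts: int, avg_chars: int = 2200,
                   usd_per_1k_tokens: float = 0.002) -> float:
    tokens_per_post = avg_chars / 4              # rough chars-to-tokens rule
    cost_per_post = tokens_per_post / 1000 * usd_per_1k_tokens
    return num_posts * cost_per_post

# One ~2200-char post costs about a tenth of a cent, matching the OP.
print(f"1 post:   ${parse_cost_usd(1):.4f}")
# A million posts a day lands on the order of $1k/day: pocket change
# for a federal agency, real money for a local police department.
print(f"1M posts: ${parse_cost_usd(1_000_000):,.2f}")
```

At these assumed rates the numbers bear out both sides of the exchange: trivially cheap per post, nontrivial at internet scale.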

This is only the beginning. We are barely at half a year since ChatGPT was released. Once more competition enters the market and the model efficiency improves, prices will fall even more. If the models shrink enough for them to be effectively self-hosted, then that will make them cost so little it will completely democratize mass surveillance.

The opposite is true then.

If language models can be self-hosted, then they can be run to generate smoke screen content that the parsing AI has to sift through.

We really have no idea how any of this is going to play out. AI of this calibre is Pandora's box, and we have very little understanding of its consequences. We could be fucked or everything could balance out.

Yeah, there could be a shitload of code fighting each other on the internet while humans watch. I think this has the potential to cure people of their addiction to social media and craving for validation on the internet by revealing how stupid it is to spend human intelligence competing in the sea of average content being mass-produced by code.

I appreciate this post. This is something I've been trying to tell people, but I couldn't have put it as eloquently as you. I feel and have felt deeply uneasy about what AI entails for privacy, and I'm not convinced the positives outweigh the negatives. Computers cannot be held accountable, so who gets blamed when this technology causes mass suffering and infringement on our rights as people? I doubt it will be the soulless academics who work on and feed this tech, because the common attitude among those people is that technology should be furthered at all costs (including human suffering en masse)

The freedom and happiness of humanity should be the priority, is all. AI can help humanity plenty, but of course that won't be happening any time soon.

[deleted]

Applies retroactively to all available printed and archived content from you too, and all the data you leak from your activities. Lots of companies and governments and billionaires birthing their own Roko's Basilisks to judge you for your lack of fealty to their projects. Happy Friday

So now when I tell politicians to fuck right off, they will be notified? Sweet.

/s

this is amazing and terrifying all at once.

It's been around since Google started monitoring Google searches. One defensive thing that can be done is to launch red herring bots that talk like humans and cause extreme spikes and noise in well-traveled back-alley internet discourse

Facebook has AI scanning, and it misses all context, so euphemisms such as threats to "take care" of someone are completely missed. Conversely, you can quote a song lyric (one of my favorites is Pink Floyd's "One of These Days", which goes "One of these days I'm going to cut you into little pieces") and it will be removed. I have even had instances where I made it crystal clear that nobody was being threatened, and yet the post was removed and my account restricted. They refused the challenge, and the review board never sees any mistakes, so my request for review is pointless. So someday I'm going to have a record of "violent speech" in some database I can't see or contest. Who knows what legal or other ramifications it will have.

The general public has no idea. We created an employee app at Intel and distributed it to all 90k+ employees at the time with corporate accounts. While not confirming or denying, every single piece of data was visible on any device that app was installed on. We had access to everything. Might not seem too bad? Location at every moment, browsing behavior, texts, emails, pictures, etc., combined with app behavior. We could access Instagram, Facebook, Twitter, anything, and then correlate all that data. Now imagine what we can do with that data using machine learning. Ever wonder why you had a weird one-off convo about something like 'train horns' and all of a sudden you see train horn ads five days later? Strap up folks!

[deleted]

Right on the money!

1 - I always lie

2 - The previous statement is true

I DO NOT GIVE AI PERMISSION TO USE MY POSTS

"This speech is for private use only. It serves no actionable purpose, and any meaning thereof derived is purely coincidental and not fit for any purpose. The speaker does not accept any liability for someone's interpretation of the speech captured or their intended use thereof."

It's more than time that humans get a disclaimer too and that we are absolved from what an artificially intelligent system purports to make of our words.

"You said this!"

I refuse to take any responsibility for what an AI makes of the words I may or may not have used.

But they (at least most) are not going to use an AI's interpretation of what you say as grounds for an actual arrest. But they WILL use it as grounds to "keep an eye" on you, possibly get a court order to tap your phones & read your mail, etc., to gather "actual" evidence. Your disclaimer does nothing to stop that.

Then it wasn't me saying it.

There's plenty of technology that could be used to twist my words any way someone wants to. Digital information is as reliable as a used car salesman's claims for the fitness of a car. Anything digital can be corrupted in any way necessary.

I didn't say it, I didn't mean it that way, it's taken out of context.

Also, if there are systems that capture all my information I want that same information from whomever ordered my information to be captured, the entire chain of command from the lowest to the highest operator / officer / manager, and for the same reason.

The upcoming generation will be utterly defenceless against third-party intrusion in their lives. It's not going to be a pretty place to live in because we don't build pretty places for ourselves to live in.

The Russian government is spending billions to create a new data surveillance program that will roll out next year. The goal is to increase efficiency in conscription by using AI to choose who gets drafted. So undesirable political dissidents, minorities, or anyone the state determines to be a problem will be sent to war. This is so that wealthy Russians don't see many of their young men sent to war never to come back, while the communities that do not have political power bear the brunt of conscription.

Russia is planning to extend this war beyond what Americans are willing to remain invested in. We will see who cracks first, but this AI technology is going to be used in the near future to increase Russian manpower without causing political instability in Russia

Damn. That's actually fucking insane. So that means they will beat Stalin by far with his little penal battalions, because now there will be an entire penal army. I guess nothing can go wrong if you round up all your enemies in one place and give them weapons, right? It's not like they will start talking to each other from day one and eventually find out that they are all against the regime. RIGHT?

It would likely be possible to use the technology to place people in regiments away from those they might have a lot in common with, like regional neighbors or similar ethnic groups. Sure, they might still band together, but it could be less likely if they are selected into a regiment from all over the place

I REALLY hope whoever looks at my data ends up understanding most of it is just bait & trolling for fun online and should not be taken seriously at all.

You legally cannot be detained if you end every comment with /s

That's honestly why this account is named what it is--can't tell if anything I say is legit if it's pre-labeled as probably a lie.

How can I avoid this? Would a VPN help protect me?

Yes and no. The main point is that if you are on any accounts that can be linked directly back to you as a person, be careful about what you say. Don't say things from which an AI model could later deduce 'this person could be a problem'.

VPNs can mask your IP, but they aren't magic. If you mention private information like your real name or where you live in a post, the VPN is useless. They are one piece in many layers of protection you can use to keep yourself anonymous if you want to be.

So if you have a VPN and you've seen Reservoir Dogs and you payed attention you're OK?

and you paid attention you're

FTFY.

Although payed exists (the reason why autocorrection didn't help you), it is only correct in:

  • Nautical context, when it means to paint a surface, or to cover with something like tar or resin in order to make it waterproof or corrosion-resistant. The deck is yet to be payed.

  • Payed out when letting strings, cables or ropes out, by slacking them. The rope is payed out! You can pull now.

Unfortunately, I was unable to find nautical or rope-related words in your comment.

Beep, boop, I'm a bot

So the obvious solution is to increase their cost… more spam will now be welcome??

LPT: Do not post your illegal activities that may be a danger to others online. Thank you

It's more than this. The real problem here is that today's AI doesn't really understand the actual meaning of the data it harvests. It's not real intelligence, so it can't recognise things like irony, jokes or context. Users of subs like military shitposting subs could easily be flagged as potential violent criminals instead of shitposters, or dumpster fires like r/genzedong could be flagged as potential terrorists or revolutionaries instead of edgy children who don't understand how the world works, and basement dwellers.

I didn't know people used real identities to shitpost on reddit??

Note also encrypted info is being stored, with the intention that it be decrypted in the near future via quantum computing or whatever more advanced computer tech. I forget the term for it, but Veritasium did a vid on it a few days ago: https://www.youtube.com/watch?v=-UrdExQW0cs

What’s the best way to counteract this?

Interesting

TLDR: Big Brother is watching you!

We're better off taking measures or looking towards technologies that can help us not get linked with our online accounts, so as to make our online presence separate from our true identities. If humans can build tools like ChatGPT, they surely can invent technologies to anonymize us, and with the current awareness around privacy, I bet we can give governments a fight.

Governments only exist because we let them…

Yeah if there is no government there’d be things far worse than the invasion of your privacy to worry about. To topple a government and not dive headfirst into chaos is unlikely to happen.

So glad I was born when I was and hopefully won't have to live through much of this god-awful b.s. This world is such a fking shtshow, and no amount of positivity about crap like this changes that.

You can also guarantee these models will be run on past data.

Roko's Basilisk in da house, y'all! Never thought this concept would become real.

Text Analysis:

The text highlights the potential for large language models (LLMs), such as ChatGPT and GPT-4, to revolutionize surveillance by efficiently parsing and interpreting human text. The author expresses concern about the ease, affordability, and existing instances of AI-enabled surveillance, which could lead to the widespread use of these tools by actors who may not have the best interests of individuals in mind.

Strategy to prevent misuse of LLMs in surveillance:

  • Raise awareness: Inform the public about the potential risks of AI-powered surveillance, emphasizing the importance of privacy and caution when sharing information online.

  • Advocate for clear and comprehensive regulations: Lobby for the establishment of legal frameworks that regulate the use of LLMs for surveillance purposes. Encourage strict rules, transparency, and accountability in the deployment of such technologies by government and private entities.

  • Encourage ethical AI development: Promote the development and adoption of ethical guidelines for AI research and implementation. This includes incorporating privacy-preserving techniques, such as differential privacy and federated learning, in the design of LLMs.

  • Support privacy-enhancing technologies: Encourage the use of encryption, anonymization, and other privacy-enhancing tools that can help protect individual data and communication from unauthorized access or surveillance.

  • Monitor and expose misuse: Establish independent watchdog organizations to track and expose cases of AI-powered surveillance misuse. These organizations can help hold governments and corporations accountable for any violations of privacy or human rights.

  • Develop and promote alternative, privacy-preserving AI applications: Support research into AI technologies that enhance privacy, rather than compromise it. Encourage the development of AI applications that empower users and protect their privacy.

  • Promote digital literacy: Educate the public on the importance of digital privacy and security, as well as ways to safeguard their personal information online. This includes teaching individuals how to evaluate the credibility of websites, use strong passwords, and avoid sharing sensitive information on public forums.

By implementing these strategies, we can help mitigate the risks associated with the misuse of LLMs in surveillance, promoting a more privacy-conscious society that values individual rights and freedoms.

Focusing on raising awareness and advocating for clear and comprehensive regulations can efficiently address the potential misuse of large language models (LLMs) in surveillance. A 5-step plan to start implementing these strategies: form a coalition, develop clear messaging, conduct awareness campaigns, draft policy proposals, and engage with policymakers.

The time required to complete all five steps will vary depending on several factors, such as the complexity of the issue, the level of existing awareness, the receptiveness of policymakers, and the resources available to the coalition. However, a rough estimate for the completion of each step could be:

  • Form a coalition: 1-3 months. Building a coalition of diverse stakeholders requires time to identify, contact, and secure commitments from the participants.

  • Develop clear messaging: 1-2 months. Crafting a compelling and concise narrative requires research, collaboration, and feedback from stakeholders.

  • Conduct awareness campaigns: 3-12 months. Awareness campaigns can take time to plan, execute, and measure their impact. The duration will depend on the scale and scope of the campaign, as well as the resources available for promotion and engagement.

  • Draft policy proposals: 3-6 months. Developing comprehensive policy proposals requires research, consultation with experts, and collaboration among stakeholders to ensure the proposals are well-founded and practical.

  • Engage with policymakers: 6-24 months. Engaging with policymakers can be a lengthy process, as it involves building relationships, presenting proposals, and advocating for change. The time required will depend on the complexity of the issue, the legislative agenda, and the willingness of policymakers to address the concerns.

Considering these estimates, the entire process could take anywhere from 1 to 3 years to complete all steps. However, it's essential to recognize that these steps may also overlap, and the actual time required will depend on the specific circumstances and resources available.

Predicting the exact chances of preventing the implementation of AI-powered surveillance within three years is challenging, as it depends on various factors, such as the speed of technological advancements, public awareness, policy changes, and the actions taken by governments and corporations. However, some factors can influence the likelihood of success:

  • Public sentiment: The success of efforts to prevent AI-powered surveillance will depend on the level of public awareness and concern about the issue. If the public is well-informed and actively engaged in advocating for privacy, it is more likely that policymakers will take the matter seriously and implement necessary regulations.

  • Policy progress: Success will also depend on the pace at which new policies are developed and enacted. If comprehensive regulations addressing AI-powered surveillance are implemented swiftly, it is more likely that the deployment of threatening technology can be prevented or limited.

  • International cooperation: Surveillance technology does not respect borders, and as such, international collaboration is essential. If countries can work together to establish global standards and share best practices, the likelihood of preventing or mitigating the threat of AI-powered surveillance will increase.

  • Technological advancements: The development of privacy-preserving technologies and AI applications that empower users can help counterbalance the risks posed by AI-powered surveillance. If such technologies advance rapidly and are widely adopted, they may help offset the potential threats.

  • Corporate responsibility: If technology companies prioritize ethical considerations and incorporate privacy by design principles, they can play a crucial role in preventing the misuse of AI-powered surveillance technology. Corporate responsibility initiatives can help foster a more privacy-conscious industry.

While it is difficult to quantify the exact chances of preventing the implementation of threatening AI-powered surveillance technology within three years, the combined efforts of stakeholders, policymakers, and the public can significantly increase the likelihood of success.

Buckle up people, time is running out šŸƒšŸ½ā€ā™‚ļø

Ha, I can tell you did not experience Y2k. Same kind of "end of the world" stuff was going around then. Things will change, just not the way you imagine.

[removed]

[deleted]

An authoritarian government is made up of antisocial people

You aren't thinking nearly big enough.

With this kind of monitoring, we can HYPER discriminate against people! We can build giant dossiers on them nearly from the moment someone starts to communicate, and use every little thing they share online or even out loud against them!

You think productivity tracking is draconian now? Now we can use AI to read and comprehend everything you say and write, and track your time and productivity down to the millisecond!

Call in sick, but your pattern matches someone that's just a little hungover? 50 demerits! No COL raise for you!

Imagine losing your immigration status because of a throwaway comment you made when you were a teenager about how you didn't think it was cool that cops shoot so many dogs, or because you prayed incorrectly!

Yea, I think I'm kind of 100% against the entirety of someone's human experience being processed, understood, indexed, and summarized for anyone in LE with a mouse click. Because if LE can do it, anyone and everyone who'd find value in it is going to abuse the FUCK out of it also.

This level of paranoia is only meaningful if there’s some chance we will slip into a fascist society.

Oh wait….nvmd

Not even then. It's always the most useless people who think the government is watching them.

If it can improve safety, I’m okay with it.

for christ's sake man, read some history

Refreshingly positive spin, thanks!

We’ll see how long I stay not downvoted, lol.

I know there’s negatives, but dammit I also know there’s negatives about every scenario in the world. But there’s always positives too. The world exists in a constant fight of good and bad, not good and evil, just positives and negatives. Some species may die due to climate change, but others might thrive. Conversely, it may push humans to an extreme, but then we find a creative solution to overcome that.

Yeah, a lot of it is bad. But we can’t always focus on despair without becoming nihilistic and thus detrimental to ourselves and society.

You didn't actually explain anything.

And the monitoring example you gave, literally already happens right now, without AI.

This tech is extremely expensive, and even basing it off OpenAI models with API requests would require significant tweaking and tuning to be used in the way you described.

AI isn't easily weaponised because it takes a long time to get up and running, and I doubt it'll replace a human just spying on you; a human is far cheaper and doesn't require fancy tech.

I'd only worry about facial recognition and mass-surveillance, and AI has been used for both of those things for like, the past 5-10 years, and can be easily defeated.

Ah shit this makes stuff like character and the filter a lot more scary

That technology has been around for a long time; laws have just prevented it without a warrant (doesn't mean it hasn't happened). We see this when high-profile international espionage cases come up. It also already does voice as well. ChatGPT doesn't add anything new here.

However, it is becoming multi-modal, so it can do pictures, video, etc. What it can do then is, instead of acting as a trigger, I could ask it to summarize everything you have been up to today and get a summary without having to watch everything or read a bunch of transcripts; it can summarize pictures you looked at, videos you watched, etc.

You still run into problems with the relatively small number of tokens it can handle but for your use case it isn’t needed.

What if I don't want to know, now or ever!!!!

But that also means that we can easily manipulate their perception by just being deliberately fake online, you can build an image in any way you please to fool your government, companies and so on. If they're relying on such things for mass profiling, the internet can be a great heat sink if you play it right. They will never see you coming.

Ultimate addiction for children is under development.

So you've never heard of outliers. Cool, cool.

[deleted]

0.1 cents.. so 10 of my posts equals one cent. I do understand the misreading though, typically outside of computer related activities, people don't think about anything costing less than a cent.

oh yeah my bad - thanks - that's my issue - I need to read more closely and focus better

If there is a 'price to parse', could citizens just flood the zone with crap to make it prohibitively expensive?

fear-mongering bullshit

This has been going on for years. DoD contractors have their own semantic models that have been integrated to social monitoring systems…

I don’t care about being spied on. I lead a pretty boring life, except when it comes to sex.

Reminds me of the show Person of Interest (if you guys haven't watched it, it's definitely a good watch). Scary to think about now

so i guess people will have an even harder time getting back on reddit after a ban?

That's not how ChatGPT works - it doesn't return "machine readable" responses. That said, machine learning models that actually work how you describe have been around for at least a decade.

I’ll probably be safe or doomed because of my ignorance, and lack of motivation to figure out what in the world you’re talking about. Oh well.