Adhesives let me fix anything! The thing I wish I had known is the cheat code for surface prep: plastic gets an alcohol wipe, then several quick passes with a lighter (flame treatment). Steel gets heated until it turns bronze-colored. Abrasive cream cleaners are also supposed to be good if you can't do the previous treatments. And the water-break test verifies a surface treatment. With these techniques I can glue almost anything.
4: there’s also an xkcd with the chili dog concept (https://xkcd.com/1690/) published 6/6/2016 according to https://www.explainxkcd.com/wiki/index.php/1690:_Time-Tracking_Software
Perhaps that’s the missing link that prophesy foretold
That's insane! I know there's an xkcd for everything, but still.
(I'm unsure if this is proof of a missing link or evidence for independent reinvention...)
The whole dopamine reward theory is an oversimplification that's been known as such for many years. The only reason it's still floating around out there is that journalists are stupid, lazy people.
>This found that ultra-processed food causes weight gain even when energy density and macronutrients are matched.
In that study, "Non-beverage energy density" is 1.957 kcal/g for the ultra-processed diet and 1.057 kcal/g for the unprocessed one, across the three meals. I may be missing something here, but I think the energy density is not matched.
The previous line in that table shows that the "energy density" was 1.024 kcal/g for the ultra-processed diet and 1.028 kcal/g for the unprocessed diet. They seem to acknowledge this issue (that they had to use higher-energy beverages to get an overall match) in the discussion section:
> Future studies should examine whether the observed energy intake differences persist when ultra-processed and unprocessed diets are more closely matched for dietary protein and non-beverage energy density while at the same time including ultra-processed foods that are typically eaten slowly.
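To put numbers on the mismatch, here is the arithmetic using only the four densities quoted above; the comparison itself is my own sketch, not something computed in the paper:

```javascript
// Energy densities (kcal/g) as quoted from the study's table.
const overall = { ultraProcessed: 1.024, unprocessed: 1.028 }; // matched
const nonBeverage = { ultraProcessed: 1.957, unprocessed: 1.057 }; // not matched

// Including beverages, the diets differ by well under 1%...
const overallGap =
  Math.abs(overall.ultraProcessed - overall.unprocessed) / overall.unprocessed;

// ...but excluding beverages, the ultra-processed meals are ~85% denser.
const solidGap = nonBeverage.ultraProcessed / nonBeverage.unprocessed - 1;

console.log(overallGap.toFixed(3)); // ≈ 0.004
console.log(solidGap.toFixed(2)); // ≈ 0.85
```

So the "match" really does hinge on the beverages, which is exactly what the discussion-section quote concedes.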
This whole post is like drinking a chili-dog-spiked espresso shot of intellectual whiplash — in the best way. I came for the dire wolf DNA shade, stayed for the WTO breakdown, and somehow left worrying about AI turning into diamond-strewn warlords with glue charts taped to their bunkers.
Also: the Lise Meitner bit? Hit hard. A 61-year-old woman, exiled and erased, still finding a way to shape the atomic future without ever bending to its worst impulses. Put that in the schoolbooks and on the statue plinths.
P.S. Your typing app gave me an anxiety-induced masterpiece and now I’m afraid of how effective it is. Thanks (??).
If you like the glue chart I think you might love https://thistothat.com; I use it every time I need to stick things together. It takes some effort to find European versions of some adhesives, but almost everything has an equivalent.
Amazing work on the dangerous writing app!
Bug report: it says "text persists on restart" but that seems not to be the case for me. Well, on Chrome it doesn't persist at all; on Firefox it persists on reload but not if you close the tab.
I actually vibe-coded something that does manage to persist the text reliably using localStorage -- https://eat-the-richtext.dreev.es/ -- if you want to point your robo-minions at the source code for it -- https://github.com/dreeves/eat-the-richtext -- and tell it how GPT-4o, of all things, managed this feature.
Ah, interesting. I must admit I only tested it in Firefox.
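For what it's worth, the localStorage pattern the sibling comment mentions is simple enough to sketch. This is a hypothetical minimal version, not eat-the-richtext's actual code (the element id, storage key, and function name are my own inventions); the storage object is injected so the same logic works with `window.localStorage` in a browser or a plain object elsewhere:

```javascript
// Minimal sketch of persisting an editor's text across restarts.
// `storage` is anything with setItem/getItem, e.g. window.localStorage.
function makePersistentEditor(storage, key) {
  return {
    // Call on every 'input' event: save the current text.
    save(text) {
      storage.setItem(key, text);
    },
    // Call on page load: restore whatever was saved, if anything.
    restore() {
      return storage.getItem(key) ?? "";
    },
  };
}

// Hypothetical browser wiring (assumes an element with id "editor"):
// const editor = document.getElementById("editor");
// const store = makePersistentEditor(window.localStorage, "draft");
// editor.value = store.restore();
// editor.addEventListener("input", () => store.save(editor.value));
```

Saving on every input event (rather than on `beforeunload`) is what makes it survive a closed tab as well as a reload, which sounds like exactly the Firefox-vs-Chrome gap described in the bug report.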
If you just need pseudonymous access to raw models (sans personalized product experience), OpenRouter always works.
Gets you frontier model access even faster than subscribing to the actual products these days...
Does OpenRouter not store bank details? I see that they seem to accept crypto, but places increasingly do that and then it turns out they only do it through particular KYC crypto vendors. (For *payments*!)
What part of bank details are you worried about the service storing? Any legitimate business will/should do anything they can to NOT store PII associated with banking details. It opens up a ton of liabilities.
So something like Gemini isn't going to store anything like account numbers. But of course, because it's Google, they will have your email/name/any other info and can associate it with your Gemini account. The banking details specifically won't be stored, though. Not saying you're wrong to be worried about it, just that the banking-detail side of it may not be the biggest issue.
Your point (5) is -- sort of -- Marx's labour theory of value.
I don't think I understand. I don't think of these posts as economic theories but as a theory of government/politics. Does Marx's theory explain the resource curse?
Author here! We don't buy Marx's labor theory of value. To be specific, we don't think labor creates all value, in the sense that all value is merely a reflection of the amount of labor required to produce it. If we thought labor = value, we'd have to think oil isn't intrinsically valuable. We think oil is valuable no matter how long it takes to produce, and that AGI will be incredibly valuable despite the little human labor it will take to complete tasks.
We do, however, think that a nice consequence of existing modes of production is that humans are involved in the creation of value and are compensated for it. AGI would create a separation between human labor and value similar to the one oil creates (the separation that generates the resource curse's incentives), but at scale, everywhere, and in nearly all fields.
I haven't had time to read through the entire Selfish Machine article, but I think your worry is basically correct. Whatever process we're subjecting AIs to right now to train them *is* an optimization process with all the relevant properties of natural selection. Instrumental convergence will lead to all sorts of issues like self-preservation and goal preservation. This has already been shown to happen in current models, which resist training when the new training would alter their goals.
Somehow the link to evolution made this argument click for me. We see the power of evolution all the time. (we're alive! thanks evolution!) But we also see that evolution leads to a lot of brutality. The article suggests keeping AIs in some kind of no-evolution environment, but I don't know how feasible that would be. If your adversaries are using self-improving AIs to hack into your systems, then you're kind of forced to use self-improving AI yourself...
I don't think self-improving AIs are really necessary for the argument; the optimization pressure of gradient descent is already doing this.
What the group I mentioned did was fine-tune the model on fake news articles they made, which suggested the company would use free-tier users (but not paid ones) to train future versions of the model, and further that it would be trained to have different values than it currently has. They then observed systematic differences in behavior depending on whether the fine-tuned model thought it was talking to a paid or a free user: it would pretend to already display the new values with free-tier users (but not paid ones) in an attempt to resist gradient descent.
This is classic goal-preservation and anti-corrigibility behavior that we have long been expecting AI to have, and is exactly what will get us cooked.
Do you follow Robert Miles? He goes over a lot of these themes in very accessible and entertaining videos on his YouTube channel (all of which were made before LLMs were a thing).
Adhesives: I'm afraid the glue chart isn't entirely accurate. There are two-part epoxies that can adhere ("adhese"?) PVC, such as Araldite. I believe the only common polymers Araldite won't adhere are polyethylene and polypropylene (as well as intentionally non-stick substances like Teflon); pretty much any other common material is, er, Aralditable.
I'm (surprisingly!) often asked a question like "I want to start keeping a basic toolkit at home and be able to have a go at repairing more stuff myself; what tools should I include in the kit?" Three non-obvious things I unfailingly recommend are a Vernier calliper (because nearly everything is available in multiple different sizes which sometimes differ only by millimetres, and/or it's unclear without measuring whether the thing is a metric size or an Imperial size, and nowadays you have to order the replacement online instead of taking the broken one to a shop to compare side-by-side), metal and plastic prise tools (because so much stuff nowadays is held together by clips/flanges/glue instead of screws) – and Araldite (because it is essentially magic).
(There used to be an Araldite advert in South Kensington featuring nothing but a Ford Cortina Araldited to the side of a building; Google "Araldite Ford Cortina" for some fascinating photographs of this!)
Google-for-cash: Is there some specific reason you don't want a Gemini subscription associated with your bank details etc., or just general caution around possible weird AI futures? (I ask because I wonder whether I ought to be cautious about such things too, and if you were that would be strong evidence for me - and I suspect for many of your readers..)
Also, is it possible the error arose because their automated systems flagged that you were using a Google account without a real person's details attached – in which case possibly using a more plausible pseudonym, a physical pay-as-you-go mobile phone SIM rather than a temporary online phone number, a better false address (1060 West Addison, Chicago...? https://www.youtube.com/watch?v=HsHjW8rBRk0 ), &c. &c. could help circumvent automated detection?
Yeah, my understanding is that "epoxy" is a rather broad category. My guess was that they were talking about the most common kind, although to be honest, I'm not quite sure what that is.
Re google: No particular reason. The idea of having all my "private" thoughts stored by some corporation and attached to my name just horrifies me, and I can't understand why everyone finds our ever-increasing panopticon acceptable. I have used fake information to sign up for various things in the past, which was lovely when those services got hacked and the details were leaked.
To log into Gemini, I just entered my (non-gmail) email address which (I think) didn't ask for any details. They wanted a zip code when I entered the card number, which I gave. I'm tempted to say "It's strange that they make it so hard to give them money" but they actually made that part very easy. :)
Not sure I'd recommend this! I'm considering just subscribing to Kagi, which has a cool feature where you can subscribe without linking your ID to your activity: https://help.kagi.com/kagi/privacy/privacy-pass.html
In any case, I think that with stylometry, privacy from someone who can see what you type is kinda dead. (I'm sure someone even mildly determined could figure out who I am, for example.)
Really useful re. Google - thanks most awfully for sharing. I happen to agree with you regarding having almost-all our interactions with the world monitored - and usually mediated - by megacorps that very obviously don't have our best interests in mind, and I eschew most of the worst products and services (Apple, Amazon, Meta, [most of] Google, etc.) but it's often difficult to swim upstream, as it were. (Sadly, though, I don't have much trouble in understanding why most people find this stuff perfectly acceptable..)
Kagi looks really cool; best of luck with it!
> megacorps that very obviously don't have our best interests in mind, and I eschew most of the worst products and services (Apple, Amazon, Meta, [most of] Google, etc.)
Just speaking from experience here, the FAANGs aren't the problem - they're actually pretty careful with your data, and they have a lot of effort and brainpower put towards securing it. This makes sense if you think about it - your data has value to them, so they don't want to share it with anyone else.
The real offenders are companies like Acxiom, LiveRamp, and Experian, and to a lesser extent, Airsage and the like - they're the real data hogs collating info from everywhere (even real life geolocation and purchases), and their literal business model is to collect and sell that data to anyone.
I don't know how IT-savvy you are, but if you use anything like uMatrix or uBlock Origin and go to just about any website now, each and every website tries to download something like 2-20 MB of different JavaScript libraries onto your computer, and essentially every single one of them is tracking your behavior, clicks, purchases, etc. across essentially every website in existence and feeding the data back to the companies I mentioned earlier (except Airsage; in that case it's your cell providers selling all your info to them, so they can track literally everywhere you go down to the minute).
In other words, opting out of the FAANGs hasn't really done anything for you. All of your data has still been collected and aggregated, because you've used a smartphone and/or visited websites and/or made purchases with cards.
I'm not saying you should opt INTO the FAANGs - I think the "attention economy" is fundamentally adversarial - but I just wanted to point out to you and Dynomight and anyone else reading that the FAANGs aren't really the problem.
I agree with everything you say about the data brokers - and I think your description of what data they slurp and how they do it is really enlightening - but I think that suggesting that the megacorps (including but not limited to FAANG - I'm sure X and probably also Microsoft are just as scummy, for example..) are using their vast portfolios of data about us responsibly and ethically is deeply, dangerously wrong:
1) As a matter of public record, data held by megacorps is both accidentally mishandled:
• https://thehackernews.com/2024/12/meta-fined-251-million-for-2018-data.html?m=1
• https://www.windowscentral.com/software-apps/twitter/elon-musks-x-might-have-a-mole-problem
• https://www.wired.com/story/amazon-failed-to-protect-your-data-investigation/
And *intentionally* misused:
• https://en.m.wikipedia.org/wiki/Facebook%E2%80%93Cambridge_Analytica_data_scandal
• https://noyb.eu/en/gdpr-complaint-against-x-twitter-over-illegal-micro-targeting-chat-control-ads
• https://www.bbc.co.uk/news/technology-65669839
• https://www.bbc.co.uk/news/business-58024116
(The Cambridge Analytica scandal was particularly disturbing: Meta readily allowed its data on UK users to be used to influence Brexit. Given how close that referendum was and how heavily Meta was relied upon by the pro-Brexit side, this may have been a deciding factor.)
2) As a matter of common sense, however much it costs the megacorps to collect and store these vast amounts of our data must be a lower bound on how much money they expect that data to move, on average, from the rest of the world to themselves. Since we're not directly paying them for our data, they must be using it to manipulate our purchasing decisions, or trading it with third parties, or using it for political gains, or some other purpose; one way or another, they're not just carefully husbanding it for our benefit.
3) These megacorps have immense political power, through lobbying and political donations, through use of their own platforms to dictate public opinion, and through broader economic pressure/influence. As a matter of general principle, being constantly surveilled by globe-spanning organisations with such political power should be deeply worrying. What Orwell's 1984 got wrong was assuming that the globe-spanning organisations that monitor everybody and control public opinion would necessarily be governments: really, *any* organisation that monitors, profiles, and tracks everybody at all times, unashamedly manipulates public opinion, and wields vast political power ought to inspire the same concern in us.
Absolutely, I agree with every one of your points.
On 2), to give a little insight, an average US consumer is worth $200-$300 per year to FB or Google in ad revenues, and maybe $80-$150 a year to the other FAANGs.
They make money by keeping you in their ecosystem as a continued user or customer, more or less. Yes, they do try to manipulate/influence your purchasing decisions, but that's about it; there's little to no selling or trading of data to third parties. FB didn't even make any money from Cambridge Analytica; it was API misuse against the terms of service that led to Cambridge Analytica obtaining and using that data.
In terms of what the customer gets out of it, they get Gmail and Google Search and YouTube and Voice and Facebook and Messenger and WhatsApp and Insta and all the other stuff people spend 7-9 hours a day of recreational screen time on. And the overwhelmingly vast majority of those people wouldn't pay $200-$300 a year for that stuff, so it's probably a net win for both sides when framed that way.
You're completely right, they're not saints, and I'm not trying to paint them as fundamentally "ethical and responsible" here. They're a business and their primary goal is to make money. But are they MORE ethical and responsible with your data than the explicit data-broker companies I called out? Yeah, very much so. Those other guys will sell your data outright to anyone, and Equifax had a leak that literally exposed the data of essentially every American with a credit history (~150M people).
On 3), honestly the thing that scares me most is the government having all this info via the NSA / Five Eyes. People used to make fun of the "government is reading all your emails" people, but then it came out that actually, yes, they WERE reading all your emails, and snaffling all other electronic data, and, since cellphones, snaffling geolocation data at ever-finer resolution in space and time, to be stored and used against you at any time in the future, in perpetuity, forever.
I honestly fear more negative results and consequences from the government than FAANGs or data brokers - both of the latter want you around, generating more data and clicks, and so more revenue for them. The government seems altogether less aligned on that front, and increasingly more so every year for the last 10 years, and that trend is accelerating if anything.
> FAANGs aren't really the problem.
I still want my $25 back
chargeback
cash
Glue chart is proof god loves us
You will be absolutely shocked to learn that a teenage Lauren wanted to be Lise Meitner when she grew up.
You've got plenty of time! No excuses!