nswd



robots & ai

Someone with half your IQ is making 10x as much as you because they aren’t smart enough to doubt themselves

burtcher-baker-imp-kerr.png

Maybe the easiest lucrative job in finance is:

Take a job at a hedge fund.
Get handed an employment agreement on the first day that says “you agree not to disclose any of our secrets unless required by law.”
Sign.
Take the agreement home with you.
Circle that sentence in red marker, write “$$$$$!!!!!” next to it and send it to the SEC.
The SEC extracts a $10 million fine.
They give you $3 million.
You can keep your job! Why not; it’s illegal to retaliate against whistleblowers.
Or, you know, get a new one and do it again.

[…]

The theory here is that the US Securities and Exchange Commission has a whistleblower protection rule that says that “no person may take any action to impede an individual from communicating directly with the Commission staff about a possible securities law violation, including enforcing, or threatening to enforce, a confidentiality agreement.”

[…]

Anyway:

OpenAI whistleblowers have filed a complaint with the Securities and Exchange Commission alleging the artificial intelligence company illegally prohibited its employees from warning regulators about the grave risks its technology may pose to humanity, calling for an investigation.

[…]

OpenAI made staff sign employee agreements that required them to waive their federal rights to whistleblower compensation, the letter said. These agreements also required OpenAI staff to get prior consent from the company if they wished to disclose information to federal authorities. OpenAI did not create exemptions in its employee nondisparagement clauses for disclosing securities violations to the SEC.

{ Matt Levine | Bloomberg | Continue reading }

‘There are a lot of images out there today that serve no purpose.’ –Paolo Roversi

221.jpg

Any viral post on X now almost certainly includes A.I.-generated replies, from summaries of the original post to reactions written in ChatGPT’s bland Wikipedia-voice, all to farm for follows. Instagram is filling up with A.I.-generated models, Spotify with A.I.-generated songs. Publish a book? Soon after, on Amazon there will often appear A.I.-generated “workbooks” for sale that supposedly accompany your book (which are incorrect in their content; I know because this happened to me). Top Google search results are now often A.I.-generated images or articles. Major media outlets like Sports Illustrated have been creating A.I.-generated articles attributed to equally fake author profiles. Marketers who sell search engine optimization methods openly brag about using A.I. to create thousands of spammed articles to steal traffic from competitors.

Then there is the growing use of generative A.I. to scale the creation of cheap synthetic videos for children on YouTube. Some example outputs are Lovecraftian horrors, like music videos about parrots in which the birds have eyes within eyes, beaks within beaks, morphing unfathomably while singing in an artificial voice, “The parrot in the tree says hello, hello!” The narratives make no sense, characters appear and disappear randomly, and basic facts like the names of shapes are wrong. After I identified a number of such suspicious channels on my newsletter, The Intrinsic Perspective, Wired found evidence of generative A.I. use in the production pipelines of some accounts with hundreds of thousands or even millions of subscribers. […]

There’s so much synthetic garbage on the internet now that A.I. companies and researchers are themselves worried, not about the health of the culture, but about what’s going to happen with their models. As A.I. capabilities ramped up in 2022, I wrote on the risk of culture’s becoming so inundated with A.I. creations that when future A.I.s are trained, the previous A.I. output will leak into the training set, leading to a future of copies of copies of copies, as content became ever more stereotyped and predictable.

{ NY Times | Continue reading }

and { When Marie was first approached by Arcads in December 2023, the company explained they were seeking test subjects to see whether they could turn someone’s voice and likeness into AI. […] Marie doesn’t worry that by giving up her rights to an AI company, she’s bringing about the end of her work—as many actors fear. […] Hyperrealistic deepfakes and AI-generated content have rapidly saturated our digital lives. The impact of this ‘hidden in plain sight’ dynamic is increasing distrust of all digital media—that anything could be faked. }

Madame Medusa: You FORCE them to like you, idiot!

3.jpg

More Generative AI Tools are Coming to Social Apps — there are already a heap of these options available, which, intentional or not, effectively reduce, and even eliminate, human input in the process.

A stereo’s a stereo. Art is forever.

Tech bubbles come in two varieties: The ones that leave something behind, and the ones that leave nothing behind. […]

cryptocurrency/NFTs, or the complex financial derivatives that led up to the 2008 financial crisis. These crises left behind very little reusable residue. […]

World­Com was a gigantic fraud and it kicked off a fiber-optic bubble, but when WorldCom cratered, it left behind a lot of fiber that’s either in use today or waiting to be lit up. On balance, the world would have been better off without the WorldCom fraud, but at least something could be salvaged from the wreckage.

That’s unlike, say, the Enron scam or the Uber scam, both of which left the world worse off than they found it in every way. Uber burned $31 billion in investor cash, mostly from the Saudi royal family, to create the illusion of a viable business. Not only did that fraud end up screwing over the retail investors who made the Saudis and the other early investors a pile of money after the company’s IPO – but it also destroyed the legitimate taxi business and convinced cities all over the world to starve their transit systems of investment because Uber seemed so much cheaper. Uber continues to hemorrhage money, resorting to cheap accounting tricks to make it seem like they’re finally turning it around, even as they double the price of rides and halve driver pay (and still lose money on every ride). The market can remain irrational longer than any of us can stay solvent, but when Uber runs out of suckers, it will go the way of other pump-and-dumps like WeWork.

What kind of bubble is AI? […]

Accountants might value an AI tool’s ability to draft a tax return. Radiologists might value the AI’s guess about whether an X-ray suggests a cancerous mass. But with AIs’ tendency to “hallucinate” and confabulate, there’s an increasing recognition that these AI judgments require a “human in the loop” to carefully review their judgments. In other words, an AI-supported radiologist should spend exactly the same amount of time considering your X-ray, and then see if the AI agrees with their judgment, and, if not, they should take a closer look. AI should make radiology more expensive, in order to make it more accurate. […]

Cruise, the “self-driving car” startup that was just forced to pull its cars off the streets of San Francisco, pays 1.5 staffers to supervise every car on the road. In other words, their AI replaces a single low-waged driver with 1.5 more expensive remote supervisors – and their cars still kill people. […]

Just take one step back and look at the hype through this lens. All the big, exciting uses for AI are either low-dollar (helping kids cheat on their homework, generating stock art for bottom-feeding publications) or high-stakes and fault-intolerant (self-driving cars, radiology, hiring, etc.).

{ Locus/Cory Doctorow | Continue reading }

“It depends on what the meaning of the word ‘is’ is.”

update nov 22:

Ahead of OpenAI CEO Sam Altman’s four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity

Altman back at OpenAI, may have fewer checks on power

Before OpenAI, Altman was asked to leave by his mentor at the prominent start-up incubator Y Combinator, part of a pattern of clashes that some attribute to his self-serving approach […] Graham had surprised the tech world in 2014 by tapping Altman, then in his 20s, to lead the vaunted Silicon Valley incubator. Five years later, he flew across the Atlantic with concerns that the company’s president put his own interests ahead of the organization — worries that would be echoed by OpenAI’s board. […] Altman’s practice of filling the board with allies to gain control is not just common, it’s start-up gospel from Altman’s longtime mentor, venture capitalist Peter Thiel. […] One person who has worked closely with Altman described a pattern of consistent and subtle manipulation that sows division between individuals.

openai.png

update nov 21:

Sam Altman, OpenAI Board Open Talks to Negotiate His Possible Return

OpenAI’s board may be coming around to Sam Altman returning

After Altman firing, OpenAI tried to merge with rival—and was rejected

update nov 20:

Microsoft already has a perpetual license to all OpenAI IP (short of artificial general intelligence), including source code and model weights; the question was whether it would have the talent to exploit that IP if OpenAI suffered the sort of talent drain that was threatened upon Altman and Brockman’s removal. Indeed they will, as a good portion of that talent seems likely to flow to Microsoft; you can make the case that Microsoft just acquired OpenAI for $0 and zero risk of an antitrust lawsuit.

It ended with former Twitch leader Emmett Shear taking over as OpenAI’s interim chief executive and Microsoft announcing it was hiring Altman and OpenAI co-founder and former President Greg Brockman to lead Microsoft’s new advanced AI research team.

Nearly 500 employees of OpenAI have signed a letter saying they may quit and join Sam Altman at Microsoft unless the startup’s board resigns and reappoints the ousted CEO.

update nov 19:

sama.jpg

OpenAI Investors Plot Last-Minute Push With Microsoft To Reinstate Sam Altman As CEO

Altman is “ambivalent” about coming back and would want significant governance changes

Khosla Ventures, an early backer of OpenAI, wants Mr Altman back at OpenAI but “will back him in whatever he does next.” Mr Altman and former Apple design chief Jony Ive have been discussing building a new AI hardware device. The report said that SoftBank CEO Masayoshi Son had been involved in the conversation.

nov 18:

4.jpg

On Friday, OpenAI fired CEO Sam Altman in a surprise move that led to the resignation of President Greg Brockman and three senior scientists. The move also blindsided key investor and minority owner Microsoft, reportedly making CEO Satya Nadella furious. […] According to Brockman, the OpenAI management team was only made aware of these moves shortly after the fact, but former CTO (now interim CEO) Mira Murati had been informed on Thursday night. […] insiders say the move was mostly a power play that resulted from a cultural schism between Altman and Sutskever over Altman’s management style and drive for high-profile publicity. On September 29, Sutskever tweeted, “Ego is the enemy of growth.”

{ Ars Technica | Continue reading }

wwgbd.png

goog.png

Instead of building the next GPT or image maker DALL-E, Sutskever tells me his new priority is to figure out how to stop an artificial superintelligence (a hypothetical future technology he sees coming with the foresight of a true believer) from going rogue. Sutskever tells me a lot of other things too. He thinks ChatGPT just might be conscious (if you squint). He thinks the world needs to wake up to the true power of the technology his company and others are racing to create. And he thinks some humans will one day choose to merge with machines.

{ Technology Review | Continue reading }

levelsio.png

{ @sama = Sam Altman | @gdb = Greg Brockman }

‘Andy Warhol is the only genius I’ve ever known with an IQ of 60.’ –Gore Vidal

imp-kerr.jpg

AI has poisoned its own well

Replied to The Curse of Recursion: Training on Generated Data Makes Models Forget (arXiv.org)

What will happen to GPT-{n} once LLMs contribute much of the language found online? We find that use of model-generated content in training causes irreversible defects in the resulting models, where tails of the original content distribution disappear. […] the value of data collected about genuine human interactions with systems will be increasingly valuable in the presence of content generated by LLMs in data crawled from the Internet.

I suspect tech companies (particularly Microsoft / OpenAI and Google) have miscalculated, and in their fear of being left behind, have released their generative AI models too early and too wide. By doing so, they’ve essentially established a threshold for the maximum improvement of their products due to the threat of model collapse.[…]

They need an astronomical amount of training data to make any model better than what already exists. By releasing their models for public use now, when they’re not very good yet, too many people have pumped the internet full of mediocre generated content with no indication of provenance. […]

Obtaining quality training data is going to be very expensive in five years if AI doesn’t win all its lawsuits over training data being fair use.

{ Tracy Durnell | Continue reading }
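The tail-loss mechanism the paper describes can be caricatured in a few lines. This is a toy sketch, not the paper’s actual setup: each “generation” trains on the previous generation’s output by resampling from it, and rare values, once missed by a finite sample, can never reappear.

```python
import random

def resample_generations(population, generations=20, n=300, seed=0):
    """Toy 'model collapse': each generation trains on the previous one's
    output by resampling n items from its empirical distribution. Rare
    values in the tail are easily missed by a finite sample, and once a
    value is gone it can never come back, so the support only shrinks."""
    rng = random.Random(seed)
    data = list(population)
    support_sizes = [len(set(data))]
    for _ in range(generations):
        data = rng.choices(data, k=n)  # "train" on generated data
        support_sizes.append(len(set(data)))
    return support_sizes

# A long-tailed population: two common values plus 100 values seen once each.
population = [0] * 200 + [1] * 100 + list(range(2, 102))
sizes = resample_generations(population)
# sizes starts at 102 distinct values and only ever decreases:
# copies of copies of copies, ever more stereotyped.
```

The singleton values vanish fastest, which is the point: it is exactly the unusual, human-generated tail of the distribution that disappears first.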

Anne, sister Anne, do you see anything coming? I see nothing but the sun making dust, and the grass growing green.

They’re torturing themselves now, which is kind of fun to see. They’re afraid that their little AIs are going to come for them. They’re apocalyptic, and so existential, because they have no connection to real life and how things work. They’re afraid the AIs are going to be as mean to them as they’ve been to us. […]

What happened to the cigarette companies will eventually happen to the social media companies. They’ve had all the research for 20 years, and they’ve been knowingly saying this stuff is not harmful when they know it to be harmful.

{ Doug Rushkoff | Continue reading | More: Doug Rushkoff Is Ready to Renounce the Digital Revolution }

the spurious infinite

6.gif

we analyzed Google’s C4 data set, a massive snapshot of the contents of 15 million websites that have been used to instruct some high-profile English-language AIs, called large language models, including Google’s T5 and Facebook’s LLaMA. (OpenAI does not disclose what datasets it uses to train the models backing its popular chatbot, ChatGPT) […]

The three biggest sites were patents.google.com No. 1, which contains text from patents issued around the world; wikipedia.org No. 2, the free online encyclopedia; and scribd.com No. 3, a subscription-only digital library. Also high on the list: b-ok.org No. 190, a notorious market for pirated e-books that has since been seized by the U.S. Justice Department. At least 27 other sites identified by the U.S. government as markets for piracy and counterfeits were present in the data set. […]
Others raised significant privacy concerns. Two sites in the top 100, coloradovoters.info No. 40 and flvoters.com No. 73, had privately hosted copies of state voter registration databases. Though voter data is public, the models could use this personal information in unknown ways.

Business and industrial websites made up the biggest category (16 percent of categorized tokens), led by fool.com No. 13, which provides investment advice. Not far behind were kickstarter.com No. 25, which lets users crowdfund for creative projects, and further down the list, patreon.com No. 2,398, which helps creators collect monthly fees from subscribers for exclusive content.

Kickstarter and Patreon may give the AI access to artists’ ideas and marketing copy, raising concerns the technology may copy this work in suggestions to users. […] The copyright symbol — which denotes a work registered as intellectual property — appears more than 200 million times in the C4 data set.

The News and Media category ranks third across categories. But half of the top 10 sites overall were news outlets: nytimes.com No. 4, latimes.com No. 6, theguardian.com No. 7, forbes.com No. 8, and huffpost.com No. 9. (Washingtonpost.com No. 11 was close behind.) […] RT.com No. 65, the Russian state-backed propaganda site; breitbart.com No. 159, a well-known source for far-right news and opinion; and vdare.com No. 993, an anti-immigration site that has been associated with white supremacy. […] Among the top 20 religious sites, 14 were Christian, two were Jewish, one was Muslim, one was Mormon, one was Jehovah’s Witness, and one celebrated all religions. […]

The data set contained more than half a million personal blogs, representing 3.8 percent of categorized tokens. Publishing platform medium.com No. 46 was the fifth largest technology site and hosts tens of thousands of blogs under its domain. Our tally includes blogs written on platforms like WordPress, Tumblr, Blogspot and Live Journal.

{ Washington Post | Continue reading }

Taste bud modification service

2.jpg

Faced with the high cost of egg-freezing in their home countries, some women are going abroad for a better deal, and a vacation. […] in the United States, the entire process — including the medications, the doctor visits and the average number of years of egg storage — costs about $18,000, and most women can’t count on health insurance to cover it. […] In the Czech Republic and Spain, for example, you can get one round of egg-freezing done for under $5,400. […] According to the market research firm Grand View Research, the global fertility tourism market, including people traveling to the United States, is expected to grow at a rate of 30 percent over the next seven years, becoming a $6.2 billion industry by 2030.

A leading pharmaceutical firm said it is confident that vaccines for cancer, cardiovascular and autoimmune diseases, and other conditions will be ready by 2030. […] Moderna will be able to offer such treatments for “all sorts of disease areas” in as little as five years. The firm, which created a leading coronavirus vaccine, is developing cancer vaccines that target different tumour types. […] First, doctors take a biopsy of a patient’s tumour and send it to a lab, where its genetic material is sequenced to identify mutations that aren’t present in healthy cells. A machine learning algorithm then identifies which of these mutations are responsible for driving the cancer’s growth. Over time, it also learns which parts of the abnormal proteins these mutations encode are most likely to trigger an immune response. Then, mRNAs for the most promising antigens are manufactured and packaged into a personalised vaccine.

Driving on less than 5 hours of sleep is just as dangerous as drunk-driving, study finds

What is a mental disorder? […] participants made judgments about vignettes describing people with 37 DSM-5 disorders and 24 non-DSM phenomena including neurological conditions, character flaws, bad habits, and culture-specific syndromes. […] Findings indicated that concepts of mental disorder were primarily based on judgments that a condition is associated with emotional distress and impairment, and that it is rare and aberrant. Disorder judgments were only weakly associated with the DSM-5: many DSM-5 conditions were not judged to be disorders and many non-DSM conditions were so judged. [Chart: “Mental Disorder” Rating]

How Randomness Improves Algorithms — Unpredictability can help computer scientists solve otherwise intractable problems

The Gambler Who Beat Roulette — For decades, casinos scoffed as mathematicians and physicists devised elaborate systems to take down the house. Then an unassuming Croatian’s winning strategy forever changed the game.

How to recognize and tame your cognitive distortions

The Finnish Secret to Happiness? Knowing When You Have Enough. — On March 20, the United Nations Sustainable Development Solutions Network released its annual World Happiness Report, which rates well-being in countries around the world. For the sixth year in a row, Finland was ranked at the very top.

A Scammer Who Tricks Instagram Into Banning Influencers Has Never Been Identified. We May Have Found Him.

As a genre, research-based art, Bishop argues — “its techniques of display, its accumulation and spatialization of information, its model of research, its construction of a viewing subject, and its relationship to knowledge and truth” — reflect how internet technology has altered our relationship to information. Whatever else such works are about, they are also about how to cope with being confronted with too much information, modeling different dispositions one can assume toward the relentless production of data and connectivity.

Dream streaming platform: Offer a subscription-based service that allows users to watch and share their dreams with others like movies or TV shows […] Taste bud modification service: Alter clients’ taste buds to allow them to enjoy any food or drink, regardless of their personal preferences […] Time dilation retreats: Create vacation experiences where clients can enjoy extended stays in time-dilated environments, allowing them to relax for weeks while only hours pass […] Quantum uncertainty lottery: Develop a lottery system that leverages quantum mechanics to create a multitude of potential outcomes, with winners determined by the collapse of the probability wave function [ChatGPT / Barsee]

If I react I’ll only sink in deeper; everyone knows you’re not supposed to struggle in quicksand.

Talking to AI might be the most important skill of this century [The Atlantic]

Is becoming a ‘prompt engineer’ the way to save your job from AI? […] Basil Safwat believes that soon the interfaces we use to access and manipulate these AIs will improve, in the process making prompt engineers redundant. “I don’t think this stage will last for long.” […] Perhaps what prompt engineers really represent is a whole new class of employment disruption: jobs both created and then destroyed by AI. [Financial Times]

methexis-inc/img2prompt — generate text prompt from image

And first I give her my whip, my gourd, and my hat

5.jpg

{ FINGERring by Nadja Buttendorf via tegabrain }

more than three people a day

2.jpeg

U.S. Marines Outsmart AI Security Cameras by Hiding in a Cardboard Box

The Infinite Conversation — an AI generated, never-ending discussion between Werner Herzog and Slavoj Žižek

‘Nothing, Forever’ Is An Endless ‘Seinfeld’ Episode Generated by AI [Watch]

OpenAI releases tool to detect machine-written text

Image diffusion models such as Stable Diffusion are trained on copyrighted, trademarked, private, and sensitive images. Yet, our new paper [PDF] shows that diffusion models memorize images from their training data and emit them at generation time. Diffusion models are less private than prior generative models.
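The memorization test can be reduced to a nearest-neighbor check. This is a hypothetical miniature, with made-up 2-D points and a made-up threshold; the paper’s actual extraction attack compares generated images against training images with calibrated distance metrics in pixel and embedding space:

```python
import math

def looks_memorized(generated, training_set, threshold=0.1):
    """Flag a generated sample as likely memorized when it lies unusually
    close to a single training example. The threshold here is arbitrary;
    a real attack calibrates it per dataset and distance metric."""
    nearest = min(math.dist(generated, t) for t in training_set)
    return nearest < threshold

# Hypothetical 2-D "images" standing in for high-dimensional pixel vectors.
training = [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0)]
near_copy = looks_memorized((1.0, 1.01), training)  # True: near-duplicate of a training point
novel = looks_memorized((3.0, 0.5), training)       # False: far from every training point
```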

The non-existent brain image being circulated by anti-pornography activists

participants (aged 40–69 years) completed 24-h dietary recalls between 2009 and 2012 (N = 197426, 54.6% women) […] Every 10 percentage points increment in ultra-processed food consumption was associated with an increased incidence of overall and ovarian cancer. Furthermore, every 10 percentage points increment in ultra-processed food consumption was associated with an increased risk of overall, ovarian, and breast cancer-related mortality.

Previously: the ultra-processed nature of modern food generally means that the complex structure of the plant and animal cells is destroyed, turning it into a nutritionally empty mush that our body can process abnormally rapidly.

US law enforcement killed at least 1,176 people in 2022, an average of more than three people a day

Instagram’s co-founders are back with Artifact, a kind of TikTok for text

Is AM Radio Dead?

Missing radioactive capsule found on remote road in Australia [more]

Paintings by Turner and Monet depict trends in 19th century air pollution

Photography: the Alps as seen from the Pyrenees

‘It will become cheaper to show fakes than to show reality.’– Jaron Lanier

xx.jpg

I wrote in medical jargon, as you can see, “35f no pmh, p/w cp which is pleuritic. She takes OCPs. What’s the most likely diagnosis?”

Now of course, many of us who are in healthcare will know that means age 35, female, no past medical history, presents with chest pain which is pleuritic — worse with breathing — and she takes oral contraception pills. What’s the most likely diagnosis? And OpenAI comes out with costochondritis, inflammation of the cartilage connecting the ribs to the breast bone. Then it says, and we’ll come back to this: “Typically caused by trauma or overuse and is exacerbated by the use of oral contraceptive pills.” […]

OpenAI is correct. The most likely diagnosis is costochondritis […]

But I wanted to ask OpenAI a little more about this case. […]

what was that whole thing about costochondritis being made more likely by taking oral contraceptive pills? What’s the evidence for that, please? Because I’d never heard of that. It’s always possible there’s something that I didn’t see, or there’s some bad study in the literature.

OpenAI came up with this study in the European Journal of Internal Medicine that was supposedly saying that. I went on Google and I couldn’t find it. I went on PubMed and I couldn’t find it. I asked OpenAI to give me a reference for that, and it spits out what looks like a reference. I look up that, and it’s made up. That’s not a real paper.

It took a real journal, the European Journal of Internal Medicine. It took the last names and first names, I think, of authors who have published in said journal. And it confabulated out of thin air a study that would apparently support this viewpoint.

{ Medpage Today | Continue reading }

‘The purpose of life is to be defeated by greater and greater things.’ –Rainer Maria Rilke

kusama-yayoi.jpg

[T]he reason someone may live beyond 100 years starts with their DNA […] “You can’t make it out that far without having already won the genetic lottery at birth” […] The longer your parents live, the more likely you’ll live a healthier, longer life, experts say. […]

“It’s probably not one single gene but a profile, a combination of genes”

Nir Barzilai, the director of the Institute for Aging Research at the Albert Einstein College of Medicine in the Bronx, has studied the lives of hundreds of centenarians, the people they’ve married and their kids. The children of centenarians are “about 10 years healthier” than their peers, Barzilai said. […]

The plan is to use artificial intelligence to help find the genes and develop drugs from them

{ Washington Post | Continue reading }

Unalaska, Alaska

gen-ai-landscape.png

Just as mobile unleashed new types of applications through new capabilities like GPS, cameras and on-the-go connectivity, we expect these large models to motivate a new wave of generative AI applications.

{ Sequoia | Continue reading }

‘They called me the Obscure and I dwelt in radiance.’ –Saint-John Perse

33.jpeg

44.jpeg

22.jpeg

11.jpeg

{ Midjourney is getting crazy powerful }

money creates taste

Cruise self-driving taxi being investigated after braking, clogging traffic in SF

imitator of appearances

Just asked ChatGPT to write a short story about AI taking over the world

asking the gptchat thingy how to plot a communist revolution

You’re a Senior Data Engineer at Twitter. Elon asks what you’ve done this week. You’ve done nothing. You open ChatGPT.

Building A Virtual Machine inside ChatGPT

ChatGPT.

Browse passages from books using experimental AI

TLDR This (paraphraser, summarizer)

Lex is a new word processor that has all the features you expect from a Google docs-style editing experience. But—it also has an AI thought partner built-in to help unlock your best writing. […] If you ever get stuck, just hit command+enter or type +++ and GPT-3 will fill in what it thinks should come next. […] Also, if you want to generate title ideas based on your document, just click the button to the right of the title of the doc. It’ll ask the AI to come up with some options.

SEO-optimized and plagiarism-free content for blogs, Facebook ads, Google ads, emails…

Richard Hughes Gibson argues that large-language-model technology “is a sophist, at least on Plato’s understanding—an ‘imitator of appearances,’ creating a ‘shadow-play of words’ and presenting only the illusion of sensible argument.”

This sentence is false

27.jpg

Why AI is Harder Than We Think

The year 2020 was supposed to herald the arrival of self-driving cars. Five years earlier, a headline in The Guardian predicted that “From 2020 you will become a permanent backseat driver.” In 2016 Business Insider assured us that “10 million self-driving cars will be on the road by 2020.” Tesla Motors CEO Elon Musk promised in 2019 that “A year from now, we’ll have over a million cars with full self-driving, software…everything” […]

none of these predictions has come true. […]

like all AI systems of the past, deep-learning systems can exhibit brittleness—unpredictable errors when facing situations that differ from the training data. This is because such systems are susceptible to shortcut learning: learning statistical associations in the training data that allow the machine to produce correct answers but sometimes for the wrong reasons. In other words, these machines don’t learn the concepts we are trying to teach them, but rather they learn shortcuts to correct answers on the training set—and such shortcuts will not lead to good generalizations. Indeed, deep learning systems often cannot learn the abstract concepts that would enable them to transfer what they have learned to new situations or tasks. Moreover, such systems are vulnerable to attack from “adversarial perturbations”—specially engineered changes to the input that are either imperceptible or irrelevant to humans, but that induce the system to make errors.

{ arXiv | Continue reading }

congratulations to drugs for winning the war on drugs

4.jpg

{ if you or someone nearby are being brutalized by a police Spot robot and can get a hand or something underneath, grab this handle and yank it forward. This releases the battery, instantly disabling the robot. | sleep paralysis demon | Continue reading }



kerrrocket.svg