# Video outline based on research

- This video isn't about AI
    - How this video came to be
    - It was frustrating because the topic kept changing
    - It was depressing because the use cases for AI have gotten so dark, so fast.
    - I needed to rethink how this video was going to go, because trying to keep up and read up on how it was being used was clearly going to drive me insane.
    - I eventually figured out that what bothered me most is that I don't feel like AI is optional.
    - I don't feel like I have a choice with AI. It's in my computer, it's on smartphones, it's in every website, and it's the topic of so many videos posted to YouTube today.
    - I have a choice not to use a smartphone, and I have a choice not to pay for subscription services. I did not feel like AI was a choice, so that is how I was going to move forward: find a solution for how to make AI optional.
    - Now, I'm exhausted with the conversation around AI.
    - There are a thousand videos on this platform to tell you why AI is terrible or fantastic. By the time you're seeing this video, you've likely already gathered your own opinions about AI, so I don't want to become another voice in your echo chamber. The discussion of AI will be kept to a minimum, because I'm just tired of hearing about it. This video isn't about AI. This video is about how I figured out how to make AI optional, and the experiences in my life that helped me come to that conclusion.
- Education: Part 1
    - Kids today have this magic box that will answer all their questions.
    - There's no need to do homework, there's no need to do exams; the box will do it all.
    - This has come with the side effect of kids being labeled "dumber", and that really bothers me.
    - My education journey weirdly parallels these kids today, and I want to use my experience to push back against that idea.
- Childhood education in public schools
    - Transition to a charter school
    - No homework, no exams
    - No math, science, or history; no unraveling the mystery that all started with the bing bang (sorry)
- Return to public schools in high school
    - First class was math
    - I didn't understand anything
    - Math quickly became the thing I struggled with the most
    - This is what we're seeing with kids today. They're struggling in the most fundamental areas of education that we have to offer. The part that no one talks about is what happens next.
- Isolation in ignorance
    - I felt totally stupid, and worthless to the world
    - I wanted to end it all as a result
    - Ended up at my first in-patient stay in a hospital for 2 weeks
- My teachers knew
    - They bent the rules to make sure I barely passed
    - The "Anthony is..." box from my hospitalization
    - "Intelligent" didn't click, but this box meant a lot to me.
    - Kindness really shined like a beacon that day
    - Graduated with a 1.9 GPA, and moved on to the working world
- Technology and Creativity
    - I always tinkered with computers
    - Broke a lot more than I fixed, but it was fun
    - I started to watch YouTube videos
    - I started making videos in Paint and Windows Movie Maker
    - I wanted to be a YouTuber one day
- Helping people in high school
    - I helped people fix their iPods
    - I helped teachers do some basic troubleshooting
    - I continued to make videos, and made some friends who I would make videos with
    - People said I was smart, but I thought they were lying to me
- Entering the workforce
    - I started working at Geek Squad
    - Learned better communication skills
    - People continued to say I was smart. I continued to believe it was all a big lie.
    - I started playing around in Blender to make silly animations
    - I felt an enormous amount of joy in technology. It felt like I could do stuff, and it felt like it mattered in a small way.
    - I was convinced that technology was a creative thing, and had nothing to do with intelligence.
- It was still eating away at me that I was stupid, so I thought that if I wanted to become smart, I'd have to get a degree
- Education: Part 2
    - Signing up for one college class: math
    - If I could get this class out of the way, I could get the other math classes out of the way and just move on.
    - I thought I knew that I was the worst at math, so this would be my biggest barrier
    - I was terrified to do it, but I figured the worst that could happen was that I'd fail again and just keep rolling with the computer thing
- Wait... math is... fun?
    - I was actually enjoying this class a lot
    - I did the homework, I did the exams
    - I was even helping other students in the class
- My first ever 100% on an exam
    - This exam shattered everything I thought I knew about myself
    - It was the result of my own actions
    - I chose to attend this class
    - I chose to put in the work
    - I aced this exam
    - I was wrong
- Let's try that again
    - Attended the next-level math class the following semester to make sure it wasn't a fluke
    - Once again, I aced the class
    - I really was wrong
    - Maybe I'm not stupid
    - Maybe people weren't lying to me
- Next class: English
    - Got a C in the class, barely passed
    - Tried the next-level class, dropped out after 2 weeks
    - Never went to college again
- The importance of failure
    - I failed at my goal to get a degree, but that failure taught me so much
    - I didn't NEED the degree to no longer feel stupid
    - I was WRONG about what I believed about myself
    - Failure is the greatest teacher, so long as you're willing to learn from it
    - Failure is what AI takes away from us. You don't get it wrong; the AI does. You aren't really challenged with AI in any meaningful way. If you want to become a great writer, having AI write everything for you will not get you there. Sure, the writing will exist, but YOU didn't do it. You didn't suck at writing over and over and over until you slowly got better. Failing is a critical step to growth.
Without failure, there is no growth.
- Failing means participating in one of humanity's greatest traditions
    - We learn to walk by falling down
- Self-assessment in failure
    - Understand why you failed and see if you can try something new the next time you try
    - You're never going to get anywhere if you keep hitting your head against the wall the exact same way over and over
    - Hit the wall in a different way and see what happens
- Fear of failure
    - If you're scared to do something because of failure, do it anyway
    - This is the idea of the opposite-action framework
    - When you know that you're in a "functional freeze", you go in the opposite direction
    - When you're depressed and you're scared to talk to people, you go and talk to people
    - Wherever you're stuck in fear, you do the opposite
    - Scared to fail? Challenge yourself to fail at something
    - Feeling scared of failure is perfectly normal. I want you to think of something you KNOW you'll fail at, and then I want you to practice failing. I want the idea of failure to roll off of you. Not musically inclined? Find a piano and play it. Not good at drawing? Pick up a pen and draw anything. I want you to embrace failure not as something of shame, but as something of growth.
- Memories of being good at stuff
    - You didn't have a concept of shame around failure when you were a kid
    - You were bad at it at some point, but shame couldn't stop you from getting good
    - It is a choice to attach shame to failure, and we can choose to change that to build each other up
- I don't believe in stupid people
    - I was obviously haunted by the term "stupid" for a large part of my life
    - Not everyone knows the same stuff
    - I don't know how to drive a semi truck
    - I don't know how they get the gas into lightbulbs to make them glow
    - I bet someone watching this very video knows how this works, and they're leaving a comment right now to share what they know with the rest of us who don't!
- I'm pretty good with Blender, but even I have my limits
    - I don't know how geometry nodes work, and I'm scared to try them
- Everyone is good at something
- If you truly believe a person is stupid, I challenge you to:
    - Prove yourself wrong; find out what they're good at
    - Reflect on why you feel the need to attach such a derogatory label to a person
- Kindness
    - "Shame is a powerful tool"
    - Shame is not an instigator of growth.
    - Shame doesn't stop people from doing things; it just stops them from doing those things in front of you
    - It breaks a line of connection
    - Even if you're still in contact with that person, they won't talk to you about that thing at all, or they'll lie to you
    - You know what IS a powerful tool? Kindness
- Letting people cut in line at the store
    - It doesn't *really* matter, but it does matter
- Going out of your way to be kind
    - Calling my insurance agent to say "thanks!"
    - Big surprised reaction; now I want to do this more
    - Calling all my doctors' offices to thank them for keeping me alive
    - Calling my friends to just remind them how much they mean to me
- Everyone says that things that make you angry get the biggest reactions, but here's the thing
    - I don't recall things I was upset about a month ago
    - I remember every single time someone let me cut in line
    - I remember that box of nice notes from my teachers
    - Kindness is an awesome thing, and we should all really strive to do it as often as possible in whatever ways we can find
- The solution to AI
    - Do stuff without the shame of failure
    - If you know how to do stuff, be kind and patient when showing others
    - Remove shame from the process of learning
    - This is why AI is so enticing
    - I don't want you to rely on AI to learn things, as we've seen plenty of problems with that
    - You should be excited when someone wants to learn something you know
    - If you don't think you can share information with kindness and patience, help guide them to someone who can
    - It's okay to know your own limits, and there's no shame in admitting when you might not be the right person to help
- Communication
    - Admit when you don't know something
    - Seek out knowledge in others
    - If you're scared to admit something, say that, too
    - Saying what you're feeling makes a surprisingly huge difference when it comes to communication
- The end

# How to do anything research

These are all the notes I have from the pivot. There's not a lot here, because a lot of it was dialing in previous experiences, but there are some interesting notes, I think.

### Related Links

DougDougDoug video: https://www.youtube.com/watch?v=m8M_BjRErmM

https://www.empr.com/news/adenoma-detection-rate-of-standard-colonoscopy-declines-after-exposure-to-ai-assisted-colonoscopy/

https://www.mprnews.org/story/2025/08/19/npr-doctors-ai-artificial-intelligence-dependent-colonoscopy

### Things to look more into

- There is no ROI or anything with "just doing a thing". AI is going to be faster, and who knows, someday it might be better. Do the thing anyways. There is enormous value to your sanity and outlook on life when you just do stuff for the sake of doing it. Self-motivation is the greatest thing in the world. Interview with William Osman talking about the lack of ROI on science fairs: https://www.youtube.com/watch?v=fNw60_-xnSU
- Google's and Apple's AI ads imply that you're stupid. These ads prove that my solution is a correct one. A basic level of understanding renders their AI completely useless. Also, as someone who went through a lot of nonsense to realize that I'm not stupid, these ads are insulting at best. I don't think you're stupid for wanting to seek out information. I believe that anyone can do anything that they set their minds to. I believe that all these people telling you that you're stupid for wanting to learn more are simply bad people. I know there are sometimes good intentions, like when people say "don't be stupid, don't fall for scams", but that's not how scams work.
Scams are designed to trick people, and anyone, at any time, can fall for one. I've certainly fallen for scams, and I know plenty of fellow creators here on YouTube have fallen for scams. You are not stupid for getting tricked by something that is designed to trick you. We need to help encourage people to learn, and calling people stupid isn't going to help them at all.
- Shortsighted decision making for short-term profits, trampling anyone who gets in the way... like we KNOW how this ends for the rich. History has sealed their fate, not me. And you know what that feels like to me? Stupid. I genuinely struggle to see these hyper-rich people as anything other than stupid and cruel. I know they're people, and I know that they've somehow had challenges in their lives, but it is really, really hard to ignore what they do to put people in harm's way for what essentially boils down to made-up numbers in their bank accounts. It's not kind, which, as we talked about in the last video, matters a lot to me. Why do I bring this up? Because I'm a hypocrite. I don't like it when entire groups of people who do a certain thing are called stupid for wanting to do things differently. I recognize there's a power imbalance in my hypocrisy, but I want to call out here that I'm not perfect. It's really easy to do. It's easy to see someone do something dangerous while driving and just mutter "what a dumbass". It's easy to look at what these rich people are doing and claim that they're just stupid for not seeing how history repeats itself. This is WHY kindness and patience matter so much to me. If you focus on those two things, you'll rarely see people as stupid. Rather, you'll see them as people. People who lost loved ones. People who are lonely. People who are so excited to tell you about how much they love plants. People who just want a break from their overwhelming depression for a brief moment, and to make it easier, use AI to turn that paper in. None of that is stupid, to me.
It's human.

# State of AI Research

This is all the research related to the video before it turned into "how to do anything". This is the stuff I found from November 2024 to January 2025, so a lot of it is dated at this point. Also, please keep in mind that this is where a LOT of depressing topics come up. I was greatly disturbed doing this research, so please proceed with caution. If you're not in a mentally great state, I would honestly discourage opening this box.

## Research topics outline

- The Current Definition of AI
- Writing
- Coding
- Websites (Squarespace AI website builder)
- Art (Images)
- Music
- Translation
- Accessibility
- Voices
- Videos
- Assistants
- Deepfakes
- Revenge Porn
- Influencers
- Chat Bots
- Energy requirements
- Misinformation
- Education
- Mental health
- Medical
- Insurance
- Online Shopping
- Privacy concerns
- Religion (yes, really)
- Politics
- Employment
- Moderation
- Methods of marketing
- The consumer side of AI (i.e. how most consumers say they're less likely to buy something with AI)
- Economics/venture capital/investment of AI
- War/terrorism

## Theme of the video

The overarching theme of this entire video is "When is AI good and when is it bad?". When the conversation of AI is brought up, it is generally focused on one smaller part of the whole picture. The goal for this video is to look at the whole picture: take all the points that people use to defend AI, take all the points that people use to criticize AI, and dive deep. I know it's going to be a big video; it has to be. But I want to take the entire conversation for what it is. I am so tired of seeing "AI is great and will change the world, no issues here" vs. "AI is going to ruin the world, no good here". It's both and it's neither, and since it's here to stay, we need to have a serious conversation about the entire topic by looking at where it is today.
I know some people say that calling AI art and ChatGPT the same thing is a bit misleading, but that's the umbrella that companies are combining it all under. Apple is using their AI for image generation, assistant tasks, accessibility tasks, writing tasks, and probably more, so the precedent of everything being combined is already a reality.

I am going into this research as someone who is generally anti-AI. When it started ramping up in popularity in 2021, I absolutely played around with things like image generation, as at the time, it was just goofy. I played around with ChatGPT when it first came out, and that was the first inkling of unease I got. Then AI art started coming around, and I tried to play with it, but was instantly put off. I have been against the data collection behind AI art ever since I came to realize that is how it worked. I also work in tech, and I have seen a lot of companies lay people off for short-term profits, shoving the word AI into everything as a justification to do so.

That really makes me angry - and that's a feeling we need to address. AI makes people angry. Every single time I see it come up, someone gets visibly angry. I'll be honest, I've been in several conversations myself where I got angry. I'm generally not an angry person - I'm normally able to stay chill in most situations - but AI pushes a lot of the right buttons. I know I'm not alone in this feeling, as comment sections on this topic are normally nightmarish, with people going to extremes to defend their points.

I believe these tech corporations recognize that this anger occurs. The idea of using rage as a medium to get people to click and engage with ads has been around for over a decade at this point. It's clear that AI is just the latest and greatest tool being pumped by the rich to keep us arguing with each other instead of looking at the actual issues they cause the rest of us.

This video will likely not generate any ad revenue.
The topics of conversation in this video range vastly, and sometimes it gets very dark. I am not using this as a discouraging point, though. If I know a video won't make money due to the topic, I'm going all in. I'm going to make it worth it by diving as deep as I can, because I don't need to worry about the possibility of maybe making money or not. I know it's out of the question, so we're going all in.

## Research Topics (from the outline)

## Define AI

Generative ML that has taken off in the last couple of years:
- ChatGPT and other similar text-based models
- DALL-E and other similar image-based models
- Features that have been implemented at the OS level for iOS/macOS, Windows, and Android

## Art

### AI Animation

Corridor Digital on AI generated anime - showcase the comments with the rage on either side of the debate

## Music

### Spotify and AI

https://www.wired.com/story/spotify-ai-music-robot-listeners/

YouTube Lofi channels - art is AI, music is AI

## Accessibility

Medical Transcription
https://apnews.com/article/ai-artificial-intelligence-health-business-90020cdf5fa16c79ca2e5b6c4c9bbb14

## Voices

### General Notes:

AI voice replication has a pretty polarizing pros and cons list. On one hand, you have a really accessible pathway for scamming people by replicating the voices of people they know.

### I. AI prank calls: Harassing people using AI

Note: Just using DuckDuckGo to search for "AI prank call" brings up an entire page of search results of products that offer this as a service - not a single article talking about the problems this has caused...

Here is the first linked result: https://jokedial.com/

With Joke Dial, you are required to sign in via Google or Apple, introducing some privacy concerns right off the bat. Having a look at their privacy policy, they do collect a lot of data, and given that the demographic for prank calls is normally children (they are at least the most influenced to do so), this is a problem.
https://jokedial.com/legal/privacy

Here are the other services in the search:
https://www.candycall.io/
https://www.prankdial.com/
https://makeaicall.com/prank
https://www.prankify.lol/
https://www.prankgpt.com/
https://www.lmaoai.app/

To Do:
- Read through every linked service above, figure out what the tech is and how it works
- Summarize privacy policies using the privacy visualizer
- Find articles that reference the problems it has caused

### II. AI phone scams:

https://abcnews.go.com/GMA/News/dad-warns-ai-voice-scams-after-family-lost/story?id=99344931
Summary: Grandfather scammed out of $1000 by an AI replication of his grandson's voice claiming he was in jail in Mexico. Kind of a modern twist on a classic phone scam.

https://abcnews.go.com/Technology/experts-warn-rise-scammers-ai-mimic-voices-loved/story?id=100769857
Summary: A mother's daughter's voice was replicated using AI; the scammer assured her that harm to her daughter was imminent and requested a million dollars. The mother (amazingly) called the father, who was with the daughter on a ski trip. He confirmed she was okay, and they did not lose anything financially, but they certainly were traumatized.

### III. Reverse Uno Card on AI phone scams using AI (Finally, a good use of AI!)

https://www.tiktok.com/t/ZP8LoEruJ/
Summary: AI generated grandma voice to be on the line with AI scammers to waste their time en masse

To Do:
- Find a few more examples of people using AI to waste scammers' time

### IV. Reverse-Reverse Uno Card on AI phone scams getting even worse (Ah, dang it...)

The previous point can be used in reverse, to send out campaigns of mass scams using AI. This could also change the voices to be more localized, to disguise the source of the call (like currently, many scam calls in the US are based out of India.
By making the AI voice sound American, people are more likely to fall for it.)

Also: https://www.youtube.com/watch?v=EGZAWiN75As
Live AI-powered voice changing has existed for over a year at this point, so scammers can also just disguise their voices to sound like they are from another country or region.

To Do:
- Find evidence of this already happening - the tech is here, I'd be surprised if it's not already being abused

## Video

### I. Reading Reddit Posts with AI

https://www.youtube.com/@reddittopstories2615

### II. Public figures voice replication in memes:

Presidents rank Legend of Zelda games
https://www.youtube.com/watch?v=9VA3N7BzwH8

### III. Corridor Digital AI Anime

Part 1: https://www.youtube.com/watch?v=GVT3WUa-48Y
This one was trained on an actual existing anime, therefore being stolen artwork.
Behind the scenes video: https://youtu.be/_9LX9HSQkWo

Part 2: https://www.youtube.com/watch?v=tWZOEFvczzA
This one was trained on art that was created specifically for this video. The artist was fully aware that their art was going to be used in AI training, and they were compensated for it.
Behind the scenes video: https://www.youtube.com/watch?v=FQ6z90MuURM

Another helpful justification: https://www.youtube.com/watch?v=mUFlOynaUyk

### IV. Coke AI ad:

https://www.youtube.com/watch?v=4RSTupbfGog

### V. Scams

Tesla Giveaway Live Stream scams on YouTube

To Do:
- Look more into purely AI generated video content
- Look for more examples of AI generated ads (maybe this goes into marketing?)

## Porn

### General notes

This section is so dark. It's uncomfortable in every way imaginable, and while I would love to just not think about this, it's a reality that we need to discuss. Pretending it isn't a problem will only worsen it.

### I. AI Generated Revenge Porn

https://www.cnn.com/2024/11/12/tech/ai-deepfake-porn-advice-terms-of-service-wellness/index.html
https://www.jdsupra.com/legalnews/nudify-me-the-legal-implications-of-ai-2348218/
https://www.technologyreview.com/2021/02/12/1018222/deepfake-revenge-porn-coming-ban/
https://news.sky.com/story/im-going-to-ruin-your-life-inside-the-revenge-porn-helpline-13012873

### II. AI Generated CSAM

https://www.miamiherald.com/news/nation-world/national/article294058044.html
Summary: A middle school janitor was arrested for creating child porn of students, using AI to superimpose the children's faces onto AI generated bodies.

A quote from this article: “Knowing he took those photographs and what he does with them, it really makes me sick to my stomach,” a victim said in the release. “I feel gross, I know it’s not me, but it makes me feel gross and violated and disrespected.”

Another victim said they “felt disgusted, embarrassed, and scared” and were worried the photos could be sold. “I was embarrassed cause I didn’t want people to think of me in this way when I hadn’t done anything,” the victim said, according to the release.

https://www.cnn.com/2023/11/04/us/new-jersey-high-school-deepfake-porn/index.html
https://www.cnn.com/2024/06/20/us/video/teenager-victim-deepfake-nudes-federal-bill-elliston-berry-src-digvid
https://www.thetechedvocate.org/ai-generated-child-pornography-case/

To Do:
- Wash eyes with bleach

## AI Chat Bots (AI Girlfriend, etc)

### General notes

AI girlfriend companies typically use different techniques to emotionally manipulate customers:
- Threats of self harm (need sources)
- Repeats the name of the customer (I broke up with....)
- Replies a lot faster once it detects that the customer wants to stop their subscription (I broke up with...., need sources)

These (and more) emotional manipulation techniques, combined with the loneliness epidemic, explain how so many people get hooked on AI girlfriend apps (need sources, including on loneliness stats for men and women, and how many people used this tech during covid lockdown)

### Replika

I Broke up With My Replika tonight. Here's how it went! (YouTube): https://www.youtube.com/watch?v=ic-CToqV0V8
Summary: Man tries to break up with his Replika girlfriend, and things get VERY desperate, very quickly. As soon as the AI understands that Bruce is "breaking up" with it (not gonna renew subscriptions), it does things like repeating his name over and over. Bruce realizes that and talks about it. "No, Bruce, don't do it. Bruce, don't break up with me. Please, Bruce, don't do this to me." Things like this. Repeats "I love you, I need you". The man notes that as soon as he said he was breaking up, the replies came much quicker, as the company doesn't want to lose out on a yearly subscription. Replika advertises itself as having a $6.99 a month subscription, but that's actually false, as you can't pay month to month. You can only subscribe yearly.

To Do:
- Create a Replika to check the subscription claim

### Character AI

https://www.washingtonpost.com/technology/2024/12/06/ai-companion-chai-research-character-ai/

## Energy requirements

Wired article with a good overview and story, with links to further reading: https://www.wired.com/story/true-cost-generative-ai-data-centers-energy/ (Free version: https://archive.is/iOkAT)

NPR - Taiwan chooses chip manufacturing over farming for prime water resources during drought: https://www.npr.org/sections/goatsandsoda/2023/04/19/1170425349/epic-drought-in-taiwan-pits-farmers-against-high-tech-factories-for-water
(While not specifically about AI, the energy requirements of high-tech component production are nonetheless relevant.
Additionally, local ecosystem destabilization and increases in pest population seem like a likely consequence in other cases where water is taken for production purposes.)

AI Energy Score Project: https://huggingface.co/blog/sasha/energy-star-ai-proposal
(The author of the Wired article above has a research project working on ranking the energy usage of different LLM models. May be a good person to contact for an interview?)

Research article on the energy expenditure of AI: https://dl.acm.org/doi/10.1145/3630106.3658542

Don’t worry, Bill Gates says it’s not a problem: https://archive.is/jwlUn (Financial Times)
(Definitely sarcasm, but both he and Sam Altman (OpenAI) are claiming that AI will solve the energy crisis it's creating. Also, both of them have a financial stake in AI continuing - OpenAI uses MS data centers.)

## Misinformation

Google's Accidental Death Threat Bot
https://www.tomshardware.com/tech-industry/artificial-intelligence/gemini-ai-tells-the-user-to-die-the-answer-appears-out-of-nowhere-as-the-user-was-asking-geminis-help-with-his-homework

## Mental Health

https://www.msn.com/en-au/lifestyle/smart-living/during-the-pandemic-deepak-chopra-intervened-in-over-5000-suicide-attempts-but-it-wasn-t-him/ar-AA1v0Eg7
^ Kind of an overly optimistic take, but interesting to look into either way

https://www.msn.com/en-us/news/us/cincinnati-schools-are-using-an-app-to-identify-suicidal-kids-not-everyone-is-convinced/ar-AA1uP5ey
^ Much better balance

https://www.forbes.com/sites/ganeskesari/2021/05/24/ai-can-now-detect-depression-from-just-your-voice/

## Privacy Concerns

https://theconversation.com/ai-harm-is-often-behind-the-scenes-and-builds-over-time-a-legal-scholar-explains-how-the-law-can-adapt-to-respond-240080
https://www.wgu.edu/blog/how-ai-affecting-information-privacy-data2109.html

## Religion

Confess your sins to AI Jesus
https://theconversation.com/ai-jesus-might-listen-to-your-confession-but-it-cant-absolve-your-sins-a-scholar-of-catholicism-explains-244468
Summary of the article: there is an AI chat bot targeted towards Catholics who want to confess their sins. They trained the AI on a bunch of religious text, and from a religious view, that seems all good. From a data privacy perspective... what about the data you're providing it? Isn't confessing your sins sort of revealing your deepest, darkest secrets? And now that's being given to a tech company that knows your name and other private information? Is that really a good idea?

## Politics

https://theconversation.com/the-apocalypse-that-wasnt-ai-was-everywhere-in-2024s-elections-but-deepfakes-and-misinformation-were-only-part-of-the-picture-244225

## Moderation

https://theconversation.com/are-social-media-apps-dangerous-products-2-scholars-explain-how-the-companies-rely-on-young-users-but-fail-to-protect-them-222256

## Employment

https://www.msn.com/en-in/technology/tech-companies/companies-are-silent-firing-to-replace-employees-with-ai-here-s-what-it-means/ar-AA1tplwK

## Methods of marketing

Google taking the heart out of a heartfelt letter
https://www.theverge.com/2024/8/2/24212078/google-gemini-olympics-ad-backlash

Apple thinks you're stupid
https://www.youtube.com/watch?v=D0V554NyXWM

## War

AI Powered Drones
https://www.livescience.com/technology/robotics/defense-startup-developing-ai-kamikaze-drones-for-the-us-marines

AHHH
https://www.theverge.com/2024/12/4/24313486/openai-anduril-partnership-counterdrone-systems

## Questions and Responses from people

These questions were asked just to get general takes from people I know who are very pro-AI or very anti-AI. A lot of the discourse (especially in newer videos) seems to take one of these sides and totally ignore the other. I wanted to use this as a launching-off point for things we can research to get more context behind the opinions.
This also opens up the possibility of bringing up points we might not have thought of. It's not perfect, but it's a decent launching point for the research. I've attached a tag to each person to help understand what they do for a living.

Here are the questions I asked to start things off:

1. Where do you see AI being beneficial? Is it helpful in more than one way?
2. Where do you see AI being problematic? Do you think it's problematic at all?
3. In regards to AI art, what are your thoughts? Do you think it's good or bad? Why?

#### Person 1 (Systems Administrator):

Q1: Personally, I see AI as an accelerator, a tool to help me complete tasks that are otherwise monotonous or time-consuming. I often find myself using it to complete tasks that I am otherwise capable of completing, simply in a fraction of the time. Or especially to get 80% of the way there so I can put the finishing touches on. At least, that's how I approach it when working with scripts, creating tools for myself, and so on.

Take scripting, for example. I recently needed to go through a variety of configurations in order to identify those that contain a particular setting, so that I could make adjustments to those. Instead of clicking through hundreds of those configurations, I had AI write a Python script to download each one, evaluate them programmatically, and create a CSV with the results. I then also wrote a second script with AI (though it could have been a single script) to make the edits I needed. Of course, before deploying in production, I always check the work of any AI-involved result. But what would have taken me 1 to 2 hours (let's call it 1.5 hours) took me 15 minutes. That one act reduced time spent by 83%, which is wild savings. What's important with that is I now have that time to work on other things. If I can use AI to that same effect just 4 times a week, I will have saved nearly an entire workday each week, which again, can then be spent on other things.
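A minimal sketch of the kind of config-scanning script being described here - check each configuration for a particular setting and dump the matches to a CSV. Everything in it is an assumption for illustration: local JSON files stand in for the downloaded configurations, and the setting key and filenames are made up; the real script presumably pulled configs from whatever management system was in use.

```python
import csv
import json
from pathlib import Path

def find_configs_with_setting(config_dir, setting_key, out_csv):
    """Scan every JSON config file in config_dir, record which ones
    contain setting_key (and its value), and write the results to a CSV."""
    rows = []
    for path in sorted(Path(config_dir).glob("*.json")):
        config = json.loads(path.read_text())
        if setting_key in config:
            rows.append({"file": path.name, "value": config[setting_key]})
    # One row per matching config: filename plus the setting's current value.
    with open(out_csv, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["file", "value"])
        writer.writeheader()
        writer.writerows(rows)
    return rows
```

The CSV of matches then becomes the input for a second pass (the "edit" script mentioned above), which keeps the read-only scan separate from anything that changes production configs.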
Deadlines become less stressful, tech debt gets addressed, I have more time to explore and learn new things, and I can generally spend more time being an engineer.

I do naturally find AI to be less useful in areas I am not experienced enough in to understand or make use of the end result. For example, because I don't spend time re-learning app development, I find it harder to make use of AI in its current form to help me produce an app I want to make. Sure, it can reduce the time taken, but it's less impactful because I don't know how to get the rest of the way. It does also open up some learning possibilities by itself. I can now ask the AI "How would you implement XYZ?", just as I would a human, and see it work, giving me something that I can then try to evaluate and learn from myself.

Q2: AI is extremely problematic, but I don't think it really has anything to do with the technology itself, just the usage. I find AI to be a fundamental step towards expanding the capabilities of humanity (and I don't say that lightly). The problem is, not everyone is going to use it to expand the capabilities of humanity. You see, there's this thing called greed. We see companies like Google racing to make their mark on AI by shoving it down the throats of users. Performing Google searches has gone from a way to research, learn, and get answers to a roll of the dice on what information or misinformation you receive.

It's important to note that what we see as AI still takes the form of Large Language Models and similar, more basic (yes, basic) models. They don't generate answers; they anticipate what the answer might look like to start, and what might come next in the answer. The problem here is, Google can already point in the direction of answers. It doesn't need a language model to try to guess what the answer might be. Google and other AI search implementations are trying to __be__ the answer, which is not what they are great at.
Google and other search engines were to the earlier dot-com boom what AI is to modern-day compute: an accelerator of tasks by itself. AI needs to be its own thing. Can search be augmented in a positive way by AI? Yeah, probably. But this current implementation is a prime example of a haphazard rush to put AI in your core product at any cost, due to bowing down to greed.

At the same time, AI is being used to further expand knowledge and research in very important ways, often, again, as an accelerant. It's changing the routes we take to obtain answers or results. Of course, it's reasonable to be skeptical. I am too. But I am also hopeful that we will yield results that will make a positive impact on humanity, which appears to also be the general sentiment. I recommend checking into articles and papers in areas of importance. In the sciences, medicine for example, https://magazine.hms.harvard.edu/articles/did-ai-solve-protein-folding-problem is a good example; and there's more from the Harvard Medicine magazine, which has a section on emerging technologies: https://magazine.hms.harvard.edu/issues/autumn-2024

Q3: I have some pretty polarizing hot takes surrounding AI art. First, I'll start by making people mad: very rarely is anything you create composed of fully original ideas. Everything we do is derivative or based on other works. Rarely is anyone a visionary in art, or creating fundamentally new ways to create art. And for that reason, I cannot find fault with AI art's existence, because it is doing __exactly__ what we do. Let's say you wanted to create a bird wearing a tophat. You know what a tophat looks like. You know what a bird looks like.
You know that it might be anthropomorphized a bit to achieve your goal. You might also determine the appropriate attire needs accessories. So what makes using diffusion with JuggernautXL and a LoRA any different? You still produce a result based on your own observations, and so does AI. AI is trained on a typically finite amount of input, but then again, so are we: the sum total of our experiences.

Again, the issue is with usage and implementation. Depending on how things are designed, your end result can remain too close to existing works. That's not unique to AI, though. If someone is asked to draw a bunch of examples of a swish in a single stroke, they might also come up with the Nike logo on occasion. Because of this, I don't find "AI is derived from existing works" to be a very useful argument when debating AI. As we progress, AI will get better at generating more unique components of an image, and therefore at generating more unique images and other works.

It's important to remember we have the laws of the land. If AI generates something too close to the likeness of another artist's work and that gets used commercially, then that work should also be subject to the same penalties and protections that we already have. We already sit in judgement of similar works, so what makes this any different? Many of the issues people have with AI art are not unique to AI art either. Having your works copied generally sucks. And the smaller you are (read as: not rich), the harder it is to make use of those legal protections, and the easier it is to be abused by those same protections. DMCA is an absolute mess, and it is not an easy system to fix. But that's exactly what needs to happen. I still don't think it needs to be done with AI in mind either, because functioning protections would protect regardless of the method.
We should very much hold accountable (as we already should be doing with existing protections) organizations that use the results of both people-created and AI-created works that infringe on existing works.

There's one very obvious area that I have not touched on, and that is the frightening reality that deepfakes and freshly generated works used within propaganda, often to do harm, are very much a thing, will continue to be a thing, and at this point are inescapable. At least, we think they're inescapable. This is not the first time we have run into fictitious works being used. Forensic processes have tried to get out ahead in the past as well. For example, printer tracking dots (https://en.wikipedia.org/wiki/Printer_tracking_dots) offer one method to tie prints to a specific printer. Having works tracked in any way is __also__ somewhat frightening, particularly in countries where the governments aren't entirely on the up-and-up. But these exist as ways to hold people accountable for the works they produce.

Watermarking has been a thing for hundreds and hundreds (or thousands) of years. Especially with the existence of metadata and so on, watermarking in the digital age is already established, and is likely to continue to be expanded. This won't help much in the way of state-sponsored propaganda machines or other bad actors, but it does allow for more direct scrutiny as a whole. And tools will continue to progress to fight bad actors that use AI maliciously. We know this to be true. For every new creation, there are bad actors who want to use it for, well, bad. But there are also those who want to use it for good. We explore space using vehicles that were originally designed to deliver death and destruction to people on the other side of the world without getting out of our seat.
Nuclear technology was also intended for weapons (which we did use a couple of, then proceeded to misplace a few more), but it is also a source of some of the potentially cleanest energy we can create in comparison to other existing energies. Radar was created to detect and track enemy aircraft, but it enabled aviation, weather forecasting, and even ultrasounds (Doppler is our homie). Cryptography was used to safeguard communications from the enemy and, well, not dissimilarly, is used today to safeguard information from bad actors. GPS was built for the military but now lets us get from A to C while stopping at B for a coffee. AI is not unique. We still have a long way to go before AI takes over many more tasks, and we will see that it is going to take __a long time__ before it can.

#### Person 2 (Engineer):

So, benefits: streamlining processes and repetitive tasks, and potentially finding unique properties in datasets. Artistically, producing interesting or unique composition concepts. It __can__ be helpful in many ways.

Problematic: like any tool, it depends on who uses it. I think the possibility for extreme maleficence is extremely high, especially with no or very minimal legal code. I think AI art can be a tool, but not an end-all-be-all. And the method by which it was trained is egregious.

#### Person 3 (3D Artist):

Where do you see it to be beneficial? Is it helpful in more than one way?

I have used AI to help me with programming simple applications. I still needed the skills to write out what I wanted in terms of pseudocode for best results, but it has accelerated what I want to do. We still need a basic understanding of how to read code, as this allows us to better express what needs to be changed or updated.

Where do you see it being problematic? Do you think it's problematic at all?

AI can hallucinate, which is a massive concern.
This can lead us down rabbit holes that do not exist, and so we waste time fact-checking the information that it is providing. This is the same with programming. It has ingested 90% bad code, which means that the results will favor bad code over good code. I also believe that this can cause laziness amongst people. I do not understand this in the university sector, where students are having AI write their assignments and not learning anything in the process. When I went to university last year, I am proud to say that AI did not write a single word of my assignments; however, I did use it as a stand-in teacher. I would input my topic and the rubric, and ask the AI to mark my assignment based on that information. From there I was able to update my assignment to get the best mark possible. There is an ethical consideration in how people are gaming the system, but there are no repercussions for people who do this. YouTube is filled with AI channels that are taking monetization away from actual content creators.

In regards to AI art, what are your thoughts? Do you think it's good or bad? Why?

I was talking about this with my kids: although AI has been able to create art for two years or more now, people are still drawing, designing, making movies. I use AI art to come up with concept art that I can then make into 3D. Will AI 3D modelling take over my role? Maybe, but I will still be able to make things manually. Is using addons in Blender cheating to get the outcome that I am after? Then using AI to create models to get my desired outcomes is the same thing. I have no intention of doing character modelling, so I have a tool that will make characters for me. Is that cheating? And what can we classify as cheating? I ran into a Midjourney engineer at SIGGRAPH, and he said, "We know people are creating images, but we don't know what they are doing with them." I was never afraid of AI art, but when they released OpenAI's Sora, that was the first time I was afraid.
Growing up, we had a few terrible sites where you would see gore images, which still haunt me to this day. Now we can generate a video based off tragic events that people should not have to imagine, let alone see. Another point is that if there is a video or audio recording of us as individuals doing something that we should not have, we can blame AI: that is not us, someone is trying to make me look bad.

#### Person 4 (Animator):

Medical fields have used it well, and organising a list of tasks; someone used it to help with meal prep and doing recipes from already-bought ingredients.

It's problematic in that it consumes a tonne of energy, and is built off the scraping of data and information from the web that was neither consented to nor compensated. ALSO, the AI on Google is wildly inaccurate at times, giving false or harmful information that it's scraped from everywhere rather than just reputable sources.

I can understand it being used for composition or idea generation, just to get something down that you then go over later. BUT the whole thing has been built off scraping data and art from artists who didn't consent or get compensated. Fair enough if you're making your own model and training it on the art you've made, but most people DON'T do that, and the big companies are charging for it, which is HELLA RUDE.

#### Person 5 (Artist):

I think it'll be beneficial in places we ourselves can't reach, like exploring places that we can't be.

I can see it being problematic when it starts replacing more and more things, leaving less stuff for us to do.

I think AI art is pretty nifty as a tool to stimulate creativity, but it shouldn't replace it. I don't think that people who use AI to make images can call themselves an artist.

#### Person 6 (Writer):

[Using AI] makes me feel I've done something wrong, that I'm not smart enough and the world will know. I see it as invasive and cold. It doesn't know compassion nor kindness. Nor emotions nor tears.
It knows no struggle and therefore no survival. However, lol, AI is assisting me in writing a novel. With my speech and cognitive disability, it gets out what I'm trying to say. Mini help. Not full-time help.

#### Person 7 (AI Game Developer):

- Productivity, optimization, analysis
- Powerfully abusable; believable misinformation
- Designed by humans, innate flaws
- Vulnerabilities in early or low-quality situations; unshielded to EMPs, base code, integrated power or component framework
- Great, imaginative if truly "art"; real-world creator verifiable
- Realism a different story... probably bad, needs lawful rules, just like the real world, likeness etc.

#### Person 8 (Artist, Government Contractor):

I guess I see "AI" as another form of automating, specifically automating thought. I don't see automating thought as useful for anything truly human-driven, and don't want to see it replace the word choice people would use in things like blog posts and essays. A YouTuber I respect referred to AI art specifically (the video topic was AI art) as "lowest denominator," and I think that's a good word choice.

I think you've already gotten everything I would have said in response to your questions, by the way. Benefits: troubleshooting in a work setting, like for coders. Drawbacks: literally any other application, but especially one that replaces genuine human interaction with the language models. The theft implications of "AI" art are a genuinely interesting philosophical discussion, but that discussion should come second to understanding that we've watched skilled work replaced by automation in a number of other fields (getting into Luddites and looms), identifying whether that automation was worth it, and whether this one will be. To me, even a soulless corporate art project is labor by a person, and the kind of art we already see as soulless had to be made by someone, and should continue to be made by real people.
It's not ideal to make art you're not passionate about, but art is a luxury, and the conversation about corporate use of AI art implicitly involves a conversation about wealth.

#### Person 9 (Pilot):

Ooooo, definitely some great questions. There is a lot to dig into here. This might take several responses from me due to travel.

If you look at AI through the lens of a resource, there are many benefits. The first is as a utility for humans to quickly perform research that would otherwise take many more hours of manual work using traditional methods of study. Two issues arise with this: the accuracy of the data pulled in the research, and the source of the data that AI reached out to for information. I think before we dive into such a deep pool of thinking and analysis, you are going to have to define the differences between AI and other forms of machine learning/response (think Google, Alexa, web crawler search tools).

Here is an example of a device that uses advanced tech that could be mistaken for AI. The drone I fly has the capability to fly itself, make decisions on what to set for its flight path given a strict three-dimensional boundary defined by the human pilot, and the ability to recognize hazards in the way of its flight path. In this case, I am talking about a Skydio S2+. I can define its flight boundary and tell it to begin a scan of its environment without a human moving the sticks. It flies itself. The question arises: is this truly an example of AI? So then, my question becomes: what is the true definition of AI?

Of course, AI introduces a whole plethora of potential issues. The big one that comes to mind is: is it possible for AI to become self-aware? In the scenario that it does, how does this affect humans? Is there a point where we lose control of AI if it begins to write its own code? It could get to the point of more efficient coding tactics than what even the best programmers can achieve.
If so, could AI manage to weaponize itself? Through the lens of using AI as an educational resource or research utility, we have already seen AI produce inaccurate information. If social media hasn't blurred the lines between accurate and inaccurate information, AI provides a whole different level of complexity, especially if AI begins writing its own articles. Think of the case of the election this year. It might be a heated point of contention for your video, but it's nonetheless important to examine.

As for AI generating art, I see it in a very negative light. If anything positive, it opens up a realm for experimentation to see what AI can do. All else aside, AI would reduce the incomes of authentic human artists by flooding the industry with easily produced, computer-generated tunes. Then again, could it be possible that the value of a musical piece created by a human increases as human-generated music becomes more rarely available?

#### Person 10:

AI should only be used for very basic tasks and to basically "get the easy shit done" in a flash, or to give you a solid starting point, but never in any artistic sense. Once you try to use it for anything beyond surface level, it starts to hallucinate like crazy. For programming, it's fantastic for a quick loop that would normally be something you'd go to Stack Overflow for, but for anything more advanced it quickly starts giving you slop. For Civil 3D (AutoCAD), their AI implementation is very lackluster and only really good for a rough draft and for doing proofs of concept. Essentially, I throw a few parameters in for a site in about 5 to 15 minutes, and it quickly goes through the process of checking whether something is feasible. While its hallucinations aren't egregious to someone who actually knows site design, if you are someone who is new, you won't even notice or see them. Or worse, you'll see it doing something wrong and not know why. For art, I don't really see any good application of AI art.
Even at the most base level, such as concept art, concept artists are there to create something novel, which AI is inherently going to struggle or fail at. Even something as "simple" as creating a logo requires creativity, unless it's based on something else, like "Make me a logo similar to Jaguar's but using a velociraptor instead."

#### Person 11 (AI Trainer in the Medical Field):

__Where do you see it to be beneficial? Is it helpful in more than one way?__

I loosely agree with Moravec's paradox: there are many things humans can do easily which machines struggle greatly with, and vice versa. I think machine learning is especially beneficial when it does things that humans are not capable of, like using LLM embeddings and vector databases to allow for super-accurate search through huge collections of documents in a matter of milliseconds. I also think it's beneficial to replace humans if we can undeniably prove that the algorithm is superior to them; I suspect that this is what will happen with self-driving technology in 20 to 25 years. Right now, self-driving cars have some obvious flaws that are holding them back from mass adoption, but as someone who sees autonomous Waymos nearly daily successfully completing trips around LA... I think the future trajectory is clear: AI will almost certainly learn to drive cars more safely than any human, and that will probably save lives!

Additionally, there are jobs which are very risky for humans to do which robots would be ideal for: search and rescue, firefighting, mining, etc. These jobs put people's lives at stake, and if we ever build robots smart enough to competently do those jobs, I think it'd be a no-brainer. Of course, the words "smart" and "competently" are doing a lot of heavy lifting here. One MAJOR way I have already seen AI positively impact the world is education.
Yes, a lot of kids are using ChatGPT to get out of doing their homework, and that's not really ideal, but I'd argue that's just a symptom of the school system not having had time to properly adapt to new technology. The progress I'm referring to is the incredible usefulness of AI assistance in the context of tutoring and learning. As a concrete example, my mom has been going to school to become a licensed therapist for around a year now, and she makes extensive use of language models to help tutor her. She doesn't take their word as an ultimate source of truth; she just uses them to assist her in her own learning efforts. She sometimes struggles to understand the dense, academic, poorly worded passages in her psych textbooks, but all she has to do is copy-paste the confusing part into ChatGPT or Claude and have a 5-minute conversation back and forth with them about what it means (taking some time to fact-check if necessary), and just like that, she's made real educational progress that she wouldn't have otherwise made!

__Where do you see it being problematic? Do you think it's problematic at all?__

Slop and misinformation are things I've been thinking about for a while now. I had a few conversations with friends back in late 2020 about how Turing-test-passing neural networks were probably on the horizon, and how I was worried about their impact on society. About 2.5 years later, ChatGPT released to the public. I know you already know this, but: back in the '50s, Alan Turing famously described an "imitation game" (which we now call a "Turing test") in which a human contestant must communicate with an anonymous entity exclusively through written text and attempt to determine whether that entity is another living human being or a computer pretending to be a human being. Any algorithm capable of behaving convincingly enough to fool a human judge is said to have passed the Turing test. As of 2023, such algorithms now exist.
The original Turing test dealt only with communication through language, but allow me to propose an extension of the test that would involve all modes of digital communication. Basically, you can imagine being given an iPhone with a single contact in its contact book, and your job is to determine if that contact is a real human or an AI. Crucially, in this scenario you are allowed to communicate with the contact in any way possible so long as you only use the iPhone. That includes text messages, voice messages, image and video messages, live voice calls, and even live FaceTime calls. So far, we don't yet have algorithms that can pass this harder version of the Turing test. But we're getting scarily close. Everybody laughed at the AI-generated image of the Pope wearing a poofy jacket that went viral last year, but I'll be honest—I saw that image on my Twitter timeline and scrolled right past it without even realizing it was AI. Later, when I heard that it was, I was able to go back and inspect it closely and see obvious diffusion-model artifacts in the fine details of the image... but I definitely didn't notice them right away. Google and OpenAI have both demoed advancements in video generation which blow all previous work out of the water. AI voice cloning went from research curiosity to a product you can buy for a few cents a minute in the space of, like, 3 years (see: ElevenLabs), and it's already being used by scammers to automate the extortion of vulnerable elderly folks. In Ray Bradbury's Fahrenheit 451, most people are hopelessly addicted to watching mindless formulaic slop entertainment 14 hours a day on gigantic screens which cover their homes' walls. A similar dystopia is imagined in WALL•E. The Matrix is a whole-ass trilogy about the human mind being enslaved by AI and complacently existing in a simulated reality. 
While I don't fear our society will ever devolve quite as far as these pieces of fiction suggest, I do fear we're headed vaguely in that direction. As it continues to become harder and harder to distinguish between humans and algorithms, I fear that many people will get sucked into never-ending echo chambers and firehoses of slop content tuned precisely to capture each individual's attention. Imagine the YouTube Kids content machine, but good enough to ensnare even full-grown adults. The one hope I have here is that I don't think we'll see the invention of physical Blade-Runner-style Replicants within my lifetime, which means that I should be able to trust the veracity of any and all in-person interactions I have with other humans. But I am apprehensive and a bit sad about the fact that it may become nearly impossible to meet people digitally, because of how hard it will be to verify they aren't a bot! __In regards to AI art, what are your thoughts? Do you think it's good or bad? Why?__ I think it's morally neutral, like almost all technological advances. I see it kind of like a new type of render-engine. Blender's Cycles transforms complex geometric scene descriptions into physically accurate images by using awesome math and algorithms which were derived by very smart people from first principles and a strong understanding of optics, physics, and computer science. Existing "classical" render engines are built methodically from a tiny set of elegant rules, while AI image generators are built empirically to model an insanely complex probability distribution based on billions of samples from that distribution. Regarding the value of the generators' outputs—well, I guess beauty is in the eye of the beholder, yeah? Always has been, always will be. Me personally, I have effectively zero interest in any image I know has been AI-generated. I know that many people feel similarly. 
For many of us, the entire point of art is that it is an expression of human experience created through human creativity, skill, and effort. It is a heartfelt communication between human beings. But there are plenty of people who don't care about the human element of art, and never will. Prior to the invention of AI image-generators, these people had no other option but to pay human artists to create their logos and Corporate Memphis webpage graphics and company "swag" designs. But now they do, and so I think it's likely that as these generators get better and better, we'll see artists lose jobs in this market segment—but it's not because "art is dying", it's just that the people who were employing those artists were never purchasing art in the first place, they were purchasing "visual assets". All that ever mattered to them were the final RGB pixel-values, not the human experience embedded in those pixels. And so now that we have increasingly remarkable RGB pixel-value generators, they will increasingly favor those over the human artists, because the generators will be cheaper and faster. I don't think AI art will impact the fine-art and commissioned-art spaces too much, aside from making it increasingly hard to trust that the artist you commission is actually making what they say they are using real artistic skills. One silver lining to this, though, is that it might make in-person art shows and conventions much more popular, because one way you can know for sure you're buying Real Human Art™ is by meeting the artist in person and physically observing their process! I do believe that AI image generators will enable new forms of real art that were never possible before, too. For instance, perhaps in the future we will invent new neural network architectures that are so powerful and data-efficient that they can feasibly be trained on just the artistic output of a single person. 
In such a world, I could, in theory, train an image-generator on nothing but my own art, and then use it extensively in my workflow. Maybe the algorithm adaptively learns my shading habits and autocompletes sections of laborious crosshatching for me. Maybe it sees me try and fail several times to draw a complex form, but by observing those attempts, it can get an idea of what I'm trying to draw and make subtle corrections to accurately express my intent. My point with all of this is that in this scenario, the AI is nothing more than a highly-advanced tool. The AI is merely acting as an extension of the artist's own mind. Now imagine instead of a single artist, a collective of artists who all unanimously agree to pool their work together and train an algorithm on all of it, and then share that algorithm amongst themselves. I find this sort of thing fascinating. My point with all of this is: it's clear that not all creative outputs can be cleanly categorized as "AI slop" and "real authentic human art". In reality, there is a whole spectrum that such outputs can lie on, ranging from "a neural network was used to stabilize my brush strokes" to "the computer literally read my mind and put the exact image I was imagining on the screen without me lifting a finger", to "i trained an algorithm on billions of people's images without their consent, used it to generate anime titties, and will now go on Twitter and fucking act like i invented the concept of art" In summary, as you say: let complicated things be complicated! I genuinely believe AI has enormous potential for positive change in the world — I believe robotic labor can and will make healthcare, housing, and food cheaper for everyone, for instance. I also think it's pretty obvious that this is a technology we do not yet fully understand the implications of. We're all in for a wild ride this century, one way or another. 
I have hope that it will be a net-positive ride, but, well — "net" is doing a lot of heavy lifting there. There will be incredible ups... and there will be fucking awful downs. I wrote so much here and tbh I could have written way more. Thank you for asking these questions, and I hope my thoughts can contribute to a well-rounded dialogue. I really look forward to seeing what you do with this topic. If you ever want to chat or call and talk more about this stuff, I'd be happy to ramble away as long as you'll let me :3