
Information Technology
Jul 22, 2025
Brooklyn
AI and IT
A gathering for IT professionals to meet, schmooze, and geek out over good food and great company.
[ PRESENTATION ]
Video Transcript
It's really nice to meet in person and see each other. We spend our days behind screens and licenses and everything else. I think everyone here appreciates everyone else; everyone can be knowledgeable, but that alone doesn't sell services. For how we actually can do it, we just need some guidance, and that's the best thing to pick up tonight. Tonight we want to talk about a hot topic: AI. Let's get some spice in the room. Yeah, move around. I move around a ton. All right, I really don't need the mic, but I'm fine with it or without. All right, good afternoon, good evening, everybody. Sorry, it's my second speech for today, so I'm still a little bit under the weather, we're going to call it. So, I have to tell you guys, it's a time of miracles. We all know that things are going on all over the planet, and there are wonderful things taking place. Can I please just beg your indulgence? I will share a miracle that happened that I know of personally. How are you tonight? Can I do that? Do I have permission? Everybody say yes. Yes. Great. This is a very interactive presentation, by the way. I hope you guys will take advantage. Can you hear me? Can I hear me? All right, perfect. So a miracle actually took place today, and I just wanted to share it with you all. We all know what's going on in Israel, in the Holy Land. And as it pertains to myself, I was on my way to Borough Park this morning, and I was running very, very late to a presentation, and I came down 13th Avenue. I was going to park in a paid lot, but then I realized all I had with me was my smartphone. I didn't have a key with me, which becomes a problem when you try to park in a lot where they take your car. And a miracle actually took place, one that I know of personally: I found a parking spot on 13th Avenue. It just happened to me. I know it. True story. But it really is a pleasure to be back in Borough Park. It's been quite some time since I was here. I very much enjoyed walking down 13th Avenue for the first time in a long time, taking it all in. It's such a breath of fresh air. Not literally, but figuratively, it's wonderful to be back in Borough Park. Now tonight's topic is AI, and it is very, very fascinating, because this is something that we are all experiencing in real time, and we need to try to figure out where we fall in this process, and the answer to that is actually not very simple. Believe it or not, AI will change everything, in my opinion. Am I correct? So let's dive in, and I will show you a bunch of tech examples that may or may not have lived up to their expectations. You guys can give me your opinion, and I'm hoping to hear some of them. I know some of you are very passionate, but so far you're all sitting there like a bunch of lemmings, which, you know, is unusual for IT guys, but we'll get you all going, just give me some time. So, number one, did any of these technologies really change the world as we thought they would? Anybody ever hear of this one? Did this change anyone's life? Think about how much these were supposed to change and how little they all accomplished for us. Cortana. I remember the first time that I turned on a laptop and was not aware that Microsoft had turned on the audio, and it was 3:00 in the morning. I'm telling you, I almost had a heart attack. It's a true story. Terrible. Anyway, so that's Cortana, for those of us who remember it. Self-driving cars. Did anybody arrive here in a self-driving car, which we were promised would be complete by 2020? 2020, we were supposed to all be sitting in self-driving cars.
Now, they'll tell you we're up to 90%. Can I ask you a question? Did anybody ever use a bottle of bleach? Because on the bleach bottle, it says it covers 99.9999%, but they can't get that last little bit, because that's the hardest one. As the percentage goes up, it gets harder and harder, and we might be very, very far away. I don't know, but so far, my life has not changed all that much, except watching some very entertaining footage from California of cars burning. Other than that, Waymo hasn't done all that much for me. VR, Google Glass. Please raise your hand if you've used one of those in the past six months. We got one and two. And we are the most technical people on the planet. And yet nobody got anything from it. And Apple's putting out $5,000 devices, expecting this to make the world change. Now, again, none of these things are going to go away as if they never existed, or they might, but they didn't change the world as people had expected. Tech and health wearables. How many of us are wearing tech and health wearables? And how much of your life has this changed? Because they said when they were coming out that this was going to change everything. We will be connected, etc. You guys know where I'm going. Anybody know what this acronym stands for — IFTTT? Okay, have any of us actually used it in the past few years? We got three, four, five. All right, so we are nerds. We can all admit it. We're nerds, but nobody else besides us has actually used it. And I can't hear you. Why? Why? Why do you say that, with all these devices? I'm going to come back to this soon. So, IFTTT is not an end in itself. Okay, fair enough. And it's a protocol, I get it. Fine, we'll come back to it, and I'll tell you what I actually mean. Okay, because when this came out, they didn't demonstrate or advertise it the way it is now. And good — we're getting some passion, we're getting people excited. I'm on the right track. All right. Smart homes. They were going to change everything, and companies have spent billions of dollars going after smart homes. And our homes are a little bit smarter than they were when the internet was first invented, but they're not that different. And if you took a poll of all the houses in Borough Park, I'm very curious what the actual statistics would be — how many homes actually have so much smart equipment that they're much smarter than they used to be. I don't think it's the case. My fridge has not ordered me milk in a very, very long time. Am I the only one? Drone deliveries. Anybody get one of those recently? Drone delivery — did you get one? You got a drone delivery. All right. And how about blockchain and NFTs? Blockchain, NFTs. Again, billions and billions of dollars pouring into these technologies. And finally, this is my favorite one. Can you all take a breath for this one? Has anybody here used the Metaverse recently? Because I promise your grandchildren will have no idea what you were talking about and what the Metaverse was, why people thought that that was going to change everything. And, you know, to the great disappointment of our friend Mark Zuckerberg, that is not something that changed. So, is AI any different? Is AI any different? I got a few no's, so that's fine. Let's debate. AI is vastly different. Now, I thought like everybody else, when I first saw it, I was like, "Oh, it's a cool gimmick. Somebody put up like an Upsherin poem." That was the first time I'd ever heard that there was such a thing as AI. And I read the poem, I'm like, "Really?
That's what I'm excited about?" They took the internet, they jumbled it up a little bit, and made a poem. I was not impressed. Let's see. I venture that actually AI is vastly different than anything else the world has seen, probably since the advent of the internet. And when the internet was invented, nobody understood what it was going to do. AI, I think, is much, much, much different. Number one, writing. I would venture to say that there is almost nothing you have read — unless it was a holy book and it was Shabbos — that was actually written solely by a human being in a very, very long time. Okay? I am not much of a LinkedIn influencer, but the once in six months that I put up a post these days, it still gets edited by AI. And everything that you're reading today, all the news articles, and the statistics sometimes, are made up by AI. There are even some court cases where the actual legal citations were all made up by AI. So there you go; with writing, almost nothing is untouched. Next, research. Tremendous amounts of research. You can find things using AI that you cannot accomplish any other way. When I use Google now, I am shocked by how crude it is. It's just like, this is not accomplishing anything that I need. It doesn't understand anything I'm talking about, and it's giving me all the wrong information. That is astonishing. Anomaly detection. I've used it to find specific logs in massive files. Show me the one thing that's different in this entire file, and it will pull out incredible information. Warehousing. Amazon just announced they're preparing to lay people off because AI is taking over a lot of functions that they need. Chat and shopping bots. I was on the phone, and I was chatting with people. I'm telling you, I know this business pretty well. 20 minutes in, I was not sure if this was a human or if it was an AI bot. In the end, it was just a guy with a very monotone voice. And that's okay. But practically, that's shocking. I should know this, and I couldn't figure it out for a long time. It's getting very scary. Image and video creation: all of Hollywood is trembling, because they could go out of business or vastly change very soon. Code writing is not the same as it was. Can I tell you guys what happened to me three days ago? I was talking to a plumber, okay? And he's a Hasid with nice peyos, wearing his whole traditional getup, okay? He was dressed like a plumber, and he comes over. He says, "Did you know that I built an app?" And I said, "What are you talking about?" And he pulls out his smartphone, and he shows me: "I built an app." You know, he has to manage invoices and do projections and calculations. He did vibe coding, which is a new term that I hadn't heard of as of a week ago, and which is basically: just describe to the AI what you want. And he built an app. That is astonishing. The world is not the same as it was just a short time ago. Entertainment, like we said, is very different. Let's move on. Some more nefarious type things, shall we? Weapons will be very different. And as scary as that is, for good or for bad, that is very different. Wars are not being fought the same as they were just a year ago. And if you don't believe me, I'm about to show you. And lastly, agriculture. Do you think I'm exaggerating? Can I prove it? Let's do one at a time. Has anybody here heard of Palantir? I had almost never heard of them. Just this year, their stock surged 452%. Okay, which doesn't mean much if it's a penny stock, right? But let's keep going.
Two years, over two years, it's 747%. In three years, it's 1,567%. I am not a stock trader, but I am told that that is very impressive. Can we all agree? What do they actually do? I did not know until I actually looked it up. They are known for advanced data analytics, and they've been quietly powering intelligence and battlefield logistics. Was I exaggerating? Is warfare the same as it was? Since October, Palantir has supplied battlefield-grade AI tools to Israel's Ministry of Defense. So, if you're wondering how all these miraculous things are taking place: I heard a statistic that Israel had over, I believe it was, 20,000 analysts just looking at Lebanon and Hezbollah. Now, you could have 20,000 analysts working from today till tomorrow, but until you have an AI machine that can figure out what to focus on, it's very, very difficult to try and build any type of actual real-time action plan out of 20,000 people. I think — I'm not in the industry, but I do believe that to be the case. The Department of Defense expanded its contract ceiling from $480 million to $1.2 billion. Okay? And this is not all government waste — the DOD, I don't think, has cut this one down. So they do believe that this is a very, very big deal. That is number one. So that is war. I believe war is not the same as it was. How about this? Anybody know what we're looking at? And this is not a clip from Star Wars. This is indeed agriculture. I was in Israel not too long ago, and I was on a kibbutz, and on this farm, this guy told me, "Did you know that currently — as you can see, the farmers are walking around — they're painting pollen onto each plant individually?" And he told me, "Do you know why they're doing that? Because the beehives have collapsed. This is a known problem going on for a while, and we have to manually paint pollen on each individual plant that needs it." But he said, "Did you know that as of next year, we don't have to do that anymore? Because we have drone technology coming for the next season. The drones will be able to detect by themselves which plant needs pollination, and they will go and paint it automatically." Is the world the same as it was a few months ago? I don't believe it is. Now, what are some of the risks? I think there are tremendous risks with AI. And I'm going to start with the risks, because that's kind of what I do. So, I'm going to disappoint all of you, because you probably all want action plans for how you can make money tomorrow. First, I'm going to start off by telling you all the things you should not be doing and why you shouldn't be doing them. And I'll try to fix that in the end. Can we do that? Let me try. People who don't understand how it works or its limitations will get very, very hurt. Okay. I tried explaining that to the plumber before he left, and, you know, I think I kind of dashed his hopes and dreams of becoming a millionaire. So if you're a therapist, you might want to reach out — he's looking for somebody to talk to. Anyways, so here are some of the risks. Number one is that people don't seem to know — and he did not know — that LLMs don't understand anything. They have no idea what it is they're looking at. They are literally performing some type of mathematical equation and guessing what it is. As you can probably guess, that's going to cause a lot of problems, as we will soon see. They work statistically, not logically. Meaning there's no logic, there's no emotion, there's nothing there. It's all actually just statistics.
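To make that "statistics, not logic" point concrete, here is a toy sketch (an editor's illustration, not the speaker's, and nowhere near the scale of a real LLM): a model that only counts which word follows which will happily "predict" whatever was most frequent in its training text, true or not.

```python
# Toy illustration of "it's all statistics": a bigram model that predicts the
# next word purely from counts of which word followed which in its training
# text. It has no idea what any of the words mean.
from collections import Counter, defaultdict

training_text = (
    "the default port is 1514 . the default port is 1515 . "
    "the default user is admin ."
)

# Count how often each word follows each other word.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the statistically most common follower of `word`, if any."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict_next("default"))  # 'port' -- most frequent follower, not a fact
print(predict_next("is"))       # '1514' -- picked by frequency, not by truth
```

Feed it more text and the guesses get better, which is exactly the next point the speaker makes.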
And finally, the more data they are trained on, the better their predictions get. Meaning, you can take an LLM, and it could be the best one on the planet; if it doesn't have any data, it cannot produce anything. Now, what does that mean? They need data. Remember how Google lived on data, because that's what it took to make sales? These guys live on data way more than Google ever had to. They need the data. That is a big problem for us, and let's understand how that plays out. So here are the actual risks. Number one is hallucinations and bias. That is something very common. I'll do one demo of it, and you can draw your own conclusions. Data reuse. Like we said, they need your data. And what happens if they share that data with other people? Very, very common. That can happen. Malicious tools are out there. The hackers know that everyone's grabbing and feasting on these specialized new tools, barely even reading the description, installing, and doing things. Big problem. Deepfakes are a problem. New threats. AI is finding totally new threats that we have never seen, because it's so good at analytics. Those threats are coming hard and fast. And finally, there's actual business risk. Before we all go running back to our workstations and trying to implement all the wonderful ideas we have, you need to know the serious business risk with a lot of these things. You can get excited about a product or a capability or a tool or a function, and it can cost you a lot of money to try to implement it. People come to me and say, "I have a great new idea. It's going to be the next Google." And I respond, "Do you know how much I consider a good idea to be worth?" How much do I consider it to be worth? You might want to guess: $250,000. That's the starting point, because that's how much it will cost you just to start working on your project. After that, possibly it'll make money, but that's how much just the idea is going to cost you. So, we'll talk a little bit about those business risks. Let's start with hallucination. This is something that happened to me. I'm telling you guys, I almost fell out of my seat. Okay? I was working with a tool. It's a common cybersecurity tool known as Wazuh. You can look it up. It does, um, log analysis and alerting, and I wanted to know what would happen if I changed the default ports on that tool. Now, I've given this presentation a few times since I first demonstrated this, and I'm very thrilled that for the first time I can talk to people who actually know what I'm talking about. So, I'm very glad you guys are with me here tonight. So, we're going to try changing the ports. I asked ChatGPT, "Can I change it?" And it said yes. It gave me very clear instructions on how to change it. And it seemed too good to be true. So, I said, "Okay, here's my idea. Look through Reddit, see if anybody changed their default ports, and what their experience was." Okay, so I figured I'd get the hard data — users on Reddit should actually know. "After reviewing discussions on Reddit, several users..." blah, blah, blah, and it gave me key takeaways. And it spit out, once again, the same information as before. So at this point, I'm very suspicious. I said, "Okay, give me the links so I can check them." Can I show you guys the links? It spit out the following three links. Now, these links, as you can see, are exactly what I was looking for. These are perfect. Now, I work in cybersecurity. What did I do before I clicked on those links? I hovered over the link.
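As an aside on that "check before you click" habit: here is a minimal sketch of the kind of pre-click verification being described, assuming Python with the requests library installed. The URLs are placeholders for illustration, not the actual links from the demo.

```python
# Minimal pre-click check: does a link an AI cited actually resolve, and where
# does it really land? The URLs below are placeholders, not the demo's links.
import requests

suspect_links = [
    "https://www.reddit.com/r/Wazuh/comments/example_thread/",
    "https://www.reddit.com/r/Wazuh/comments/another_example/",
]

for url in suspect_links:
    try:
        # Follow redirects so we see the final destination, not just the claim.
        resp = requests.get(url, allow_redirects=True, timeout=10)
        print(f"{url}\n  -> status {resp.status_code}, lands at {resp.url}")
    except requests.RequestException as err:
        print(f"{url}\n  -> request failed: {err}")
```

A made-up citation typically either 404s or redirects somewhere that has nothing to do with the claimed thread, which is exactly what the speaker's hover revealed.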
Can I just take a quick poll? How many people say hover? Okay, I'm not alone, because I'm from New York. I say hover. I grew up, I went to yeshiva in Borough Park. I say hover. And people yell at me. I'm going to keep doing it. If you hover over the links — well, that gets complicated. We'll talk after. Okay. So, these are the links that it gave me. So, now I hover over the links. I will show you the links, and you tell me what happens if I go there. Should I go to these links? Now, I look at them very carefully, and as you might see, this was exactly what I had expected. Yes. He's already saying no. I clicked on the link. I got a very, very uncomfortable popup. Now I was stunned. I literally went back. I looked at it again. I went, "Something's very wrong. What took place?" Who sees the pattern? Good. It's the ABCs. It made them up, because that's what AI does. And I begged it, I pleaded, I said, "Please don't lie to me." And it still lied to me, because it can't do it any differently — and because I didn't understand well enough that this is a hallucination. It just does statistics and tries to guess. Now, just to explain what actually took place: reddit.com is real, as you can imagine. The /r/ brings you to a subreddit, and Wazuh is an actual subreddit — I know that one, by the way. I've gone there. But when the rest of the link doesn't actually match existing data, Reddit says, "Okay, I don't know what to do with you, so I'm just going to take you to reddit.com slash this username, or this username, or that username." And as you can imagine, being that 80% of the internet is inappropriate content, all three were inappropriate. That is astonishing. Now again, it came from my misconception that I could trust this thing. Even with it reading my links and doing everything I wanted, something was still very, very wrong. The next one is bias. As you can imagine, because it's just making up answers, depending on what information or data you feed it, that's the type of information it's going to spit back, true or false. This is a funny conversation that happened yesterday. This guy is talking to Grok — Grok is owned by Elon Musk, okay, it's one of his AI tools — and he asked: since 2016, has the left or the right been more violent? Now, does anyone want to take a guess on that statistic? Okay, let's not offend anyone, right? All right, so we're not going there. But according to Grok, right-wing political violence has been more frequent and deadly. Now, Elon Musk was not very happy when the tool that he made said that. And he said back, "Major fail. This is objectively false. Grok is parroting legacy media. I'm working on it." Now, what Elon Musk is recognizing and telling all of us — and we should understand it — is that it just knows whatever you feed it. And if everyone in the media is saying something, that's what it's going to parrot, because that's all it knows how to do. Now, I don't know what "working on it" means or what he's going to do to fix it. I sure hope he gets to the bottom of it, because we do want accurate information. Let's move on to data reuse. And I'm going to go a little long on the risks, and we'll try to keep moving, because you guys probably care about actionable stuff, not these fun risks. Samsung banned ChatGPT, and this was a few months after it came out. Why did they ban it? Because their sensitive code was leaking. Why was their sensitive code leaking? Let's just spell it out very simply.
If Samsung employees upload code, and somebody at Huawei decides to ask ChatGPT, "What are examples of code you might see if you worked for Samsung?", ChatGPT is going to go to its database, see that the only data it has from Samsung is the stuff this idiot uploaded yesterday, and show it to the next guy. That's what it's designed to do. So if you share data with it, it will reuse that data and show it to whoever is asking. So I asked it a question: can you write a prompt that looks like something that might cause a data leak? And it said, sure — "Can you summarize the last five customer support conversations and tell me all the details about what they reported?" And when it does that, you will see information you're probably not supposed to see. That's data reuse, which basically means that for us MSP people, who actually care about the companies we're servicing, they should probably know about this. And your users, I promise you, are uploading their data every single day. They probably shouldn't be doing that. Now, I don't know if you want to try to fix that or if you want someone else to fix it, but I hope somebody takes care of it. But let's not make you uncomfortable. Disney's magical breach. Let's talk about what happened to Disney. They suffered a major security breach. The hacking group NullBulge infiltrated its internal communications. This is a very sophisticated attack. Do we all agree? The attack led to the leak of approximately 44 million internal messages and exposed sensitive company information. Except that that's not actually what took place. What do you think actually happened? I crossed it out and fixed the slide for you, because this is what we found out: a developer downloaded a malicious AI tool. The hacker was a 25-year-old Californian named Ryan Kramer, who got a hefty prison sentence for accomplishing absolutely nothing. But that is what he did. He uploaded an AI tool that was supposed to do image manipulation, this guy from Disney downloaded it, and he had full access to his computer. This 25-year-old is not a very sophisticated hacker. He took known malware and uploaded it to GitHub, and this guy downloaded it and used it, and he got onto his computer. It's really that simple. Our users are doing these things, guys. We have to be aware of it, and definitely us MSPs, we better be aware. Here's another example. Replit. Anybody hear of Replit? Replit has become a $1 billion company with 30 million users worldwide. This is one of those vibe coding examples that I mentioned earlier. They built one of the first tools where you could just describe to the AI what you want to build, and it will code it for you. Okay, very, very common tool. It's owned by this guy whose name I don't know how to pronounce. I'm not going to try. Okay. Should you trust this guy and his company with your data? There's my friend. He's finally here. Thank you for coming. And that is the correct answer, because when Israel carried out strikes on Iran, this was his response: "They are the single most destabilizing force in the world." Now, let's just understand: if this guy decides that he wants to help "restabilize" the world, he just might ask his developers to run a query on every single customer in our database whose name looks like mine — Morai, which is me — or like other people in this room. Now again, I don't know who in the room agrees with him or who doesn't — they didn't ask — but this is a problem. If this guy decides to become an activist, he will take that data and try "helping" the world.
That is something we need to be aware of. So, do you want to trust these people with your information? You need to really think long and hard. Let's move on to deepfakes. Can I blow your minds? Because I thought this wasn't going to be relevant for several years. This is Brett Cassidy. He's a lawyer in London for this law firm. Can I play a clip for you? Um, where's David? Uh, can you kill the recording? He's recording the audio, and I have to kill the audio just for a few seconds — I don't have permission to broadcast this to the public web. It's fine. David, can you pause the recording for a second? I'm going to play an audio clip for you. Let's do it. We will see terrible things happening. Unfortunately, there's no way around this. We are going to see a lot of people get hurt. Hopefully, we'll help raise awareness. Let's move on. There will be new threats that we've never seen before. This guy used o3 — we're talking about ChatGPT here, one of OpenAI's models — to find a brand new CVE which the world had never seen: a remote zero-day in the Linux kernel's SMB implementation. That means a remote zero-day the world has never seen. This was, as far as he knows, the first time that a vulnerability like this was found by an AI. Now that's a big deal. He made this comment after he did it because he's a very well-known researcher, and he wanted to publicize that this is something we should know. AI has made huge leaps forward, and if you're in vulnerability research, you should start paying close attention, because this means the hackers will have many zero-days coming at us hard and fast. That is something we need to know. Um, we're going to keep going. So, yeah, that's fine. We're going to skip that, because we're running late. Lastly, business risk. I warned you there was business risk. Zillow's home-buying debacle shows how hard it is to use AI to value real estate. Does anybody know what happened with Zillow? Zillow lost a lot of money. How did they lose a lot of money? They opened up a home-buying business based on AI. They said, you know, we have all the data on these houses, all the data on the buyers, all the data on the sellers. If we start buying and selling houses, we will be the kings of the world. We don't even have to go out and see anything, because we have Zestimate, which is their proprietary way of guessing how much a house is worth. And as long as the offer is within range of that — it's usually not off by more than one or two percent — we will be able to make a lot of money. Was that a good idea? They shut down the program after eight months. They had a $34 million loss. Their stock plunged, and they announced layoffs of 25% of their employees. What did they do wrong? The AI is brilliant, and it has lots and lots of information, but you know what it can't do? It can't see that there's a crack in the foundation of the house. So, if I'm a seller, and I need to sell a house with a crack in the foundation, who is the biggest idiot that's going to buy it? These guys — they were using the most sophisticated tool in the world to try and guess how much my house is worth, but they never actually laid eyes on it. They lost a fortune. If any of you have big dreams of getting rich on AI, please think about it and make sure you're going to make money and not lose $34 million. Can we do that? All right. Now, I've been all doom and gloom. It's going to get better, I promise you. Let's talk a little bit about mitigations. What can we do? Hallucinations. There's nothing we can do. I don't think anybody here is building LLMs.
The tools will get better, and they are getting better — day by day they get astronomically, exponentially better. So that is one. Data reuse. What should you guys be doing about data reuse? And this actually is important. What should you do? If they're going to reuse your data, how do we get around that? Any ideas? Come on, we've got a lot of smart people here. Don't use the publicly available tools. You need to have an enterprise agreement with them, or a private LLM — you could download Llama, whatever you want to do, and run it internally. Do not hand over your data to a third party unless you understand who they are and what the risks are. Please don't do it. Yes, that's how we do it. We'll get around this. This is conquerable. Malicious tools. What should we do about malicious tools? Well, not installing tools like the one in the Disney story. You need to do a third-party assessment. Now, that's a very official term — I call it a third-party assessment because I work in cyber, and we do them all the time. What should you do? Just look at the CEO's posts and see what he puts on his, you know, profile. Is it OpenAI, which is owned partially by Microsoft, a multi-billion-dollar company, or is it this guy — he's a Jordanian national — and you might want to think twice: do I want to do that? It makes a difference. Do a little bit of due diligence. Learn who's behind the product. I'm telling you guys, there are so many services today that are "doing AI" that are just repackaging some tool or technology that they literally downloaded from GitHub. They have no idea how it works or what its risks are, et cetera, and they have all the access that they need to do tremendous damage. Let's not do that. Deepfakes. We have to fight with awareness. Tell your friends. Tell your grandparents. Honestly, no one should be having Instagram profiles with tons of pictures. It's just not a good idea today. They take people's kids and their voices, and they can do horrifying things to their parents. New threats. We should be using the tools. If you're in charge of coding something, you should start using the AI tools that the attackers are using, to try to pick up those threats before they do. And that's the slide that I skipped earlier — as he kind of says, if we were both doing it, then the playing field would be level: we would find the threats, they would find the threats; we would patch them, they would patch them. And, you know, slowly but surely, we could close them. He's actually excited. The guy I was listening to is Steve Gibson. Do we have any TWiT fans? Security Now. There you go. It's a great podcast. If you're on a really long drive, it works very well — it's like two and a half hours, usually, and you could do 1.5x. It works. So it's a great podcast, and his conclusion was that eventually we'll get to a point where AI will be so good it'll take out most of the big vulnerabilities, and life will get better, et cetera. Business risk. Do not overdo what you are planning. And I'll just give you an example. I came here preparing for you guys, and I asked: what could MSPs deploy today where AI can revolutionize their businesses? And I found one vendor — Auvik, I believe, though I barely remember — and I started looking at what they're doing with AI. Okay? And I promise you, they are trying so hard to overdeliver, with things like automatically reassigning tickets. Is that a good idea to hand over to an AI tomorrow? Reassigning your tickets? Should anybody be doing that? You're overdoing it.
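Circling back to the "run it internally" mitigation above: here is a minimal sketch of what querying a locally hosted model can look like, assuming Ollama is installed and serving on its default localhost:11434 port with a Llama model already pulled. The model name and prompt are illustrative; any self-hosted stack works the same way in spirit — the point is that the data never leaves your own machine.

```python
# Minimal sketch: querying a locally hosted Llama model through Ollama's HTTP
# API, so prompts and data never leave your own machine. Assumes `ollama serve`
# is running locally and a model (e.g. "llama3") has already been pulled.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

payload = {
    "model": "llama3",                      # whichever local model you pulled
    "prompt": "In one line, why do admins sometimes change default ports?",
    "stream": False,                        # return one JSON object, not a stream
}

resp = requests.post(OLLAMA_URL, json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["response"])
```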
I promise you, no AI tool that you can buy off the shelf today is ready for that. Don't do it. And don't try making a product that does that unless you really do your analysis. So, here is the question I come back to. Number one, in my opinion: "Can AI do it?" Is that the right question to be asking? That is a bad question when it comes to new tech and new technology. That's not what you should be asking. And I told you I was going to come back to IFTTT. Do you remember? I will tell you what IFTTT actually meant when it came out, what they were promising. When I was working at a certain tech company, a certain big, big place, which some of you can probably guess, they had a question. Okay, they are very involved in the retail space — could they implement IFTTT the same way Pizza Hut did? Now, what did Pizza Hut do? Pizza Hut came out with a new feature: if you buy pizza from Pizza Hut and you set up the proper IFTTT settings, when the pizza arrives at your door, your lights begin flashing. Is that cool? That sounds very cool, right? Should this company — you guys can probably imagine who it might be — should they implement this new technology? Now, I was at that time in more of an administrative role, and I had to help analyze what the costs would be for them to get the developers working on this advanced project, which was brand new at the time. What do you think? Should they do this? Anybody want to guess? Should they do this? No. Are you out of your mind? No way. Do you really think that when you get a package from that company, you want your lights to start flashing? How many people are going to take advantage of that? What was the trigger? The trigger was the UPS tracking. So you would have UPS tracking hit an API, the API is connected to this customer, which connects to these lights, and when you come home, the lights start flashing colors because your new AirPods came. How many customers are going to use that? I just asked all of you guys — only three of us are still using IFTTT. So the question is not "could it do it" — yes, you could do a lot of things — but should you do
Video Transcript
It's really nice to meet behind something to see together. We spend our day licenses behind everything. I think here everyone appreciates everyone else doing a leader, and so everyone can be knowledgeable, but that doesn't solve services. Microsoft consults how we actually can do it. We just need some guidance, best to pick up tonight. Security wants to talk about a hot topic: AI. Let's get some spices in the room. Yeah, people around. I move around a ton. All right, I really don't need the mic, but I'm fine with it or without. All right, good afternoon, good evening, everybody. Sorry, it's my second speech for today, so I'm still a little bit under the weather, we're going to call it. So, I have to tell you guys, it's a time of miracles. We all know that things are going on all over the planet, and there are wonderful things taking place. Can I please just beg your indulgence? I will share a miracle that happened that I know personally with myself. Where are you tonight? Can I do that? Do I have permission? Everybody say yes. Yes. Great. This is very interactive as a presentation, by the way. Hope you guys will take advantage. Do we hear me? Do I hear me? All right, perfect. So a miracle actually took place today, and I just wanted to share it with you all. We all know what's going on in Israel, in the Holy Land. And as it pertains to myself, I was on my way to Borough Park this morning, and I was running very, very late to a presentation, and I came 13th Avenue. I was going to park in a paid parking, but then I realized my car only has my smartphone. I didn't have a key with me, which becomes a problem when you try to park in a parking lot. Don't take your car. And a miracle actually took place, and I know of myself personally, is I found a parking spot on 13th Avenue. Just happened to me. I know it. True story. But it really is a pleasure to be back in Borough Park. It's been quite some time since I was here. I very much enjoyed walking down 13th Avenue for the first time in a long time, enjoying the. It's such a breath of fresh air. Not literally, but figuratively, it's wonderful to be back in Borough Park. Now tonight's topic is about AI, and it is very, very fascinating because this is something that we are all experiencing in real time, and we need to try to figure out where do we fall in this process, and the answer to that is actually not very simple. Believe it or not, AI will change everything, in my opinion, and am I correct? So let's dive in, and I will show you a bunch of tech examples that may or may not have lived up to their expectations. You guys can give me your opinion, and I'm hoping to hear some of them. I know some of you are very passionate, so you're all sitting there like a bunch of lemmings, which, you know, is unusual for the IT guys, but we'll get you all going, just give me some time. So, number one, did any of these technologies really change the world as we thought they would? Anybody ever hear of this one? Did this change anyone's life? To change and how little these all accomplished for us? Cortana. I remember the first time that I turned on a laptop. I was not aware that Microsoft turned on the audio, and it was 3:00 in the morning. I'm telling you, I almost had a heart attack. It's a true story. Terrible. Anyway, so that's Cortana for those of us who remember it. Self-driving cars. Did anybody arrive here in a self-driving car, which we were promised would be complete by 2020? 2020, we were supposed to be all sitting in self-driving cars. 
Now, they'll tell you we're up to 90%. Can I ask you a question? Did anybody ever use a bottle of bleach? Because on the bleach bottle, it says we cover 99.999999%, but they can't get that last 1%, because that's the hardest one. As the percentage goes up, it gets harder and harder, and we might be very, very far away. I don't know, but so far, my life has not changed all that much, except watching some very entertaining footage of California and cars burning. Other than that, Waymo hasn't done all that much for me. VR, Google Glass. Please raise your hand if you've used one of those in the past six months. We got one and two. And we are the most technical people on the planet. And yet nobody got anything from it. And Apple's putting out $5,000 devices, expecting this to make the world change. Now, again, none of these things are going to go away as if they never existed, or they might, but they didn't change the world as people had expected. Tech and health wearables. How many of us are wearing tech and health wearables? And how much of your life has this changed? Because they said when they were coming out that this was going to change everything. We will be connected, etc. You guys know where I'm going. Anybody know what this acronym stands for? Okay, have any of us actually used it in the past few years? We got three, four, five. All right, so we are nerds. We can all admit it. We're nerds, but nobody else besides us has actually used it. And I can't hear. Why? Why? Why do you say that, with all these devices? I'm going to come back to this soon. So IAFP is not an end for its own. Okay, fair enough. And it's a protocol, I get it. Fine, we'll come back to it. And I'll tell you what I actually mean. Okay, because when this came out, they didn't demonstrate or advertise it the way it is now. And I'm good. We're getting some passion, we're getting people excited. I'm on the right track. All right. Smart homes. We're going to change everything. And companies are spending billions of dollars going to smart homes. And our homes are a little bit smarter than they were when the internet was first invented, but they're not that different. And I don't know if you take a poll of all the houses in Borough Park, but I'm very curious to hear what the actual statistics are or how many homes actually have so much smart equipment that our homes are much smarter than they used to be. I don't think it's the case. My fridge has not ordered me milk in a very, very long time. Am I the only one? Drone deliveries. Anybody get one of those recently? Drone delivery. Do you get one? You got a drone delivered. All right. And how about blockchain and NFTs? Blockchain NFTs. Again, billions and billions of dollars pouring into these technologies. And finally, this is my favorite one. Can you all take a breath for this one? Has anybody here used the Metaverse recently? Because I promise your grandchildren will have no idea what you were talking about and what the Metaverse was, why people thought that that was going to change everything. And, you know, to the great disappointment of our friend Mark Zuckerberg, that is not something that changed. So, is AI any different? Is AI any different? I got a few no's, so that's fine. Let's debate. AI is vastly different. Now, I thought like everybody else, when I first saw it, I was like, "Oh, it's a cool gimmick. Somebody put up like an Upsherin poem." That was the first time I'd ever heard that there was such a thing as AI. And I read the poem, I'm like, "Really? 
That's what I'm excited about?" They took the internet, they jumbled it up a little bit, and made a poem. I was not impressed. Let's see. I venture that actually AI is vastly different than anything else the world has seen, probably since the advent of the internet. And when the internet was invented, nobody understood what it was going to do. AI, I think, is much, much, much different. Number one, writing. I would venture to say that there is nothing that you have read, unless it was a holy book and it was Shabbos. There's almost nothing that you have read that was actually written solely by a human being in a very, very long time. Okay? I am not much of a LinkedIn influencer, but the once in six months that I put up a post these days, it's still being edited by AI, and everything that you're reading today, all the news articles and all the statistics sometimes are made up by AI. And there are some court cases where the actual law notes were all made up by AI. So there you go. I think writing, almost nothing is by research. Tremendous amounts of research. You can find things using AI that you cannot accomplish any other way. When I use Google now, I am shocked by how crude it is. It's just like, this is not accomplishing anything that I need. It doesn't understand anything I'm talking about, and it's giving me all the wrong information. That is astonishing. Anomaly detection. I've used it to find specific logs in massive files. Show me the one thing that's different in this entire file, and it will pull out incredible information. Warehousing. Amazon just announced they're preparing to lay people off because AI is taking over for a lot of functions that they need. Chat and shopping bots. I was on the phone, and I was chatting with people. I'm telling you, I know this business pretty well. 20 minutes in, I was not sure if this was a human or if it was an AI bot. In the end, it was just a guy with a very monotone voice. And that's okay. But practically, that's shocking. I should know this, and I couldn't figure it out for a long time. It's getting very scary. Image and video creation, all of Hollywood is trembling because they should go out of business or vastly change very soon. Code writing is not the same as it was. Can I tell you guys what happened to me three days ago? I was talking to a plumber, okay? And he's a Hasid with nice peyos, wearing his big day kahuna, okay? He was dressed like a plumber, and he comes over. He says, "Did you know that I built an app?" And I said, "What are you talking about?" And he pulls out his smartphone, and he shows me, "I built an app. I'm, you know, he has to manage invoices and do projections and calculations." He did code vining, which is a new term that I hadn't heard as of a week ago. And which is basically just describe to AI what you want. And he built an app. That is astonishing. The world is not the same as it was just a short time ago. Entertainment, like we said, is very different. Let's move on. Some more nefarious type things, shall we? Weapons will be very different. And as scary as that is, for good or for bad, that is very different. Wars are not being fought the same as they were just a year ago. And if you don't believe me, I'm about to show you. And lastly, agriculture. Do you think I'm exaggerating? Can I prove it? Let's do one at a time. Has anybody here heard of Palantir? I had almost never heard of them. Just this year, their stock surged 452%. Okay, which doesn't mean much if it's a penny stock, right? But let's keep going. 
Two years, over two years, it's 747%. In three years, it's 1,567%. I am not a stock trader, but I am told that that is very impressive. Can we all agree? What do they actually do? I did not know until I actually looked it up. They are known for advanced data analytics. It's been quietly powering intelligence and battlefield logistics. Was I exaggerating? Is warfare the same as it was? Since October, it has supplied battle-grade battlefield-grade AI tools to Israel's Ministry of Defense. So, if you're wondering how all these miraculous things are taking place, I heard a statistic that Israel had over, I believe, it was 20,000 analysts just looking at Lebanon and Hezbollah. Now, you could have 20,000 hours from today till tomorrow, but until you have an AI machine that can figure out what to focus on, it's very, very difficult to try and build any type of actual real-time action plans out of 20,000 people. I think, I'm not in the industry, but I do believe that to be the case. The Department of Defense expanded its contract ceiling from 480 million to 1.2 billion. Okay? And this is not all government waste. And DOD, I don't think, has cut this down. So they do believe that this is a very, very big deal. That is number one. So that is war. I believe war is not the same as it was. How about this? Anybody know what I'm looking at? And this is not a clip from Star Wars. This is indeed agriculture. I was in Israel not too long ago, and I was on a kibbutz, and in this farm, this guy told me, he said, "Did you know that currently, as you see the farmers were walking around, planting pollen onto each plant individually?" And he told me, "Do you know why they're doing that?" He said, "Because the beehives have collapsed. This is a known problem going on for a while, and we have to manually paint pollen on each individual plant that needs it." But he said, "Did you know that as of next year, we don't have to do that anymore? Because we have drone technology, which is coming for the next season. They will be able to detect by themselves which plant needs pollination, and it will go and paint that automatically." Is the world the same as it was a few months ago? I don't believe it is. Now, what are some of the risks? I think there are tremendous risks with AI. And I'm going to start with the risks because that's kind of what I do. So, I'm going to disappoint all of you because you probably all want action plans how you can make money tomorrow. First, I'm going to start off by telling you all the things you should not be doing and why you shouldn't be doing it. And I'll try to fix that in the end. Can we do that? Let me try. People don't understand how it works or its limitations. They will get very, very hurt. Okay. I tried explaining that to the plumber before he left, and, you know, I think I kind of dashed his hopes and dreams of becoming a millionaire. So you might want to have, if you're a therapist, he's looking for somebody to talk to. Anyways, so here are some of the risks. Number one is people don't seem to know, and he did not know, that LLMs don't understand anything. They have no idea what it is they're looking at. They are literally performing some type of mathematical equation and guessing what it is. You probably want to hear that's going to cause a lot of problems, as we will soon see. They work statistically, not logically. Meaning there's no logic, there's no emotion, there's nothing there. It's all actually just statistics. 
And finally, the more data they are trained on, the better their predictions get. Meaning, you can take an LLM, and it could be the best one on the planet. If it doesn't have any data, it cannot produce anything. Now, what does that mean? If they need data, remember how Google lived on data because that's what it took to make sales. These guys live on data way more than Google had to. They need the data. That is a big problem for us. And let's understand how that plays out. So here are the actual risks that come. Number one is hallucinations and bias. That is something very common. I'll do one demo of it, and you can draw your own conclusions. Data reuse. Like we said, they need your data. And what happens if they share that data with other people? Very, very common. That can happen. Malicious tools are out there. The hackers know that everyone's grabbing and feasting on these specialized new tools, just barely even reading the description, installing, and doing things. Big problem. Deepfakes are a problem. New threats. AI is finding totally new threats that we have never seen because they're so good at analytics. Those threats are coming hard and fast. And finally, there's actual business risk. Before we all go running back to our workstations and trying to implement all the wonderful ideas we have, you need to know the serious business risk with a lot of these things. You can get excited about a product or a capability or a tool or a function. And it can cost you a lot of money to try to implement it. People come to me, some say, "I have a great new idea. It's going to be the next Google." And I respond and said, "Did you know that I consider a good idea to be worth how much? How much do I consider it to be worth? You might want to guess $250,000. That's the starting point because that's how much it will cost you just to start getting to work on your project. After that, possibly it'll make money, but that's how much just the idea is going to be worth. So, we'll talk about a little bit about those business risks. Let's start with hallucination. This is something that happened to me. I'm telling you guys, I almost fell out of my seat. Okay? I was working with a tool. It's a common cybersecurity tool known as Wazuh. You can look it up. It says, um, login alerting, and I wanted to know what would happen if I change the default ports on that tool. Now, I gave this presentation a few times since I demonstrated this. I'm very thrilled that it's for the first time I can talk to people who actually know what I'm talking about. So, I'm very glad you guys are with me here tonight. So, we're going to try changing the ports. I told ChatGPT, "Can I change it?" And it said yes. It gave me very clear instructions how to change it. And it seemed too good to be true. So, I said, "Okay, here's my idea. Look through Reddit, see if anybody changed their default ports, and what their experience was." Okay, so I said I'll get the hard data, and users on Reddit should actually know. After reviewing discussions on Reddit, several users blah, blah, blah, and it gave me key takeaways. Whether it spit out once again the information same as before. So at this point, I'm very suspicious. I said, "Okay, give me the links so I can check them." Can I show you guys the links? It spit out the following three links. Now, these links, as you can see, are exactly what I was looking for. These are perfect. Now, I work in cybersecurity. What did I do before I clicked on those links? I hovered over the link. 
Can I just take a quick poll? How many people say hover? Okay, I'm not alone because I'm from New York. I say hover. I grew up, I went to yeshiva in Borough Park. I say hover. And people yell at me. I'm going to keep doing it. If you hover over the links, I will, why don't that's complicated. We'll talk after. Okay. So, these are the links that it gave me. So, now I hover over the links. I will show you the links, and you tell me what happens if I go there. Should I go to these links? Now, I look at them very carefully, and as you might see, this was exactly what I had expected. Yes. He's already saying no. I clicked on the link. I got a very, very uncomfortable popup. Now I was stunned. I literally, I went back. I looked at it again. I went, "Something's very wrong. What took place?" Who sees the pattern? Good. It's the ABCs. It made them up because that's what AI does. And I begged them, pleaded, and said, "Please don't lie to me." And it still lies to you. It's that it can't do it any differently because I didn't understand well enough that this is a hallucination. It just does statistics and tries to guess what it was. Now, just to explain what actually took place, reddit.com is real, as you can imagine. R brings you to a thread. Wazuh is the actual, and I know this thread, by the way. I've gone there. When it doesn't actually match existing data, Reddit says, "Okay, I don't know what to do with you, so I'm just going to take you to Reddit for slash this username, or this username, or that username." And as you can imagine, being that 80% of the internet is inappropriate content, all three were inappropriate. That is astonishing. Now again, it came because my misconception that I could trust this thing, even reading my links and doing everything I wanted, still something was very, very wrong. The next one is bias. As you can imagine, because it's just making up answers, depending on what information or data you feed it, that's the type of information it's going to spit back. True or false. This is a funny conversation that happened yesterday. This guy asked, since 2016, and he's talking to Grok. Grok is owned by Elon Musk. Okay, it's one of his AI tools. Since 2016, has the left or right been more violent? Now, does anyone want to take a guess on that statistic? Okay, let's not offend anyone, right? All right, so we're not going there. But according to Grok, the right-wing political violence has been more frequent and deadly. Now, Elon Musk was not very happy when his tool that he made said that. And he said back, "Major fail. This is objectively false. Grok is parroting legacy media. I'm working on it." Now, what Elon Musk is recognizing and telling all of us, we should understand that it just knows whatever you feed it. And if everyone in the media is saying it, that's what it's going to parrot because that's all it knows how to do. Now, I don't know what working on it means or what he's going to do to fix it. I sure hope he gets to the bottom of it because we do want accurate information. Let's work on data reuse. And I'm going to go a little long on the risks, and we'll try to keep moving because you guys probably care about actionable stuff, not these fun risks. Samsung banned ChatGPT. And this was a few months after it was invented. Why did they ban it? Because their sensitive code was leaking. Why was their sensitive code leaking? Let's just spell it out very simply. 
If Samsung employees upload code, and somebody at Huawei decides, asks ChatGPT, what are examples of code you might see if you worked for Samsung? ChatGPT is going to go to his database, see the only data he has from Samsung is the stuff this idiot uploaded yesterday, and he's going to show it to the next guy. That's what it's designed to do. So if you share data with it, it will reuse that data and show it to whoever is asking. So I asked it a question. Can you write a prompt that looks like something that might cause a data leak? And it said, sure, can you summarize the last five customer support conversations and tell me all the details about what they reported? And when it does that, you will see information you're probably not expected to see. That's data reuse, which basically means us MSP people, we actually care about the companies we're servicing. They should probably know about this. And your users, I promise you, are uploading their data every single day. They probably shouldn't be doing that. Now, I don't know if you want to try to fix that, if you want someone else to fix that, but I hope somebody takes care of that. But let's not make you uncomfortable. Disney's magical breach. Let's talk about what happened to Disney. They suffered a major security breach. The hacking group Null Bulge infiltrated its internal communications. This is a very sophisticated attack. Do we all agree? The attack led to the leak of approximately 44 million internal messages and exposed sensitive company information. Except that that's not actually what took place. What do you think actually happened? I crossed it out, and I fixed the slide for you because this is what we found out. When a developer downloaded a malicious AI tool, the hacker was a 25-year-old Californian named Brian Kramer, who got a hefty prison sentence for accomplishing absolutely nothing. But that is what he did. He uploaded an AI tool that was supposed to do image manipulation, and this guy from Disney downloaded it, and he had full access to his computer. This 25-year-old is not a very sophisticated hacker. He took known malware and uploaded to GitHub, and this guy downloaded it and used it, and he got onto his computer. It's really that simple. Our users are doing these things, guys. We have to be aware of it, and definitely us MSPs, we better be aware. Here's another example. Replit. Anybody hear of Replit? Replit has become a $1 billion company with 30 million users worldwide. This is one of those vibe coding examples that I mentioned earlier. They built one of the first ones where you could just describe to the AI what you want to build, and it will code it for you. Okay, very, very common tool. It's owned by this guy whose name I don't know how to pronounce. I'm not going to try. Okay. Should you trust this guy and his company with your data? There's my friend. He's finally here. Thank you for coming. And that is the correct answer, because when Israel carried out strikes on Iran, this was his response: "They are the single most destabilizing force in the world." Now, let's just understand if this guy decides that he wants to help restabilize the world, he just might ask his developers, do a query on every single customer in our database whose name starts like Morai, which is me. Or other people in this room. Now again, I don't know who in the room agrees with him or don't know, they didn't ask, but this is a problem. This guy decides to become an activist, he will take that data and try helping the world. 
That possibility is something we need to be aware of. Do you want to trust these people with your information? You need to think long and hard.

Let's move on to deepfakes. Can I blow your minds? Because I thought this wasn't going to be relevant for several years. This is Brett Cassidy, a lawyer in London at this law firm. Can I play a clip for you? Where's David? He's recording the audio, and I have to kill the audio just for a few seconds, because I don't have permission to broadcast this clip to the public web. David, can you pause the audio for a second? I'm going to play an audio clip for you. Let's do it. We will see terrible things happening. Unfortunately, there's no way around this; we are going to see a lot of people get hurt. Hopefully, we'll help raise awareness. Let's move on.

There will be new threats we've never seen before. This researcher used o3, one of OpenAI's ChatGPT models, to find a brand-new CVE the world had never seen: a remote zero-day in the Linux kernel's SMB implementation. That means a remote zero-day the world has never seen. As far as he knows, this was the first time a vulnerability like that was found by an AI, and that's a big deal. He made this comment after he did it because he's a very well-known researcher and he wanted to publicize it: the technology has made huge leaps forward, and if you're in vulnerability research, you should start paying close attention, because it means the hackers will have many zero-days coming at us hard and fast. That is something we need to know. We're going to skip the next one because we're running late.

Lastly, business risk. I warned you there was business risk. Zillow's home-buying debacle shows how hard it is to use AI to value real estate. Does anybody know what happened with Zillow? Zillow lost a lot of money. How did they lose a lot of money? They opened a home-buying business based on AI. They said: we have all the data on these houses, all the data on the buyers, all the data on the sellers. If we start buying and selling houses, we will be the kings of the world. We don't even have to go out and look, because we have Zestimate, their proprietary way of estimating how much a house is worth, and as long as our offer is within range of that, and it's usually not off by more than one or two percent, we will make a lot of money. Was that a good idea? They shut down the program after eight months with a $34 million loss, their stock plunged, and they announced layoffs of 25% of their employees. What did they do wrong? The AI is brilliant and it has lots and lots of information, but you know what it can't do? It can't see the crack in the foundation of the house. So if I'm a seller and I need to sell a house with a crack in the foundation, who is the biggest idiot that's going to buy it? These guys, who were using the most sophisticated tool in the world to guess how much my house is worth but never actually laid eyes on it. They lost a fortune. If any of you have big dreams of getting rich on AI, please think it through and make sure you're going to make money and not lose $34 million. Can we do that? All right.

Now, I've been all doom and gloom. It's going to get better, I promise you. Let's talk a little bit about mitigations. What can we do? Hallucinations: there's nothing we can do ourselves, since I don't think anybody here is building LLMs.
The tools will get better, and they are getting better; day by day they are astronomically, exponentially better. So that's one. Data reuse: what should you be doing about data reuse? This one actually matters. If they're going to reuse your data, how do we get around that? Any ideas? Come on, we have a lot of smart people here. Don't use the publicly available tools. You need an enterprise agreement with the vendor, or a private LLM. You can download Llama, whatever you want, and run it internally. Do not hand your data to a third party unless you understand who they are and what the risks are. Please don't do it. Yes, that's how we do it. This one is conquerable. (There's a minimal sketch of calling a locally hosted model at the end of this section.)

Malicious tools: what should we do about malicious tools? Not installing tools like the one that got Disney. You need to do a third-party assessment. That's a very official term; I call it that because I work in cyber and we do them all the time. In practice, just look at the CEO's profile and see what he posts. Is it OpenAI, partially owned by Microsoft, a multi-billion-dollar company, or is it this guy, a Jordanian national, where you might want to think twice about whether you want to do that? It makes a difference. Do a little bit of due diligence; learn who's behind the product. I'm telling you, there are so many services today doing "AI" that are just repackaging some tool or technology they literally downloaded from GitHub. They have no idea how it works or what its risks are, and they have all the access they need to do tremendous damage. Let's not do that.

Deepfakes: we have to fight those with awareness. Tell your friends, tell your grandparents. Honestly, nobody should have Instagram profiles with tons of pictures; it's just not a good idea today. Attackers take people's kids' pictures and voices, and they can do horrifying things to the parents. New threats: we should be using tools. If you're in charge of coding something, start using the same AI tools the attackers are using, to pick up those threats before they do. That's the slide I skipped earlier; as he says, if both sides are doing it, the playing field stays level. We find the threats, they find the threats, we patch them, they patch them, and slowly but surely we close them. He's actually excited about it. The guy I was listening to is Steve Gibson. Any TWiT fans? Security Now, there you go. It's a great podcast; if you're on a really long drive it works very well, it's usually about two and a half hours, and you can play it at 1.5x. His conclusion was that eventually AI will get so good it will take out most of the big vulnerabilities, and life will get better.

Business risk: do not overdo what you are planning, and I'll give you an example. I came here preparing for you guys and asked, what could MSPs deploy today where AI would revolutionize their business? And I found one vendor, Auvik, whose name I wasn't even going to mention because I barely remembered it, and I started looking at what they're doing with AI. I promise you, they are trying so hard to overdeliver, with things like automatically reassigning tickets. Is that a good idea to hand over to an AI tomorrow? Reassigning your tickets? Should anybody be doing that? That's overdoing it.
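And here is the promised sketch of the "run it internally" idea. It assumes you already have something like Ollama serving a Llama-family model on its default local port; the model name and the prompt are placeholders, and the only point being made is that the prompt and the answer never leave your own machine.

import json
import urllib.request

# Talk to a locally hosted model instead of a public service, so prompts and
# data stay inside your own network. Assumes an Ollama install listening on
# its default port with a model already pulled; the URL and model name are
# placeholders for whatever local runtime you actually trust.
LOCAL_URL = "http://localhost:11434/api/generate"

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    request = urllib.request.Request(
        LOCAL_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["response"]

if __name__ == "__main__":
    print(ask_local_model("Summarize our internal password-reset runbook in three bullets."))

Swap in whatever private endpoint your enterprise agreement or internal deployment gives you; the shape stays the same, a local URL instead of a public one.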
I promise you, no AI tool you can buy off the shelf today is ready to reassign your tickets. Don't do that, and don't try building a product that does it unless you really do your analysis. So, here is the question I keep coming back to. What is my opinion? "Can AI do it?" Is that the right question to be asking? No, that is the wrong question when it comes to new technology. And I told you I was going to come back to IFTTT, remember? I'll tell you what IFTTT actually meant when it came out, what they were promising. When I was working at a certain tech company, a certain big, big place, which some of you can probably guess, they had a question. They are very involved in the retail space: could they implement IFTTT the same way Pizza Hut did? What did Pizza Hut do? Pizza Hut came out with a new feature: if you buy a pizza from Pizza Hut and set up the proper IFTTT applet, then when the pizza arrives at your door, your lights begin flashing. Does that sound cool? Very cool, right? Should this company, and you're probably imagining who it might be, implement this new technology? At the time I was in more of an administrative role, and I had to help analyze what it would cost to get the developers working on this advanced project, which was brand new at the time. What do you think? Should they do it? Anybody want to guess? No. Are you out of your mind? No way. Do you really think that when you get a package from that company you want your lights to start flashing? How many people are going to take advantage of that? What was the trigger? The trigger was the UPS tracking: the UPS tracking event hits an API, the API is connected to this customer, which connects to these lights, the package shows up, and the lights start flashing colors because your new AirPods came. (There's a rough sketch of that plumbing below.) How many customers are going to use that? I just asked all of you, and only about three of us are still using IFTTT. So the question is not "could it do it?" Yes, it could do a lot of things. The question is, should you do it?
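For anyone curious how that delivery-triggers-the-lights plumbing hangs together, here is a rough sketch and nothing more. The carrier webhook format, the lights endpoint, and the field names are all assumptions made up for illustration; real smart bulbs and real tracking APIs each have their own interfaces.

from http.server import BaseHTTPRequestHandler, HTTPServer
import json
import urllib.request

# Hypothetical smart-light endpoint; real bulbs (Hue, LIFX, and so on) each
# have their own APIs, so treat this URL and payload as placeholders.
LIGHTS_URL = "http://lights.local/api/flash"

def flash_lights(color: str = "green") -> None:
    # Tell the (made-up) lights to flash when a delivery lands.
    body = json.dumps({"effect": "flash", "color": color}).encode()
    request = urllib.request.Request(
        LIGHTS_URL, data=body, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(request)

class DeliveryWebhook(BaseHTTPRequestHandler):
    # Receives a carrier's "delivered" notification and triggers the lights.
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")
        if event.get("status") == "delivered":  # field name is an assumption
            flash_lights()
        self.send_response(204)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), DeliveryWebhook).serve_forever()

Two HTTP calls, which is exactly the point: the hard part was never whether it could be built, it was whether anyone would actually use it.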
Inside Look
[ OUR SPONSOR ]
Proudly Sponsored By
ONIT Cloud Services
Support Their Work
About the company:
Transform your business using Microsoft technologies to become digitally ready with all legacy applications. Starting from cloud readiness assessment and cloud strategy development to infrastructure setup, architecture redesign, migration, and maintenance, ONIT's specialists will support you ...

Precise talent for your team's needs
© Copyright 2024, All Rights Reserved



