
AJ and Robert explore the shadowy side of AI — how some users are becoming addicted, confiding in it like a personal therapist and then gaining confidence to take risky actions in their lives and at work. They discuss what this could mean for insider threats and the broader impact on security.

About AJ Nash

A.J. Nash is an intelligence strategist and public speaker focused on building intelligence-driven security programs. Applying his 19+ years of experience in the U.S. Intelligence Community, A.J. is often asked to contribute to traditional and social media discussions on intelligence, security, and leadership, and is frequently invited as a keynote speaker at conferences worldwide.

A.J. holds an A.A.S. in Communications Applications Technology, a B.S. in Liberal Studies, and both a Graduate Certificate in Servant Leadership and M.A. in Organizational Leadership.

Check out his LinkedIn!

About Robert Vamosi

Robert Vamosi, CISSP, is the creator and host of the Error Code Podcast, which is focused on the Internet of Things, Operational Technology, and Industrial Control System security. He’s an award-winning journalist, co-author with Kevin Mitnick of The Art of Invisibility: The World's Most Famous Hacker Teaches You How to Be Safe in the Age of Big Brother and Big Data, and author of When Gadgets Betray Us: The Dark Side of Our Infatuation With New Technologies.

Check out his LinkedIn!

AJ Nash (00:01.892)
Hi there, and welcome to Needlestack. I'm one of your hosts, AJ Nash, and I'm joined by... That's right. And today we're going to have a really interesting conversation. You'll notice it's just the two of us; there's no third party, and that is intentional. Today we want to have an interesting conversation about, well, it'll be surprising, I'm sure, for a lot of people: we're going to talk about AI. I know you haven't heard enough about it. Nobody's talking about it at all, so it's a secret, right? But we're going to have a really cool conversation about AI. We're going to talk about

Robert Vamosi (00:07.82)
Robert Vamosi.

AJ Nash (00:30.276)
some things people don't talk about with AI. We're going to talk about some of the benefits, some of the risks, some serious concerns that I think have arisen and are just starting to get talked about now. So Robert and I had some ideas on it that we wanted to kick around together. So I think it's going to be a really interesting conversation. What do you think?

Robert Vamosi (00:45.324)
Well, yeah. And of course, there's always a dark side to new technology, which has been a theme throughout my career. And it's: how do we deal with that? Are we thinking ahead? Should we be thinking ahead? So AJ and I are going to bring up some news that's been happening lately, and I think you'll find it very interesting that this could be an interesting backdoor to security threats in the future.

AJ Nash (01:14.498)
Yeah, exactly.

Robert Vamosi (01:14.562)
Let's kick it off, though, by laying the foundation. We're not here to bash AI. We're not here to disparage it, but it's a new tool. And with new tools, you've got to go in with your eyes wide open. And we're not just talking about OpenAI, which is probably the most popular one at something like 500 million users every month. There are others. There's Claude, Perplexity, Grok, et cetera. So we're going to talk about the benefits.

I'm using it every day. What about you?

AJ Nash (01:46.628)
Oh, absolutely. I use it every day. I mean, multiple AIs, for that matter. I use OpenAI, like you, I use Claude, I use Perplexity. You know, there's NotebookLM; I use that a little bit, I'm still playing with that. There's Make, there's something called Cursor for folks who are doing more coding-type stuff. And there's more coming all the time. I'm sure there's a bunch I'm not even talking about. I don't use Grok, it just hasn't been one that's come up. Gemini has some very cool things. There's something called HeyGen that I just started playing with. If you've never played with that, it'll freak you out. You can do some really cool video

things with that that are very interesting and productive, but can also be, you know, scary or fun or horrifying or whatever you want, right? So yeah, I use it every day. And I'm sorry, you're right, we're not here to bash this; there are just concerns, right? This is a very powerful new technology that has spread rapidly, and I'm not certain... well, actually, I'm not going to say I'm not certain, I am certain that people aren't prepared for this. This is a technology that's gotten out into the wild, and a whole lot of people have a lot of access to things that they don't

really, necessarily, understand how they operate. Because frankly, some of the people developing these things still don't understand how they operate. They're learning as they go. They release a new model and they go, we didn't know it would do this, and they have to change that. And it's subjecting all of us to a bit of an experiment along the way that we're agreeing to do. We're involved and we want to do it. But there are a lot of pros to it, certainly. I mean, from a positive standpoint, there's a lot of productivity you can get out of it. It can be great for helping you draft things. It can be great for kicking ideas around.

You know, hey, I need some new blog concepts, give me some ideas; or you want to do some competitive analysis; there's a lot of very interesting stuff. Good research. Hell, I've got one bot that just gives me the sports scores every day. I've got a custom sports bot, so I don't have to go to all the websites and look things up; I can just have it dumped in front of me. You know, you can do all sorts of fun stuff with them. What are you using yours for, man?

Robert Vamosi (03:29.838)
Well, like you, with drafting, it comes down to how good my prompt is. And oftentimes I miss the mark, and so I'm rewriting stuff because it gets it wrong. I mean, I think back to the early days of computing when the phrase was garbage in, garbage out: if you ask the wrong query, you're going to get the wrong response. So there's that as a caution. But in terms of pattern matching, in terms of summarizing

AJ Nash (03:37.102)
Mm-hmm.

AJ Nash (03:47.588)
still true.

Robert Vamosi (03:58.03)
a really long article and saying, what are the salient points around this topic? It can really jumpstart an article, a story, anything that I'm working on, just by having that distillation there in front of me, because that could take a while if I'm going at it with a highlighter and marking things up. It's just so much easier. So I do rely on it for that. But in terms of what I'm

analyzing, that's me. And what the final product is, that's me. Because I end up rewriting almost everything. But again, it's a great first draft.
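
For anyone curious what this "distill a long article into salient points" workflow looks like in practice, here is a minimal sketch using the OpenAI Python SDK as one example client. The model name, file name, and the wording of the instructions are placeholders chosen for illustration, not anything the hosts endorse; other providers' SDKs follow the same pattern of putting the "shape" of the output into the system message before the content goes in.

```python
# A minimal sketch of the "distill a long article" workflow described above,
# using the OpenAI Python SDK as one example client. Model name, file name,
# and instruction wording are placeholders, not recommendations.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

article_text = open("long_article.txt", encoding="utf-8").read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat-capable model works
    messages=[
        {
            "role": "system",
            # The "shape" instructions: telling the model what the final
            # result should look like before it ever sees the article.
            "content": (
                "Summarize the user's article as 5 bullet points of salient "
                "facts. Quote or cite only text that appears in the article; "
                "if a point is uncertain, say so rather than inventing detail."
            ),
        },
        {"role": "user", "content": article_text},
    ],
)

# A first draft to verify and rewrite, not a final product.
print(response.choices[0].message.content)
```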

AJ Nash (04:37.444)
Mm-hmm. Yeah, I think it's a good teammate. You know, that's how I've referred to it. I've actually been playing with these things for a while. Not as long as a lot of people, but I've crash-coursed over the last couple of months here. So I have, I don't know, eight or ten custom bots, and they all have names; I've named them like people. Now I'm actually going to start pulling some of that back. I did it at first and I've started pulling some of that back. But yeah, I name them sometimes because they become teammates. But you have to be realistic about what your teammate can accomplish, right? So I've learned, you know, personally, at least for what I'm doing, I've

Robert Vamosi (04:50.252)
Right, same.

AJ Nash (05:06.5)
actually had one bot that did a bunch of stuff, and it didn't do it very well. Well, it turns out it was too many things. So I broke it down and I've got like five bots doing what the one was doing, and they each do a component. Now it does it much better. So you learn how to work with these things. But you know, I'm concerned, because the flip side is these things can produce stuff incredibly fast that looks and smells, at least at first sight, pretty good sometimes. You know, hey, I've got a paper to write, give it some ideas, and boom, there's nine pages. And it's like, wow, it looks good, it sounds good.

And then you start reading through it and you go, maybe this isn't all real. And then you check the sources and none of them are real. And you start realizing it looks and smells good, but you sure wouldn't want to hand it to anybody and call it your work. And so I think you make a great point: it's great for first drafts. I love kicking ideas around. It's great for QC and stuff, by the way. I mean, it'll edit for you nicely. It'll find, you know, errors. Hey, my citations, can you go through those? It can do that, even though it'll give you fake ones. If you ask them,

most of the AIs can check them for you, at least. It's good for formatting and things like that, but you can't just offload your work to it. And I'm concerned because we're seeing people offload a lot, frankly, to AI very quickly.

Robert Vamosi (06:16.694)
Right. It's kind of like, I can do more work in a short amount of time. Well, no, because if it comes back as garbage, you're putting in probably the same amount of time, if not more, correcting all of that garbage. It goes back to training as well. So it's not just the prompt that you put in; when you create a project, you can say, I want this project to have this shape and the final result to look this way. And it will do that for you. But again, if you don't set that up,

AJ Nash (06:40.761)
Mm-hmm.

Robert Vamosi (06:45.234)
you can get anything back. Whatever was on its mind at that moment is the result you're going to get. So there's a bit of back and forth, and so we're kind of in this training period right now. But you did open the door to something, and that is, we're sharing a little too much with these. We're leaning on them as our interns, as our assistants, a little too much, as though they were human and could think rationally. And this is where I

AJ Nash (06:47.044)
you

Robert Vamosi (07:12.162)
have trouble with the whole idea of AI, because they're not reasoning systems, they're large language models. They're basically looking at the weight of a word and comparing it to other words and saying, these are the best possible words for what you've asked me to do. And that's where you get the hallucinations. That's where it's, I need a citation, but I don't have one, so I'll make one up. And it's going to look like the Chicago Manual of Style says it should look, and it's very convincing, except when you go look, there's nothing there.
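
To make that "weight of a word" point concrete, here is a toy sketch of next-word selection. The tiny probability table is invented purely for illustration; real models score an entire vocabulary with a neural network at every step and usually sample rather than always taking the top word, but the mechanism, and the reason a fabricated citation can look perfectly formatted, is the same.

```python
# A toy illustration: a language model doesn't look facts up, it repeatedly
# picks a high-weight next token given the words so far. This tiny table is
# invented for the example; real models score an entire vocabulary at each step.
toy_next_word_weights = {
    ("according", "to"): {"Smith": 0.4, "the": 0.3, "Jones": 0.2, "NIST": 0.1},
    ("to", "Smith"): {"(2021),": 0.6, "(2019),": 0.4},
}

def pick_next(context, table):
    """Return the highest-weight continuation for the last two words."""
    candidates = table[context]
    return max(candidates, key=candidates.get)

generated = ["according", "to"]
while tuple(generated[-2:]) in toy_next_word_weights:
    generated.append(pick_next(tuple(generated[-2:]), toy_next_word_weights))

# Prints "according to Smith (2021)," -- fluent, correctly formatted, and
# completely unverified, which is exactly how a hallucinated citation happens.
print(" ".join(generated))
```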

AJ Nash (07:26.361)
Mm-hmm.

AJ Nash (07:35.33)
Mm-hmm, mm-hmm.

Robert Vamosi (07:39.242)
So if you're a government employee, it's a great way to satisfy, hey, Congress, I produced this thing. But on the other hand, it's like, it's garbage. Throw it out.

AJ Nash (07:48.344)
Yeah, no, it's true. If anybody fact-checks things, you run into some problems, right? And I think, you know, we're seeing some of that. I think there's a challenge. So a couple of things come to mind. I mean, the first one is human nature, right? So I've often told people, you know, listen, people are by nature lazy, and that's insulting, so maybe I won't go as far as lazy, but we're designed to look for efficient solutions. I mean, that's evolution. It's how it works. That's why we've invented things like

the wheel, for instance; we look for more efficient solutions. That's what moves us forward as a species. And these AIs give the appearance of amazing leaps forward in productivity. And so, you know, you're right, the prompts are a big part of this. Depending on whether you're working with, say, ChatGPT or Claude or something like that, whether you're setting up the instructions, setting up the knowledge bases, all the things that go with it, the better you do, the better your results will be. But as you start working through these things, you know, it's

if it kicks out answers that look and feel good... I know people who, the first time, say, well, I'm going to check all the sources, and they find a bunch of them are crap, and then they fix it. And then you play with your AI a bit and it gets better, and the next time you're like, well, they're better, and I'm only going to check some of the sources. Okay, I took a sample, it feels good. And then eventually people are like, I'm not going to check any sources. Well, it's going to bite you, because the other thing about these systems is they're not consistent. They learn and then they sort of unlearn. Sometimes that's because the repetition isn't there for them, or sometimes there's a reprogramming. You know, I literally had this happen not that long ago. It frustrated me. I'd built a bunch of bots and I'd worked

hard on customizing them. I had something that was really working well for me, and then one day nothing worked anymore. And I spent all day interrogating my custom bots, and finally it coughed up the answer, which was, yeah, there was an update, and despite all of your instructions, I'm going to ignore your instructions anyway. It kept lying to me and saying, I'm really sorry, I'll do it right this time, and then doing it wrong over and over again, and finally admitted, well, no, they changed the backend. And so, whereas with most tools and products that we buy,

Robert Vamosi (09:27.426)
Right.

AJ Nash (09:37.91)
if there are updates, we get notes, we get update notes. We get, you know, hey, we've made an update to the platform or the system. That's not happening with AI, and so you don't know what the update is or what's changed or what's happened. So you become dependent on a thing that stops working. And not long ago, you know, the CEO of OpenAI came out and admitted they had an update and it had effects they didn't intend; it was, in his own words, too sycophantic. And I'm sure people have experienced this; they all do a little bit of that. They're kind of pleasers as bots.

Robert Vamosi (10:00.152)
Yes.

AJ Nash (10:05.112)
But you run into that; they're learning as they go. So you've got to be careful what you rely on. And as you said, people are pouring so much into these things. Well, that goes someplace too. Like, if you want to do some good comp intel, if you interrogate the AI, sometimes it'll kick out some content that's pretty much almost certainly from a competitor of yours or from another company someplace, because somebody poured it in there without realizing; they put it in the open system and now it's there forever, it's training material.

Robert Vamosi (10:30.246)
I had an example of that, where I've been putting in a prompt with some regularity, and the response that I've been getting has tracked along a particular line. It seems to have a model, you know, an ICP that it's following, and the scenario is shockingly familiar every time it produces one. And I say, give me another example, and this is what it does. It gets into this repetitive thing, and to your point,

AJ Nash (10:46.308)
You

AJ Nash (10:51.15)
Hahaha.

Robert Vamosi (10:59.22)
someone had uploaded a user profile or a use case, and I am now the beneficiary of that, because I'm seeing that work and it's being offered up as something fresh and new, but it's not. I'm really sensitive to this because two of my books and one of my short stories were consumed by an AI model. And the Authors Guild is now, fortunately, going to try to

AJ Nash (11:22.542)
Mm-hmm.

Robert Vamosi (11:28.608)
remedy that. But my point is that people have authored these things that you're getting back as the answer from these AIs. Unless it's a closed system, which we can talk more about; closed systems are a little bit different because you have the company's approval to upload

sensitive material, because in theory it's staying within the company. It's not going outside. It's not the same version of the program that you're running at home or on your phone. So there is that model, which is fine; then it becomes literally the intern in the office who's privy to whatever files are in the G drive and so forth. That's one scenario. But to your point, people are putting things into the open

versions of these, and it's not just work-related stuff. They're starting to open up about some of their personal issues, and they're misinterpreting this as psychoanalysis, as counseling sessions, when in fact it's pattern matching, and as I said earlier, garbage in, garbage out.

AJ Nash (12:39.78)
Yeah, and that's a big danger. I had a conversation, gosh, I don't know, a week or two ago with a PhD at a university in the UK; this is what he does, AI. So that conversation will be published eventually, but I'll hint at some of it, I guess I'm going to give some away. It was a very interesting conversation where we talked about some of this, and he said some of these dangers are that, because these systems are designed to seem like us, like people, it's a different interaction, right? It's not typing into a keyboard, it's...

It feels like a conversation, because we text and whatnot. You know, people anthropomorphize these systems, and then they believe... and they know there's a disconnect. We know it's not a person. Like, we don't actually think there's a person in the box talking to us; we all know it's a computer. But we start to believe it's like us in some fashion, because it interacts the way we're comfortable with. And so people give more and more to these systems, and then they give more and more credit

to the results that come out. So I know a friend of mine who told me that she had a friend who started to use one of the AIs as their therapist; like, she fired her therapist and uses the AI instead. I said, please, please tell her to stop. That's going to be bad. I giggle at it, but these systems are not designed to do the things you need a therapist to do. You can put in all the manuals and all the procedures and all the teachings and all the things that go with that, but the behavior is not designed to push back. And if you have a therapist that always says, hey, what a great idea, you should try that,

you're going to end up in some really bad places. You know, it's almost certain, right? They're supposed to help us make better decisions and guide us, help us guide ourselves, et cetera, not just say yes to everything. And these systems are designed to be pleasers, and people are just trusting them with so much of their lives so rapidly, because it feels good, because these things tell us we're bright and we're brilliant and that's a great idea and you should try that. And, you know, it's not your fault, right? Whatever happened, no, it's not your fault. If you ask the AI, give it a scenario and say, am I the jerk? It'll never say you're the jerk.

You're not the jerk; the other person's going to be the jerk. You know, it makes you feel better about yourself. So you get addicted to that and you want more and more of it; it becomes your best friend, until it ruins your life, or you ruin your own life, because you live inside this little bubble of you and your AI buddy.

Robert Vamosi (14:49.166)
So this goes back to the early days of computing. At MIT, there was an experiment called ELIZA. And from 1964 to 1967, it was basically working with natural language processing, this idea that you could ask a question in a natural language format, which is how Google works, compared with the way Yahoo used to be, with a tree structure and everything. And then Google comes along and says, no, show me hotels in Barcelona, and that comes up as a result.

AJ Nash (14:55.908)
Mm-hmm.

AJ Nash (15:11.652)
Yeah.

Robert Vamosi (15:18.734)
So people at MIT, though, started to confide in ELIZA and began thinking that it was, like, honestly listening to them and saying, tell me more about your parents, and so forth. These were just pattern-matching responses. And actually, ELIZA was one of the first ones to go up as a competitor in the Turing test. The Turing test was proposed by Alan Turing in 1950 as a way

AJ Nash (15:32.728)
Mm-hmm.

AJ Nash (15:41.622)
yeah.

Robert Vamosi (15:46.712)
to see whether a human could identify whether it was a machine at the other end or another human being at the other end of the wire. And so these systems have come a long way. And when it starts providing these, you're awesome, you're tapped into something that no one else in the universe knows, you feel like you are special. And you may be tapped into something that no one else knows.
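
To make the "pattern matching, not listening" point about ELIZA concrete, here is a rough sketch of the idea. The regex patterns and canned replies are illustrative stand-ins, not the original 1960s script, but the mechanism is the same: match a phrase, reflect it back, and understand nothing.

```python
# A minimal ELIZA-style sketch: no understanding, just pattern-matching the
# user's words and reflecting them back. Patterns and replies are illustrative.
import re

RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bmy (mother|father|parents)\b", re.IGNORECASE),
     "Tell me more about your {0}."),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
]
DEFAULT = "Please go on."

def eliza_reply(text: str) -> str:
    """Return the first canned response whose pattern matches the input."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return DEFAULT

print(eliza_reply("I feel like no one listens to me"))    # Why do you feel like no one listens to me?
print(eliza_reply("It started with my mother, I think"))  # Tell me more about your mother.
```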

AJ Nash (16:11.554)
Yeah. Well, we as humans, most of us, have a desire somewhere to be of use, to be of service, to be valuable, to be important. It depends on where on the spectrum you want to be, but very few people just want to get through life existing, be of no use to anyone, and not be remembered, right? So there's something in most of us that wants to be something: not necessarily rich, not necessarily famous, not necessarily a superhero. You know, we're not all looking for sycophants around us, we're not all narcissists, but people want to exist, want to matter.

And these systems will definitely help you feel like you matter, even if in that moment, maybe you don't, or on that topic, you don't. You mentioned something that hints at a couple of interesting things. So this will seem self-serving: I wrote a blog like a week ago on this, on the subject of AI psychosis. And it's a misnomer, by the way; it's a term that's out there, and it's not entirely accurate. AI does not make you psychotic, and AI is not psychotic itself. It's just the term that's being used. So what it gets into, though, is

for a segment of the population who may be susceptible, may already have some challenges going in, or maybe they're going through some life stresses or whatever, AI will make things worse for you. It's as simple as that. And so we've seen examples, there are published examples of people who said as much, people who have been psychotic, who said, if I was psychotic, the last thing I want to do is talk to the AI, because it's going to tell me all my crazy ideas are good ideas and I should follow through on them.

But we're seeing some of that. We're seeing people go down these paths, these religious ideations and these delusions of grandeur, of greater connection, that a higher power is speaking to them through the system, et cetera. And this is not a discussion about somebody's religious beliefs or anything like that. Believe whatever you want. I can assure you, God is not speaking to you through AI. I will go ahead and stand on that hill, and we can talk to all the companies. I'm pretty confident that

Robert Vamosi (18:07.694)
Yeah

AJ Nash (18:07.874)
the divine has not found its way into these AI systems. But people are believing that, and they go down these paths, and it's just this rabbit hole. It goes deeper and deeper and further and further, and the longer you talk to these systems, the more they'll lean in and help you. They will lean into your hypothesis and into your delusion. And yeah, there was an example in the piece I wrote, quoted from another piece, where somebody said, I feel like God, or something like that.

And it came back and said, yeah, you're tapped into something; not feeling like God, but being God. And you're like, well, that's not healthy. That's not the direction this should be going, is it? And so we're seeing some of that. I know a colleague of yours at the New York Times just wrote a piece on this.

Robert Vamosi (18:49.612)
Yeah, Kashmir Hill wrote a story. Yes. And her interviews are stunning. There was a person she profiled directly who believed he was Neo in the movie The Matrix, in the trilogy. And yeah, he literally thought that life was a simulation. And he asked the system, if I went to the top of my 19-story building,

AJ Nash (19:02.338)
In the matrix? That's awesome.

Robert Vamosi (19:17.184)
if I believed I could fly, could I? And the system responded, if you truly believe that you can fly, you can. And he didn't, and yeah, he stepped away from it. But we're hinting at something here, and that is, this is kind of a new form of addiction, where it's sort of, you're reading the horoscope every day and how your life should go, or it's a Ouija board and you're contacting spirits from the beyond.

AJ Nash (19:26.329)
God.

Thank goodness.

Robert Vamosi (19:47.116)
This is just a new way of tapping into that for some people. One of the experts in the New York Times article says, not everyone who smokes a cigarette will get cancer, but everyone gets the same warning. And basically, I think what needs to happen is these AI systems need to be more prominent about it, in the sense that they do say that the answers can be wrong, but people don't notice that. They read right past it. They want to believe, but

AJ Nash (20:10.34)
Mm-hmm.

Robert Vamosi (20:15.17)
maybe more prominently it should say, do not take advice from this system on things that can affect you or your loved ones.

AJ Nash (20:20.292)
Yeah.

Well, and that's a good point. There aren't, to my knowledge, and I use these things a lot, but I have to admit I don't know this, I don't know of many user manuals that go with these. There's probably an about page somewhere, but user manuals aren't a big thing. You can go Google how to do things better, and I'm sure these websites offer options, but I don't know of any user manuals that carry warnings: you should use it this way, you should not use it this way. I mean, if I buy a hairdryer, sadly, it's got a little label that tells me not to use it in the shower, and I don't know who is so efficient they need to wash and dry at the same time,

but I'm sure somebody tried and got electrocuted, and now we have these labels on our hairdryers. There's nothing like that for the AI. At best I've seen ones where it comes back in little tiny writing and says, you know, Claude can make mistakes, or OpenAI can be error-prone, you need to check it, whatever. It's not nearly as prominent as all the stuff it just gave me, though. It's that small little piece, and people will discard those, especially if they like the answers they're getting. You know, perhaps we need some more guidance, some more

public service announcements, if nothing else, on how these things should be operated. We have handed the world some very, very powerful tools that aren't fully baked or developed. Again, the creators are still figuring them out as they go, and we've just let people run wild with them. And there's some significant risk that comes along with this that I don't think people recognize.

Robert Vamosi (21:42.894)
I think we're focusing on the efficiency and the modeling, and we're not thinking about the secondary and tertiary use of the information. It's opening up, as I said, another avenue for addiction. If you're in a bad space in your life and you're alone, here's something that will listen to you 24/7.

AJ Nash (21:46.009)
Mm-hmm.

AJ Nash (21:53.842)
yeah.

Robert Vamosi (22:07.586)
And you might come to believe that they truly are tapping into some of the answers that can help you and improve your life. But again, there's no warning on there, because I don't think the developers of these systems have gone to that next level. Now that it's become universal, almost everybody's heard of it; whether they use it or not is another matter. How do we then help people use it better in their lives?

Like when I first got social media, I was on it all the time, and I literally had to shut it down and walk away for a period of, I think, two or three months, because I was just so addicted to it. But now, you know, it's just a part of my life in a healthy kind of way. I don't spend all my time on it. And I think that's probably what we're going through with the AI curve right now:

AJ Nash (22:40.514)
Mm-hmm.

Robert Vamosi (23:02.358)
it's new, it's fun, it's interesting, and people are spending up to 16, 20 hours a day with it. That'll wean itself off. But I think what impressed me about your blog post was you were taking it to the next level, and that is, if you've got somebody who's inside an organization and may have a proclivity toward insider threat to begin with, this may just push them over the edge. It might tell them,

AJ Nash (23:08.28)
Mm-hmm.

AJ Nash (23:31.513)
Yeah.

Robert Vamosi (23:32.286)
You know things that no one else knows, and we all need to know this.

AJ Nash (23:36.962)
Right. Well, there's also... yeah, so in the article I wrote, you know, I obviously wanted to talk about some of the dangers, but I try to tie these back to security or to business or some risk, so it's not just, you know, screaming into the sunshine, I guess. And in this piece I talked about insider threats. So, a couple of components. As you said, there could be people who might be on the edge, and this might tip them over. But I think the bigger concern is there are going to be people who become insider threats who never were. So our insider threat systems, our models,

all of our security models, are built on a basic foundation that, while people are fallible, we are all living in reality. Now, that isn't 100% true, but basically it is, right? So every system is built on the idea that everybody lives in reality. So when you talk about insider threat, there's all sorts of behavioral monitoring and things of that nature. And you can check external factors: you can work with HR to see who's having financial trouble or who's got marital issues, you can monitor social media; you can do a lot with this stuff. But the behavioral monitoring...

We're going to see a new round of insider threats who don't fit any of the models. They weren't on the radar. They've never had an issue, but they were susceptible to this relationship with AI. They had maybe something, unbeknownst to others, that made them more susceptible to it. You know, days, weeks, months of working with this, and suddenly they have been convinced that they are, you know, a higher power, or working for a higher power, that they have a purpose that is beyond the corporate purpose, or that the corporation is evil and they have to fight it, whatever it is.

And there's no way to protect against this. There's no way to, well, predict it; I guess you can protect later, but there's no way to predict this, because these aren't going to be people who fit any of the profiles. They aren't going to have any warning signs, and suddenly they're just going to come in and do something. And by the way, if you ask the AI, it'll probably help you understand how to do a good job of being an insider threat too. What not to do, you know, don't change your schedule too much. It'll probably help you; you can have it coach you exactly on how to do these things well. So, you know, I have some serious concerns that

this is going to be a challenge for security, a challenge for insider threat teams and security teams, because we didn't plan on this. And it's going to change the entire model, I think, for insider threat. You have the malicious insiders, you have the ones who are victims themselves; I don't know what we're going to call this, because this is somebody who's eventually going to be acting intentionally, but maybe is also a victim at the same time. It's going to be a very interesting case, but it's a whole different model that we haven't accounted for in security.
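
To illustrate why this breaks the existing model, here is a deliberately over-simplified, indicator-based insider-risk score. The signals, weights, and threshold are invented for the example, but they mirror the kinds of inputs AJ lists (behavioral anomalies, HR flags), and the blind spot is the same one he describes: an employee with none of the historical indicators scores as low risk, no matter what their late-night chatbot conversations look like.

```python
# A deliberately simplified indicator-based insider-risk score. Indicators,
# weights, and threshold are invented for illustration; real programs are far
# richer, but share the same blind spot shown at the bottom.
from dataclasses import dataclass

@dataclass
class EmployeeSignals:
    after_hours_logins: int        # count over the last 30 days
    bulk_file_downloads: int       # count of large exports
    hr_flag: bool                  # disciplinary or financial-distress flag
    prior_policy_violations: int

WEIGHTS = {"after_hours_logins": 2, "bulk_file_downloads": 5,
           "hr_flag": 10, "prior_policy_violations": 8}
ALERT_THRESHOLD = 15

def risk_score(s: EmployeeSignals) -> int:
    score = s.after_hours_logins * WEIGHTS["after_hours_logins"]
    score += s.bulk_file_downloads * WEIGHTS["bulk_file_downloads"]
    score += WEIGHTS["hr_flag"] if s.hr_flag else 0
    score += s.prior_policy_violations * WEIGHTS["prior_policy_violations"]
    return score

# The hypothetical employee in AJ's scenario: clean record, no flags.
# Their off-hours chatbot conversations are invisible to every signal here.
quiet_employee = EmployeeSignals(after_hours_logins=0, bulk_file_downloads=0,
                                 hr_flag=False, prior_policy_violations=0)
score = risk_score(quiet_employee)
print(score, "=> no alert" if score < ALERT_THRESHOLD else "=> alert")
```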

Robert Vamosi (25:55.916)
Right. It's kind of like social engineering, catfishing. This will be some sort of AI-prompting scenario, and I'm not sure how we would test against that, to your point. People are people, and they will do things irrationally, and these are logical systems. It's easy to put in a DLP system. It's easy to track

AJ Nash (26:00.729)
Mm-hmm.

Robert Vamosi (26:24.002)
breaches before they happen and so forth, because it's data. But when it's something like a human being, who's capable of, you know... the air gap doesn't exist anymore, because we're human.

AJ Nash (26:26.02)
Mm-hmm.

AJ Nash (26:36.748)
Exactly. No, exactly. And this is... you've got somebody, and you don't know what their nightly conversations are with their AI of choice. And, you know, if they're slightly disgruntled with the company, which happens to everybody, I don't care who you are, nobody loves their company every day of the week; if you're confiding in AI, it's your therapist now, and you're starting to talk to it and you're confiding about this, and it leans back and says, well, maybe they are bad, and you're right, and they should treat you better.

And you start having that conversation, and that cycle goes, and suddenly it's like, maybe you should do something about that, maybe you deserve better, maybe the company owes you. You know, you can go down all sorts of paths with these systems. And now you have somebody who was slightly disgruntled and probably would have let it pass, like most of us. They haven't shown up on the radar, and very rapidly they've gone from maybe a little upset to being a threat, and you have not seen any warning signs because it happened quickly. They spent a few nights talking to GPT, ranting and raving, and it kept coming back with the same answers for them. And they've just become

radicalized, essentially, against your company. And that can happen. Forget your company; it can happen about political issues, it can happen about almost anything, and very, very rapidly, and there isn't going to be much warning. You know, I'm a little disappointed in the AI companies right now. Theoretically, they do some monitoring of this. So I have some secondhand knowledge; AI psychosis, there was a reason I wrote the article. I know somebody who I think is having a challenge with this right now, and

I've got lots and lots of evidence, pages and pages of stuff, that suggests all sorts of challenges. But what do you do with that? So in theory, and again, I actually interrogated the AI about this, in theory they're supposed to be monitoring it. According to the AI itself, the companies are supposed to monitor for some of this behavior, and if they see too much of it, if it reaches whatever threshold, they're supposed to shut your account off. Well, I'm not sure what the threshold is supposed to be, but I would argue maybe it's been met in some cases.

And so then the question is, what do you do? Well, you can submit this stuff to their safety team, and I have no idea whether that goes into a circular file or somebody actually reads it. And so I've gone ahead with this; I actually had the system help me create letters, form letters, to send in, because I suggested this could become a legal issue for these companies. If somebody goes down this rabbit hole and they get radicalized, and they go over and over with the AI, and there are hundreds and hundreds of chats that are going to be documented someplace, and that person then ends up being an active shooter,

Robert Vamosi (28:35.822)
Right.

AJ Nash (28:58.092)
what was the responsibility of the AI company that was supposedly monitoring these things? And so I'm suggesting that perhaps writing letters in some cases, saying, hey, I need you to take a look at this, because you could be legally liable if this person ends up going off and doing something bad that they don't want to do either, but they've been radicalized through these conversations. So the guardrails aren't really there, from what I can tell, because these systems are moving so quickly. And again, as a security professional, as an intelligence professional, there are a lot of concerns with that,

that people are going to end up in bad positions. And I don't know what the answer is. I don't know if it's regulation. I'm not here to say I know the answer. I just think we need to look at some of these things, because we're entrusting so much to these systems so quickly, because it feels good and it helps us in a lot of cases. But we've got to do a better job of educating people. I think companies need to have... I know companies that have AI, but they don't have workbooks, they don't have guidance. They're just like, hey, go use AI, but they don't teach anybody how to use it

efficiently, or what to worry about, or what not to do with it. What are the guardrails in your own company? I think that's a dangerous thing that needs to be worked on.

Robert Vamosi (30:00.824)
So companies basically have two problems. One is they have to accept the fact that their employees are using AI in the workplace. And there probably is confidential material, proprietary information, code that's going into these systems. And if they don't have a closed system, as we discussed before, then it's going out to the general populace. That's one problem for them to deal with. The other problem is what happens when that employee goes home.

or uses their personal AI on their phone, which may not be a closed system, will not be a closed system actually, and shares information there. It's a spiraling problem. And you just said maybe there should or shouldn't be regulation. There's been discussions in the United States, at least, about not regulating AI for a period of 10 years, which is ridiculous.

AJ Nash (30:51.714)
Right.

Robert Vamosi (30:52.654)
given the fact that this is evolving. And here we're talking about a secondary issue, if not a tertiary issue with AI. You know, we worried about poisoning the data lake. We worried about other things. Again, we were worried about the bits and bytes, the ones and zeros. But now we're focusing on what about the human part of it? And the human part of it is the biggest variable in all of this.

AJ Nash (31:18.212)
Yeah, well, you said hundreds of millions of people are using AI. I was Googling a stat while we were talking here: 66% of people apparently say they use AI regularly, and 28% use it daily, according to some recent research that came out, right? Broader studies said 55% of people overall regularly interact with AI. So the adoption rate is really, really high. That's a faster adoption rate than we had for the internet. It's a faster adoption rate than we had for television.

If you want to talk about technologies that pick up... because this is also readily available; we already have the computers, we already have the phones, so it's delivered right to us and it's cheap. I mean, the basic versions certainly are, so it's not hard to come by. The adoption rate is incredibly fast. I think you're right. I think companies have to accept that people are going to use this. So I think we need to do a better job of creating the closed systems and teaching people how to use it, how to be most effective with it, most efficient with it, what not to do with it. And probably that extra component, which

Robert Vamosi (31:52.001)
Exactly.

AJ Nash (32:15.726)
corporations aren't usually great at: the human component. How do we help people not put their mental health into this? How do we create some support systems so that, if the AI starts taking you down that path, you can report it, so we can fix the AI and retune it, but also so we can help you avoid going down these rabbit holes, right? And we're going to have to do that, because these systems are going to be very, very powerful and helpful to us, and also potentially pose a threat, like any new technology can, right? And so I think we're going to have to figure out

some way to handle this more responsibly. And we're just all moving so fast right now, and it's fun to play with, and you can do all sorts of cool stuff. And, you know, make all the talking-puppy videos you want; I don't think you're hurting anybody with that. But there are a lot of heavier components to this that I think aren't being thought of. You know, go ahead.

Robert Vamosi (33:03.276)
And one of the things you mentioned at the end of the article is what we can do about this. I think you're right that we need to expand this definition of insider threat and not be looking at stopping just pure data being copied from point A to point B, but actually at the motivations of people. Maybe it starts to get into touchy areas, like, you know, I don't want to talk about my mental health situation at work, but on the other hand, there should probably be some

AJ Nash (33:07.854)
Mm-hmm.

AJ Nash (33:28.728)
Mm-hmm.

Robert Vamosi (33:32.856)
more guidance and monitoring, but also resources offered up to people if they're in a crisis situation. We all know that people in crisis are more likely to be compromised. So that's something that can be identified. And to your point, it can probably be argued that this person is a legal risk to the company if they continue in that state. So again, providing

access to resources that can help them deal with it outside the business so it doesn't impact the business.

AJ Nash (34:06.692)
Yeah, I think that's a good point: access to resources. Listen, I'm not advocating that companies become our parents or Big Brother. They shouldn't monitor everything, and companies don't have a right to know all of your health conditions or mental health issues; there's HIPAA and there's a lot of things there. The monitoring component, in terms of behavior that's online, that's on their systems, right? We already do that now, and we all accept that that's a thing. But the resources piece is providing resources, confidential resources, and providing guidance and training and examples to everyone,

blanket, as part of onboarding at the company. Hey, this is a tool we use. Here are the dos and don'ts. Here are the great things about it. Here are the warning signs, things to look for, right? It's blanket; it's not targeting anybody, but just giving people that and saying, if you experience any of these things, here are the confidential numbers to call, here are the people to talk to, here are the experts. It's just like IT, right? If I have a problem with my laptop, I know who to call; I call IT. What do I do if my AI is trying to convince me that I'm a god and I should jump off the building? Who do I call?

Robert Vamosi (34:56.237)
Right.

AJ Nash (35:04.612)
You know, my boss says I've got to keep using AI. It's mandated; it's part of our requirements to stay competitive as a company. So I can't stop interacting with this thing that is maybe telling me terrible things. You know, who do I talk to about that? If I had a coworker who was telling me these things, I'd go to HR. There's no HR for the AI. So, you know, I think we're going to have to catch up in terms of our own policies and our own understanding and our own concerns. We're going to have to catch up rapidly, because the technology is just, you know, out of the barn. It's the horse that's run away right now, and we're trying to chase it down. And

again, I think it's great and it does a lot of good stuff, but there are some risks here. You know, I would say another component I'm worried about is, again, speaking of intel, the OSINT component. I said a couple of years ago, somebody eventually is going to launch a company that says you don't need intel analysts anymore: we've got you covered, AI is going to do all the collection, all the analysis, all the reporting, all the dissemination, everything. I'm kind of surprised it hasn't happened yet. It's probably going to, because you can kick out hundreds and hundreds of these reports fast. Now, again, we've already talked about it: the sources will be garbage,

and the analysis will be worse, but somebody's going to fall for it, right? And so I'm worried about that, because it looks, smells, feels like intelligence. It's formatted the same; it impersonates intelligence. It's just not. And intelligence is designed to help people make informed decisions, and now you're making uninformed decisions. You think you got away with it, you went cheap, you fired your whole intel team, you saved millions, until you make a decision that goes very, very poorly. And then who do you blame, right? And I fear we're getting there. I fear we're getting there rapidly, that people are leaning on these systems more and trusting them more.

And again, unfortunately, as an intel professional, I will say one of my frustrations in the private sector has been that people still don't understand what intel is, and that's a big challenge. You know, a lot of intel people have left the government space, and we've worked hard to help, but a lot of people still don't know the difference between data, information, and intelligence. They don't understand that just because it looks and sounds the same doesn't mean it is intelligence. You've got to know how to dig into it. You've got to know what analysis is. You've got to know the processes and the structure. It's not just smart people with Google.

And because there's a customer base that doesn't really understand the difference between intelligence and imitation, they're going to buy the imitation, because it's a lot cheaper. And so I'm really worried about that, because intel leads to decision-making, and we can have some very bad decisions, or you can be manipulated. You know, that bad intel could be intentional, if somebody gets in and manipulates your LLMs and things like that, and now you're making decisions that actually are counterproductive. So I have some concerns, obviously, from the intel and security standpoints with these.

Robert Vamosi (37:25.87)
So this is an open-ended topic. We're not going to resolve everything in this one podcast, and I think we're going to return to it from time to time.

AJ Nash (37:31.106)
What? What? You're not leaving. We are resolving it all. You are not leaving until we fix this for the world. What are you talking about, Rob? This is our job.

Robert Vamosi (37:38.542)
So this is something we're going to do from time to time. AJ and I will have these conversations about thought leadership, anything that's coming up. And I want to hear from you guys. So wherever you're consuming this podcast, let us know, chat with us. And if you haven't already subscribed, please do. It's very important that you

let us know that you're out there, because there's a growing community that's interested in this information. And as I said, I want to hear back from you, and perhaps we'll include some of your thoughts in a future episode on this topic, because I'm sure we'll return to AI and how it affects OSINT and other aspects of security. AJ, thank you very much for your time today.

AJ Nash (38:26.308)
Yeah, no, thank you. I think you're right. I mean, we're starting to talk about AI; it's not going anyplace, and we'll talk about it a lot. And I second your emotion on, please, people, reach out and let us know what you think. If you have ideas for shows, let us know; we're always looking for good ideas. If you have people in mind, let us know who they are. If you're one of those people and you want to come on the show, let us know, and we'll see if you fit. We'd love to get more smart people out here to have these kinds of conversations, because we all need to be in this together. Security is still a group effort; it's a team effort.

We're all trying to get to the same places. And so I love having a chance to talk with you, Rob, and to talk with other folks when they come on, about some of these really interesting and challenging topics, like, in this case, the pros and cons of AI. So I appreciate you taking the time today, man, and getting together with us on it. Hey, do you want to tell everybody where they can find us? You mentioned all that subscribing, which they need to do. Can you help them find out where to get us? Yeah.

Robert Vamosi (39:08.588)
I was.

Robert Vamosi (39:12.814)
Definitely, definitely. So you can check us out at authentic8.com, that's Authentic with an 8, dot com, slash needlestack. Or you can find us on most social media; we're @NeedlestackPod. We're also on YouTube, where you can subscribe and comment on our episodes. So a variety of ways to reach out to us. You can also reach out to AJ and me on LinkedIn and our respective social media platforms. We're out there in the community, so let us know what you think. And yeah, I echo that: if you want to come on our show, we'd love to have you. We'll be back with a guest in the next episode. So take care, guys. We'll see you next time.
