July 21, 2020
TEC37 E08: Securing AI Models
Businesses moving forward with an AI strategy need to ensure the security of those AI models is part of that strategy. In this briefing, we will highlight issues in AI model security for the organization and suggest approaches for identifying and closing gaps in defending AI models. We will also touch on the importance of building a robust, mature, and repeatable program for assessing AI model vulnerabilities.
Please view the transcript below:
Robb Boyd:
The use of artificial intelligence, AI, has exploded in the business world. And I'm not just talking about the number of times a marketing team makes reference to it. It's everywhere. It's changing the way we work, live, play, and learn.
Robb Boyd:
While many of us are still ramping up on how to best leverage artificial intelligence, we all must recognize that its increased use also brings new risks that must be addressed. Security researchers are seeing more and more AI model attacks, ranging from vulnerabilities that never seem to go out of style, like poor access control or exploitable software bugs, through risks inherent to poor logging or no visibility at all. While these can and should be addressed, AI model security also includes a whole new species of cyber attacks, many of which derive from the nature of the mathematics involved: attacks designed to fool the models, to poison the input data, or to steal personal data used in the training.
Robb Boyd:
WWT has been working with partners like Intel to develop practices that every organization working with AI should understand. And there are quite a few interesting new angles here, things I had not previously considered. Welcome to the TEC37 podcast, your source for technology, education and collaboration from World Wide Technology. Today's topic, AI model security, is sponsored by Intel. My name is Robb Boyd, please enjoy.
Robb Boyd:
All right, as always I'm joined by a bunch of smart people. This topic has fascinated me, and I've been very much looking forward to getting into it. So let's go with some introductions. You all have very interesting backgrounds, so I'm just going to do it this way. Melvin, at least as the picture appears, you are right underneath my square. But Melvin, let's start with you as our guest. You're from Intel, but you're a data scientist with Intel. I wonder if we can get a little bit about your background, and also what you do specifically with Intel that's of interest for us today.
Melvin Greer:
Yeah, sure. No problem, Robb. I'm Melvin Greer, I'm the chief data scientist for Americas at Intel. And my primary job is to ensure, number one, people know Intel is 100% focused on artificial intelligence and data science, and to bring the full portfolio of our capabilities in AI to the development of mission capable solutions. I teach at two universities, at Johns Hopkins and at Southern Methodist University in Dallas. And I am also a special government employee at the FBI. But for this group, I'm really excited about participating with World Wide Technology. Jamie and Gene are really great friends, and I'm excited to be a guest on this panel with them.
Robb Boyd:
Yeah. As we were building up to this, it was obvious that you guys have all worked together in many different ways when it comes to this practice. And this is an area that I think is not getting enough attention, so I've been looking forward to this. Thank you very much, Melvin. Jamie, let's go to you. I want to know a little bit about your background. What do you do for World Wide Technology?
Jamie Milne:
Yeah, hello. My name is Jamie Milne and I'm a senior engagement manager with our business analytics advisors group. What that means in practice is I sit somewhere between our customers, with their mission and their use case, what they're trying to do and what changes they're trying to propagate, and our technical folks, our consultants, data scientists, engineers, and architects, who can really bring innovation and AI to the fore and leverage that to solve those use cases. And then I guess somewhere else is Gene, trying to pull me in other directions to make sure we're secure and everything is as low risk as possible.
Robb Boyd:
Oh perfect. Okay. So a couple of data scientists, and then we have Gene pulling us in different directions. And Gene, I learned that, well, you're good at math. But I don't think that's something anyone here on this show is afraid of except for myself. But Gene, what do you do for World Wide Technology?
Gene Geddes:
Yeah, Robb, thanks. I'm a chief scientist and principal architect on our global security team, which is part of our consulting services. And really, my main job is to support everyone on the team. As for my background, I spent 20 years at NSA and 10 years with a research institute supporting NSA, and my background is mostly mathematics and computer science. So I get handed all the weird cutting edge things like AI model security, blockchain, all that good stuff. Yeah. And so I'm really happy to be here. And one of the great things about my job is I get to work with people like Mel and Jamie, it's just a delight.
Robb Boyd:
Yeah, this is interesting. So I've learned in our lead up to today's conversation that Intel does so much more than I had originally been aware of, and I was excited to have them as a sponsor for this. And the fact that we get to work with Melvin, or you guys have already been working with Melvin, but then I get a chance to meet Melvin. And specifically because this came up in his background, but for all of you guys, there's this intersection where our focus is today around security and artificial intelligence. For me, there's an immediate reaction when we talk about artificial intelligence, because it's such an abused term, as is machine learning. And people, especially marketers in our industry, are very good at just throwing that terminology in anywhere. We're going to assume that there is a general understanding of things that happen within that area, because our focus is not on teaching anybody what AI is or what machine learning is, or anything like that.
Robb Boyd:
But it's really about this intersection of anyone that is approaching this, where does security play a part, and where does it need to play more of a part in terms of how people are doing things? And I think Gene, you've written a couple of articles on this for World Wide Technology. One of our common calls to action on TEC37 is wwt.com, where you can find these articles as well as more information on everybody here, and workshops that we'll talk about here in a little bit. But Gene, based on what you've written around this intersection of security in AI, I wonder if you could set us up to talk a little bit about the threats that we need to be more aware of. How would you begin to characterize what's important here?
Gene Geddes:
Sure. So AI models are becoming more and more important in business. And what we see is that our customers are starting to deploy them, and they're actually placing them in harm's way a lot of the time. Very often the AI model, which really sits on Python software and some very clever libraries underneath it, and which has been trained for weeks or months by a team of data scientists, all of this good stuff, good IP and really useful technology, is placed on a web server and opened up to the outside world. Well, the first thing that happens, of course, is you have vandals, criminals, and nation states coming after you. And so what we're doing is we're working with customers on two levels. One is to look at the big picture in an enterprise: how are they producing these models, and where are the security gaps along the way?
Gene Geddes:
We're also looking very carefully at what until now has been really an academic exercise, which is finding very AI specific, very heavily mathematical attacks on [inaudible 00:07:05] models. And the problem right now is this is all off in the academic realm, but the attackers can deploy it. There's software out there. It's not that hard to deploy once the clever people have figured out how to do the attack; the hacker can come in and deploy it. So it's critical for the customer to understand what the threat is, both the overall threat and the specific threats against the AI models. And because of the difficulty of understanding it, it's very easy to just throw your hands up in the air and say, well, it's probably okay. And so what we're trying to do, and this is why working with experts like Mel and Jamie is so critical to the security team, is go out and try to explain to customers where the threat is, and more importantly, how they can counteract it.
Melvin Greer:
Yeah, Gene, I think you're right. This is important because typically the security framework or security architecture extends to the dataset where the actual training data is, or at least to the insights that have been generated by the algorithmic models. But often overlooked is the value associated with the algorithmic model itself, and being able to secure that model is important. Not only because you want to prevent malicious attacks, but you really want to prevent unintended consequences as well. You really want people to be able to put trust, veracity, and faith in the fact that the model you or someone else trained has been sent to you in an immutable fashion and has not been tampered with in a way that would compromise the ability to put faith and trust into the insights derived from it.
Robb Boyd:
Let me ask you this, and I'm going to go to Melvin since you were just speaking about this. Just to make sure we understand what you're speaking of: on one hand you're saying there's an overall goal of making sure that there's trust in what the model is, say, telling us. But there are specific points. I think Gene's laid this out in some of these articles, and I think we've talked about this notion of how we get to trust. But there are specific areas that, at least for me, it helped to subdivide the different places where the potential for mishaps could occur, whether intentionally or not. Melvin, could you walk us through the different places, from the data gathering all the way through to what's happening in those models, that are ripe for potential mishaps?
Melvin Greer:
Sure. So if we pull the curtain back on the whole process of data science and analytics, what you will see is that there's this training and inference capability taking place. And so the very first area where we're seeing opportunities for some bias mishap or unintended consequence is the selection of the dataset itself. But very quickly we move to this idea of, what am I going to do to acquire the model that is the best fit for the mission or capability that I'm targeting? And when that happens, typically data scientists are not building these models from scratch. They're typically going to a repository, or going to a library where the models are already prebuilt. And they're looking through them trying to find ones that might be close, in terms of function or capability, to the model they're interested in ultimately arriving at. They'll take the one that's closest, then they'll figure out how to do the re-weighting and massage it, maybe train it with new data, so that they can get closer to that model.
Melvin Greer:
And it's this reuse of existing models where we think the most significant impact will be if we don't have a security framework that extends to algorithmic models. If we do, then what it means is that we have the provenance, the metadata, an explanation of who created the model, and under what conditions the model works well or does not work well. And that will be carried forth to whoever decides to reuse that model.
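For readers who want to see what this reuse pattern looks like in practice, here is a minimal, hypothetical sketch in PyTorch of taking a prebuilt model from a public repository and re-weighting it for a new task, as Melvin describes. The model architecture, the two-class head, and the commented-out training loop are illustrative assumptions, not details from the episode.

```python
# Illustrative sketch only: the prebuilt model, the two-class head, and the
# commented-out training loop are assumptions, not part of the discussion.
import torch
import torch.nn as nn
from torchvision import models

# 1. Acquire a prebuilt model from a public repository (here, an
#    ImageNet-pretrained ResNet-18 from torchvision).
model = models.resnet18(weights="DEFAULT")

# 2. Re-weight it for the new mission: swap the task-specific head and
#    fine-tune only that layer, keeping the pretrained backbone frozen.
model.fc = nn.Linear(model.fc.in_features, 2)   # e.g. a two-class problem
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("fc")

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# 3. Train on the organization's own (production-like) data. train_loader is
#    a placeholder for that data, which is exactly the exposure Jamie raises.
# for inputs, labels in train_loader:
#     optimizer.zero_grad()
#     loss_fn(model(inputs), labels).backward()
#     optimizer.step()
```

Note that nothing in this flow records where the original weights came from or how they were produced, which is the provenance gap the panel goes on to discuss.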
Robb Boyd:
What is the [crosstalk 00:11:15]-
Jamie Milne:
Sorry.
Robb Boyd:
No, jump in Jamie, because I was going to direct this to you anyway.
Jamie Milne:
I mean, I totally agree. Having that reuse of models is a key part. But even at the most fundamental level, if you go right into training and inference, as you started to mention there, Melvin. One of the things we talk about a lot with our customers, and have done for the last five to ten years, is that this is different from classical analytics. Right? AI is not just analytics but bigger, it's fundamentally different. Right? The way in which we think about how to train a model necessitates that you have real data. Right? You can't just use dummy data. You can't just work it out from first principles, because it's the machine that's learning. Whether it's deep learning or classical machine learning, you need to start using production or production-like data in order to make those inferences and start to train those models.
Jamie Milne:
So that automatically opens you up to breaches or potential considerations. And then, almost by definition, you're doing that with stale data or something that's slightly removed from production. Or, as you were saying, with the prebuilt models, a lot of those rely on data where you don't necessarily know where it came from or what it was. Right? You're just trusting that this facial recognition software, or pattern recognition, or natural language processing algorithm, the base algorithms that you start to develop from, were using secure or unbiased sources. And that's where it comes in: in some cases, especially with deep learning models, you can't figure out the source. Right? It's so deeply embedded in the model that you can't do a traditional root cause analysis, or go back and look at where it came from and pinpoint the exact point, because that's been lost through the [crosstalk 00:13:18] development of the model.
Robb Boyd:
Let's stay on this for just a second in terms of how we actually deal with that situation. First, there's obviously the awareness that's coming up in this conversation. But when I think of reusable models, I think I'm using a model because it's supposedly faster for me to begin working in a certain direction. And as you said, it's important to know, and I'm going to use this word because I'm just excited to have learned how to use a new word in a sentence, the provenance, which Melvin taught me, the background and history of something, which may or may not be knowable. How do you deal with that situation? So one, there's awareness, you need to be aware of it. But assuming you even could, what's the right balance between needing to go do that and recognizing that it might be a source of data you can't trust as much later down the pipeline?
Melvin Greer:
Well, certainly there's an issue there, Robb. But I think what we're really trying to achieve with securing algorithmic models is that once a model is built and it satisfies a certain need, it's not changed in some way that can't be detected. So that if I know that this model actually does work, and I have the ability to put some trust in its ability to create insights, I don't want that model changed over time without my knowledge. Well, certainly it can be changed, but I want to at least be aware of the change, and I want to have the transparency associated with it. So this idea of securing algorithmic models is really a desire on the part of application developers and data scientists to add trust and immutability into the data science process.
Robb Boyd:
Security should be an ongoing priority, not a one-time event or project. With advancing threats against digital security, safety, and privacy, World Wide Technology and Intel are helping to solve the most challenging IT problems in artificial intelligence and machine learning, leveraging the power of the latest technologies like Intel Xeon Scalable processors, networking, FPGAs, Intel Optane SSDs, and persistent memory. Intel protects every layer of the compute stack with hardware-enabled security capabilities built directly into their solutions. Through workloads, privacy policy, and other factors, WWT and Intel are working hard to enable the security of business critical data being created, moved, and stored around the world. WWT and Intel: we're helping turn ideas into outcomes.
Robb Boyd:
Well, there's something that Jamie brought up when we were meeting before the recording today that I thought was interesting here. And it's this notion of understanding how we can recognize that AI is a part of... What's the right way to word this? There's a process-
Jamie Milne:
I think we lost-
Robb Boyd:
Yeah, I think he'll be back in just a second. Let's see what happens.
Jamie Milne:
Okay, sorry.
Robb Boyd:
No, that's okay. Let's say it was this notion, in my mind, that AI is not something that should be distinct from processes and procedures, which entail security as well as a lot of other things that lend themselves to security foundationally, just like you would with your data, your infrastructure, your operations plan. When it comes to AI ops and its importance, and where security is placed into it, the documentation, the ways in which this stuff is supported. I think it was you, Jamie, who said that there's a need to do more of this because AI is now more a part of internal business processes, and it needs to be recognized and treated just the same as all the other business-oriented processes. I wonder if you could help me word that better?
Jamie Milne:
Yeah, absolutely. It's recognizing that, as AI has become more and more embedded, it's become more of a standard practice. Right? I think in the past it was more project based, specific models or specific groups that were leveraging it in small components that could be compartmentalized and dealt with individually. Now it's really spread across, it's grown up, it's matured to the level where this is standard practice that needs to go through all the security levels and all the other measurements of efficiency and effectiveness that any standard practice needs to go through. A key thing there, you mentioned ML ops: we want to be able to maintain the models, secure the models, make sure our models are as low risk and as high impact as possible. That's something the data scientists want, and it can be done at the same time as the security folks' work.
Jamie Milne:
So it's recognizing that security is a necessary component of putting something together; it shouldn't be stifling innovation or stifling development. It's something that absolutely needs to be done. And we can do it in a way that actually helps the engineers and helps the data scientists in their model building, by understanding what kind of results, eventualities, or predictions my model will make. That's extremely helpful for a data scientist just as much as it is for a security person. So we can embed those procedures using an ML ops framework, and there are multiple tools that have started to become available, open source and commercial, that support that kind of ML ops process.
Jamie Milne:
But it's about building that process around it. We talked a little bit before about the different stages of the data science life cycle, everywhere from training to inference development, and then maintenance and operations. Throughout all of those pieces, it's not just creating that model, that algorithmic development; there's a whole ton of things that come around it in order to make it production ready. If you can streamline and automate those in the best way possible, then you can really start to build that pipeline and start developing at scale and at speed.
Robb Boyd:
Gotcha.
Melvin Greer:
Yeah. I agree with you 100% there, Jamie. I know that in many ways application developers think of the security folks as the department of no. No, you can't share this. No, you can't do this. No, you can't invoke that kind of logic. But in reality, if we can embed security into the algorithmic model development process, this goes a long way. And then you mentioned some of the traditional tools associated with the security framework that we can layer on top of this application development and model development process. I will say that we are currently experimenting with the use of zero trust tools, like a distributed ledger and blockchain, that help ensure that once a model is set, we have a hash and we have an ability to determine, via a consensus model, whether or not that algorithmic model is the same or has been tampered with. And so there are many, many different approaches that we can use to ensure that security is an integrated part of the algorithmic model development process and not bolted on later.
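A minimal sketch of the hash-based integrity idea Melvin mentions might look like the following. This is not WWT's or Intel's implementation; the artifact file name and the registry dictionary standing in for a distributed ledger are illustrative assumptions.

```python
# A minimal sketch, assuming a serialized model artifact on disk: fingerprint
# the model when it is "set", record the digest somewhere tamper-evident, and
# verify it before the model is loaded. File name and registry are assumptions.
import hashlib
from pathlib import Path

def fingerprint(artifact: Path) -> str:
    """Return a SHA-256 digest of a serialized model artifact."""
    digest = hashlib.sha256()
    with artifact.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# At release time (usage sketch): record the digest in a ledger entry or
# signed registry record, e.g.
# registry = {"churn_model_v3.pt": fingerprint(Path("churn_model_v3.pt"))}

def verify(artifact: Path, registry: dict[str, str]) -> None:
    """Refuse to accept a model whose digest no longer matches the record."""
    expected = registry.get(artifact.name)
    if expected is None or fingerprint(artifact) != expected:
        raise RuntimeError(f"{artifact.name} failed its integrity check; "
                           "refusing to load a possibly tampered model")
```

A distributed ledger or consensus mechanism, as Melvin describes, would replace the plain dictionary so that no single party can silently rewrite the recorded digest.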
Robb Boyd:
Yeah. So this notion of needing to embed security deeper in the process feels like an age-old thing in many parts of the business. From software development to everything else, it's not new. And as you said, people usually don't want to deal with security because it just feels like the department of no. Or even conceptually, it's just seen as a place that slows things down. My main focus is on figuring out if this works first, I'm not concerned about security; and if this goes into production, whatever you happen to be working on, then I'll worry about security. But by that point, of course, we know we all move quickly, and it's much, much harder, if not impossible, to properly embed security if it wasn't integrated there from the very beginning.
Melvin Greer:
And I must state, Robb, that as a security practitioner, I do not believe that security is the department of no.
Robb Boyd:
Yeah, yeah, yeah.
Melvin Greer:
These are the professionals that help keep our enterprise data, and the valuable AI components and reusable code bases that we develop, secure and safe. They are the ones who make it possible for us to do the great things that we do with this data. So they absolutely are not the department of no. We just need to reframe that-
Robb Boyd:
It's a perception. Yeah.
Melvin Greer:
... with some practitioners.
Gene Geddes:
So Mel, I have to tell you, I love saying no. [crosstalk 00:22:15] Having that power, it's a great feeling. But I agree with you 100%. I mean, our job really is to make business better, right? It's to improve operations and guarantee operations. So we can't just say no. And I just want to share, I had a really good experience last year: I spent six months working with a customer, and I was more or less embedded in the data science team.
Gene Geddes:
And what was so great about that was getting security in there, right? The Holy Grail with application security is to have people who are expert application developers and expert security people, and it's almost impossible to really achieve. But I was lucky enough to be in with the data science team, working with them on some of their models, helping to bring that security to them and getting it in the mix. And I think in the short term, anyway, that's what we have to do. We have to find people who are comfortable in both worlds and bring them together.
Robb Boyd:
Well, let me ask you, because I would like to spend the remainder of our time, which is roughly 12 minutes or so if my timers are correct. Gene, you bring up an interesting point: you've been embedded with a customer along with a team of people working on some of these very things. And if I understand correctly, you've also been part of building out the practice at World Wide Technology, not only to raise awareness of this type of thing. It's one thing to raise awareness; it's quite another thing to then say, okay, here are the prescriptive things you can do in your own organization to recognize this and then work through firming it up, so that we can reach the goal of having a higher level of trust in our models and in the data we've got.
Robb Boyd:
Because I would say everybody wants that and we need to be able to trust it. Let me ask you on this one, what is being done at World Wide Technology to be able to make this approachable? What has been developed, and how does someone get engaged? Where do these kinds of things start?
Gene Geddes:
Yeah, no, that's an excellent question. So of course there's education. That's what we've been talking about so far: being able to educate users, educate organizations on what they have to do and what the threat is. And that's always the most important step in security, explaining the threat. What we're doing specifically here, there are a couple of prongs. The first prong is ideally to go in and work closely with the customer for, not too long, maybe a couple of months, and work with our data scientists. And pick one or two models that are particularly vulnerable, and just get down and do the analysis. Look at the data that they were built on. Look at the data that they take in from the outside world.
Gene Geddes:
Because that's where the risk is, that's where the attack surface is. And just look at possible vulnerabilities, and work with the team on how to close those vulnerabilities. So that raises awareness and brings education on a much deeper level, and we proceed from there. Then the organization needs to decide: they can either go with the continuing consultant approach, where for every model they bring security experts in to look at the model and get it up to snuff, which is a very common practice in application development. The other model is for them to build their own security evaluation program. And really, it's almost like a red team thing, not really a pen test. And that comes with training a couple of people. The easiest approach might be taking a couple of data scientists and having them get up to speed on the security piece of it, and then incorporating a lot of open source tools and a lot of custom code. Usually it's Python, nothing too complicated, but just so you can really hammer your models and test them from every possible angle.
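As one hedged example of the kind of Python test harness Gene alludes to, a basic evasion probe can be written in a few lines. The deployed model and sample batch below are assumptions, and a real assessment would draw on open source toolkits and a much wider battery of attacks.

```python
# A hedged sketch of one "hammer the model" probe: a fast-gradient-sign (FGSM)
# evasion test written in plain PyTorch. Model and sample batch are assumptions.
import torch
import torch.nn as nn

def fgsm_probe(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
               epsilon: float = 0.03) -> float:
    """Return the model's accuracy on FGSM-perturbed copies of the inputs."""
    model.eval()
    x = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x), y)
    loss.backward()
    # Nudge each input in the direction that most increases the loss.
    x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()
    with torch.no_grad():
        predictions = model(x_adv).argmax(dim=1)
    return (predictions == y).float().mean().item()

# Usage sketch: a large drop from clean accuracy flags an exploitable evasion gap.
# adversarial_accuracy = fgsm_probe(deployed_model, sample_images, sample_labels)
```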
Robb Boyd:
Let me ask you this, and all of you can weigh in on this, but I'll start with you, Gene, since you're talking there. How comfortable is your normal security practitioner? Because it does feel like, I used to spend a lot of time in the security space, and it always felt like, boy, it's the gift that keeps on giving. There's always something going wrong, no matter what's happened. And we're always asking security people to increase their knowledge in different areas to be able to do things. What is the reality of the learning curve to add enough knowledge, combined with the cybersecurity knowledge, so that someone can look at the models, understand the programming language, and begin to understand the threats and vulnerabilities that are relevant to what they're looking at? You're making it sound like it's not a huge stretch. And it sounds like something we all should be aware of, because this is where the world has gone.
Gene Geddes:
Right, right. Awareness and a basic understanding, I think, is not a huge stretch for pretty much everyone in security. It's like cloud security: over the past five years, all of us have had to learn about it on some level. To actually be able to go in and analyze and make the model more secure, though, requires some pretty deep experience, I think. I forget the scientist who came up with this, but there's the idea that you need to put at least 10,000 hours into something to be an expert. It's just work. It's a lot of work. And to get that comfortable with the application development, the programming, and in this case, of course, the data science, takes a lot of work. So to take a security person and train them up, I think, just means you spend a year working with a data science team. Then you can do data science.
Robb Boyd:
Well I imagine [crosstalk 00:28:09] it's a two-way street. Yeah, Jamie. Go ahead.
Jamie Milne:
There are many definitions of a data scientist. The one that resonates with me the most: a data scientist is someone who knows more statistics than an application developer, and knows more about code than a statistician. Right? So almost by definition, the data scientist fits between these traditional disciplines. I think you need to add security into that mix, right? So the data scientists need to have an awareness. They're not going to be better at security than a security expert.
Jamie Milne:
In the same way our data science team reaches out to our application development team all the time to get expertise on, okay, I've got this problem, how do I solve it? Same with mathematics back in academia. It's the same thing: I think you need to have that hybrid where you do want some skills and knowledge within the team itself, but you're always going to have to reach out to experts to understand, what are the new things? What are the new tools, processes, and ways to think about doing that in a more innovative and efficient manner? And that really comes from the expertise of a cybersecurity group.
Melvin Greer:
[crosstalk 00:29:18] Actually, one of the things that we've seen is-
Robb Boyd:
I knew you were going to jump in. Go ahead, Melvin.
Melvin Greer:
[crosstalk 00:29:21] ...leadership and mindshare. Some of it is in policy and some of it is in technology. So on the policy side, certainly we are aware of what motivates application developers and data scientists. Their motivation is to build composable capabilities that are then deployed as applications. And it's this innate desire to deploy things that we want to take advantage of on the policy side. So in order to deploy a model into the repository, we require a certain level of detail, to make sure the model can be traced back to them, along with an understanding of how it works.
Melvin Greer:
So all of the things that we do in classification, labeling, and tagging of data sets, we're now going to encourage data scientists and developers to adopt before we allow them to deploy their models into the repository. So that's kind of a forcing function. On the technical side, I think we're seeing two really important areas where things get interesting. One is in the co-training activity associated with model development. And this is really about not using one model to create a mechanism for insight development, but actually having more than one model participate in a co-training exercise, so that if a compromise is made in one model, it won't have the rampant and deep effect it would have without a number of co-trained models collaborating.
Melvin Greer:
So we're muting the attack surface by encouraging more models to participate in the training exercise. The other technical capability that we're really seeing investigated is this idea of blending. This is where the models that are created each have an output, and instead of taking one output as a singular source of truth or probability or risk, we take a number of these outputs and blend them together. Again, what this does is mute the attack surface so that no one model and no one output is driving the entire decision, thereby lowering the probability and risk, and decreasing the attack surface. So I think it's a combination of both policy and technology that is going to help this securing-of-algorithmic-models discussion gain traction in the application development and cybersecurity worlds.
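A small sketch of the blending idea follows; the ensemble members and input batch are assumptions. Several independently trained models vote, so no single (possibly compromised) model drives the decision on its own.

```python
# Illustrative blending sketch: average class probabilities across an ensemble
# so one compromised model cannot swing the result. Models/input are assumptions.
import torch
import torch.nn as nn

def blended_prediction(models: list[nn.Module], x: torch.Tensor) -> torch.Tensor:
    """Average softmax outputs across the ensemble and return the class vote."""
    with torch.no_grad():
        probabilities = [torch.softmax(m(x), dim=1) for m in models]
        blended = torch.stack(probabilities).mean(dim=0)
    return blended.argmax(dim=1)

# Usage sketch: three models trained on different data slices or architectures.
# decision = blended_prediction([model_a, model_b, model_c], input_batch)
```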
Robb Boyd:
That's interesting. Because I think on one hand I was thinking, oh, you've got machines watching the machines. But this notion of having the outputs working at things from different angles, and you're saying the collective response statistically should yield a better outcome because there's less room for any one model's biases to have an overarching effect on the entire thing, I guess is maybe one way to put it. That's interesting. And because [crosstalk 00:32:18] a decision maker-
Melvin Greer:
I think the bias portion is for sure part of it. But really, what I think we're trying to focus in on is the attack surface and vulnerability of one algorithmic model, right? So if I'm able to compromise a model, and then I use it either in a co-training or a blended environment, the ability of that compromise to wreck the entire system is muted.
Robb Boyd:
So, like an AI version of the defense-in-depth principle.
Melvin Greer:
Kind of like that, I think-
Robb Boyd:
I feel like I'm going to get closer to... Reproducing what you say, Melvin, is not simple. I'm learning the hard way.
Melvin Greer:
Well, defense in depth has a certain associated principle. But I think what I would say is, it's a cyber intelligence capability.
Robb Boyd:
Okay.
Melvin Greer:
Right? So defense in depth might be reactive, whereas what we're really advocating is a cyber intelligence capability that's built into the algorithmic model development exercise.
Robb Boyd:
Well, quite a bit of food for thought. As we wind up here, I just want to get a couple of final thoughts. Gene, I'm going to go to you, and then I'm going to go to Melvin here real quick to have the last word as our guest. But Gene, World Wide Technology has this AI model security workshop that may be a good starting point if someone's a little bit on the fence about where they should get started, maybe a good first step to take. Would you recommend that to our audience here?
Gene Geddes:
Absolutely. And it's very low impact. We just sit down and talk about where the company is bringing in their AI, and just look at where the risk is. So that's one thing. And again, it's just explaining what the threat is, and why they should care. That's basically what it is. At the end of the workshop, we put together a summary, and the customer can decide whether or not they really need to pursue this.
Robb Boyd:
Excellent. Well, I think that one's high on my list. Melvin, I'll let you close this out. I'm just thinking, you've developed curriculum for both security and AI across multiple universities, and you've written books on these subjects. I don't know how you have any time, because you're also a father and a husband. I don't know how you have time for anything. Is there a way for someone to follow you, or is there material you'd recommend for someone interested in learning more about what you've done and what you continue to do and experiment with, that's worth calling out here?
Melvin Greer:
Well, certainly. I mean, you can engage as a part of the Intel ecosystem; we have a very large ecosystem. I think World Wide Technology is a leader in this area, and I think the reason Intel is so determined to partner with them on a strategic level is because they represent the tip of the spear with respect to understanding how to apply this to mission capabilities. And so my recommendation would be to get in touch with the WWT and Intel team with a serious focus on the development of a proof of concept, a demonstrable capability that would help evaluate one or more of your algorithmic models to determine how we might apply a more secure framework to them.
Robb Boyd:
That's perfect. Yeah, I agree. I love World Wide Technology's focus on education, and the multi-vendor aspect that supersedes everything: it doesn't matter where the technology is made or where the knowledge comes from, it's whatever's appropriate to that customer at that point in time. And they continue to build their practice around that. So I do recommend everybody watching this to check out wwt.com. Look for articles from everyone here, as well as, of course, the workshop on AI model security. Definitely worth a follow-up. Thank you so much for watching today. Thank you, Jamie. Thank you, Gene. And of course, thank you, Melvin. We'll see you guys on the next one.