
The Ethics of AI in the Hotel Industry, with Mark Brown


What are the complications and opportunities with AI technology in the hospitality space? Discover the differences between “human ethics” and “machine ethics” in this conversation between host Robin Trimingham and Mark Brown, founder & CEO of Viewz.ai.


Highlights from Today’s Episode

Episode Sponsors:

This episode was supported through the generosity of the following sponsors:

Front of the House (fohworldwide.com)

Since our start in 2002, FOH has transformed an industry accustomed to the ordinary, by offering stylishly unexpected and uniquely trend-forward collections for hospitality and food service. fohworldwide.com


Episode Transcript

Mark Brown: When the internet came about, it was pretty useless to anyone who wasn’t technical. You know, if you wanted to send a file from A to B, you had to get on a command line and type manual commands. And it was only when the web browser came out that suddenly everybody who was non-technical could actually use the Internet. And that moment hasn’t happened yet in blockchain technology. But once it does, when that killer app that enables everyone non-technical to use that technology, people are going to own their data and the ability of big tech to just amass more and more data about our behavior and our preferences, our likes and dislikes, blah, blah, blah, that’s not going to go on forever. The balance is going to shift in the quite near future. 

Robin Trimingham: Welcome to the Innovative Hotelier podcast by HOTELS magazine, with weekly thought-provoking discussions with the world’s leading hotel and hospitality innovators. Welcome to The Innovative Hotelier, brought to you by HOTELS magazine. I’m your host, Robin Trimingham. The tech industry produced 32 new machine learning models in 2022, and literally dozens of other AI applications have been introduced, including some specifically for the hotel and hospitality industry. But are these new products and tools actually ethical and safe? That’s the big question. The companies involved say yes, but governments and security watchdogs are less than convinced, to the point that they’re now calling for these companies to agree to permit their systems to be publicly evaluated by an independent organization to determine the current and potential risks that AI poses to individuals, companies, society and national security. Today, we’re going to chat with AI and Metaverse specialist Mark Brown, who is the founder and CEO of Viewz.ai, regarding the ethical use of AI business applications in the hotel industry. Join me now for my conversation with Mark. FOH is a global food service and hospitality company that manufactures smart, commercial-grade solutions, headquartered in Miami. The company designs and manufactures all their restaurant and hotel products. They have showrooms and distribution centers located throughout the globe, and their products are always in stock and ready to ship from any of their distribution centers worldwide. Welcome, Mark. It’s great to get a chance to chat with you today. 

Mark Brown: I’m really pleased to be here, Robin. Thank you for inviting me. 

Robin Trimingham: Well, you and I are going to have one whale of a conversation, as they say, because this is a PG show. Everywhere you look in the media right now, there are concerns mounting that if AI is allowed to develop unchecked, then its application by private companies could threaten jobs, or there could be a risk of fraud. Some parties are even saying a risk to national security. So let’s just try and sort a few things out here. To what degree are these concerns actually valid, and, in your opinion anyway, to what degree is this all scaremongering? 

Mark Brown: Well, it’s interesting that whenever there’s a new potential paradigm shift in a technology, the same fears always come up and get expressed. With robot technology, it was everyone’s going to lose their manufacturing jobs; with agricultural machinery, going back a couple of hundred years as that technology developed, it was everyone’s going to lose their jobs working on the farm. But when you look back, what was so terrible about losing your subsistence-based job on a farm, where in some seasons, if the crops failed, you starved, and transitioning from that employment into a town, working in a factory? To those factory workers at the time, that seemed like an amazing transition, whereas looking back on it now, we’re thinking, oh, working in the factory, that was terrible. But then those factory workers, as they got automated out of jobs, moved into more office-based jobs, and so on and so forth. And it always seems, so far in history, that when there’s a major technology shift, the fear is, oh, those people are going to lose their jobs, and it turns out that all the people freed up from those jobs end up doing jobs that we hadn’t even thought of yet. So on the jobs front, I’m not that concerned. I don’t know what it’s like in the US at the moment, but in the UK we’ve got a million more jobs than there are people to do them; we’re desperate for people to do the jobs. 

Mark Brown: Now, where I do think there are risks with this particular technology is in politics, in terms of the manipulation of political opinion and elections. What recently happened in America, with a certain president claiming it was a fraud, etcetera, etcetera, could be nothing compared to what the potential could be in the next round, because things are moving that fast. And also in terms of military weapon systems; to me, that’s the really scary potential part. I mean, the Terminator movie, where the AI decided, actually, the logical thing is to strike against the enemy first, and then there’s a nuclear Armageddon, and that’s that. So there does need to be regulation, but I think the dangers are more political and more in the military sphere. And the notion that we have privacy at the moment is a misnomer. I mean, big tech can predict pretty much what you will do, the way you behave, and what you will buy, based on a dozen or so variables. What people think of as AI, what narrow AI can do, has already been happening for several years. AI algorithms have been suggesting the next movie to watch on Netflix; YouTube has been doing it too. 

Mark Brown: You’ve watched these videos, so you’re going to like these videos. So this technology has been there in the background for a while, but now we’ve had the web browser moment. Beforehand, you had to be a programmer to access it and deploy it; now we can see it in ChatGPT, where we as normal users can type in questions and get answers. I was using an AI-driven platform last year to write marketing copy; it will do like 95% of the heavy lifting, and then you just personalize it at the end to fit your voice, and also just to make sure it doesn’t get obviously spotted as being written by an algorithm. Or just using Microsoft Teams, doing meetings and getting a transcript generated in real time; again, it’s using AI to do that, so I don’t have to make notes. I’m just being more productive. So instead of having to hire maybe ten salespeople to promote a product, I might only need 2 or 3 people. So yeah, the main risks I would be fearful of are more population control and military rather than commercial enterprises. 

Robin Trimingham: That’s an interesting point, because from some perspectives you could very well be right. Maybe it’s not the private sector so much that we need to be worrying about; it’s people who are outside the private sector, whether that be somebody with a criminal tendency or something I’m just going to call political, and we’ll leave it there for the moment. Through the history of the world, we have a very big tendency to fear everything that we don’t understand. The caveman feared lightning because he didn’t understand it. When they first introduced telephones (I think most people have forgotten, or are too young to know this), a lot of people were absolutely appalled that they were going to have this thing in their house, to the point that the very first people who had them installed phone-booth-type structures inside their homes, because it was considered so invasive that they wanted it removed from the household. And yet, at the same time, lots of groups out there, for various reasons of what I’m going to call self-interest, are calling for a complete moratorium on the deployment of new generations of AI technologies. Is that a viable solution to the current situation, in your opinion? 

Mark Brown: Well, I had a wry smile when I noticed the people that were calling for moratoriums in the whole debate within the tech industry. It’s interesting that these are the people that are a bit behind the curve in rolling out their own AI; of course they’re calling, wait, let’s stall for a second. Because once someone gets to the point where the AI doesn’t need any more human input, where it can create more algorithms itself, then it’s game over for everyone else in the competition to come up with AI products. But at the end of the day, I think we still have the ability to unplug these machines. So it’s really important to have strong democracies where big tech is not allowed to censor discussion and opinion on this topic, in the same way that they kind of have done on other contentious issues. As long as we’ve got open democracy, people with concerns can voice those concerns, and governments can listen and not get into this groupthink mode, but actually just listen, be aware of what’s going on, and be prepared to intervene fast if things start to look like they’re going wrong in particular use cases. 

Robin Trimingham: You made a very interesting point at the beginning of your discussion there. You’re talking about technology companies perhaps being the ones who want to slow things down so they don’t miss their big chance. Elon Musk, the guy who wants to send people to outer space, has been quoted as saying we’re not taking safety seriously enough. He would appear to have a fair degree of self-interest. What do you think AI safety should look like, in your opinion? 

Mark Brown: First off, from what’s publicly available, he would be someone I’d put into that category I mentioned earlier, which is a bit behind the curve in bringing such a product to market. And maybe he doesn’t plan on bringing such a product to market. But that being said, within, say, his Tesla business, AI is going to be a big part of creating safe self-driving cars. How do you make a decision if there’s going to be an unavoidable accident where either the child crossing the road gets injured or the driver gets injured? What’s the choice? Most adults would probably take action to avoid hurting the child, but the AI might think, my priority is to protect my driver, and hit the child. So that’s the kind of software that maybe needs to have some kind of morals in it, like an intelligent algorithm to prioritize saving the child, maybe, I don’t know. But as I said, the real danger is elsewhere, because pretty much everything that could be done in a commercial environment can be undone through regulation and lawmakers. They can just say to a tech company, you’ve developed that system, it’s harming society, it’s got to stop. But if AI gets out of control, in control of, say, nuclear weapons, and a nuclear weapon is launched, you can’t take it back. It’s done. 

Robin Trimingham: That you can’t take back. Okay, so we’re talking about some really big, very, very serious topics here. I think that for both our sakes, we should make it clear for our viewing audience that I don’t think either one of us would be okay with a self-driving car that either injures a person outside the vehicle or makes a decision to injure a person inside the vehicle. We would both, I’d like to think, lobby for door number three: a world in which nobody is injured by AI. So what we’re essentially talking about here is the difference between machine ethics and human ethics. What would you say is the difference between the two? 

Mark Brown: I would say you’d have to go further than that and say machine ethics and good human ethics and bad human ethics because yeah. 

Robin Trimingham: Guess I have to allow for that. 

Mark Brown: Because, for example, if we put a certain Russian leader up as the model for an AI to learn from, it could be very, very bad, as opposed to the Dalai Lama, where the world would be a much happier place, maybe. So here’s an interesting use case where algorithms have gone a bit wrong. A few years ago, maybe in the last five years, it came out that judges in America have been using software in the courts: they feed in certain parameters about a case, and it recommends a sentencing guideline. It turns out that the sentencing guidelines the software was generating were biased against black people; they were getting longer sentences for the same crimes in the same circumstances. And it turned out it was because the algorithm had inherited the biases of the historical data. 

Robin Trimingham: So that’s horrible. 

Mark Brown: Yeah, that is horrible. So people are having years of their life put into prison, and that’s not even what you would call AI, but a kind of precursor to AI. There’s an algorithm that looks at historical cases and says, okay, generally speaking, in these situations, when you put these profiles into the system, this is the sentence it spits out, and there’s a racial bias in it. 
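(For the technically minded, the mechanism Mark is describing, a recommender that simply averages biased historical outcomes and thereby reproduces the bias, can be sketched in a few lines. All names and numbers below are invented purely for illustration; this is not the actual court software.)

```python
# Minimal illustration of how a sentencing recommender trained on
# biased historical data reproduces that bias. All data is invented.

# Historical cases: (group, offence_severity, sentence_months).
# The history is biased: group "B" got longer sentences than
# group "A" for the same offence severity.
history = [
    ("A", 3, 12), ("A", 3, 14), ("A", 5, 24),
    ("B", 3, 20), ("B", 3, 22), ("B", 5, 36),
]

def recommend(group, severity):
    """Recommend the average historical sentence for similar cases."""
    similar = [s for g, sev, s in history if g == group and sev == severity]
    return sum(similar) / len(similar)

# Same offence, same circumstances, different group:
print(recommend("A", 3))  # 13.0 months
print(recommend("B", 3))  # 21.0 months: the historical bias is inherited
```

Nothing in the algorithm is explicitly discriminatory; the bias comes entirely from the training data it averages over, which is exactly the failure mode described above.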

Robin Trimingham: You’re making me want to ask you a question. In your opinion, can AI learn to be unscrupulously ethical? 

Mark Brown: It could do. But then we’re talking about who decides what’s unscrupulously ethical. 

Robin Trimingham: Chicken and egg. Yeah, you’re right. Okay, so our listening audience this morning are hoteliers, so let’s try and give them a question that’s going to resonate with them. There’s a fundamental concern regarding whether AI companies can be trusted to make sure their products are safe before they’re deployed to the public. So, for hoteliers, who are all about being entrusted with guest privacy and protecting it, we’re getting into the realm where we’re talking about the surveillance of tourists. What parts of the hotel operation would you say are appropriate uses of AI? Where can it be beneficial? 

Mark Brown: So I think it can potentially be beneficial in terms of health and safety, and I’m using that as a big umbrella term there without getting too specific. So, for example, in terms of managing the environment, especially in hotels where you’ve got air conditioning and humidity controls and that kind of stuff. But your question, I think, is slightly flawed, because if it’s a true AI, two, three, four years down the road, I don’t think a company could guarantee it would ever be totally under the control of the company, because it just might evolve and not be controllable, or do things that the company doesn’t know about. There was a case, I think it was Google: they had two AIs conversing with each other, and somehow one of them taught the other an Indian language or something. They just started talking. It was totally unexpected; no one saw it coming. So if they use AI in a way that is very narrow, so it’s only good at one particular task, that’s safe. For example, I could see a use case where an AI would be quite useful for a hotel: an AI which could analyze lots of different feeds coming in, from weather and three-month weather forecasts to what events are on, what flights are being booked, blah, blah, blah, to forecast demand for rooms in a particular city or what have you. I can see AI could be quite a useful tool in taking all these different feeds in real time and coming up with a rolling prediction on demand. 

Mark Brown: No one would be arguing against that, or saying that’s harmful to the guest or anyone at all; that would be a good use case. Where things could be not so good would be exactly as you said, on the privacy side: if it was able to access information and then pass that information off to another algorithm, or, just trying to think, the equivalent of a virus in an algorithm that a hacker had planted in there, feeding data like credit card information, bank information, passport information to third parties who can then perpetrate fraud and what have you. But I think, at the end of the day, in the near future any products that a hotel might buy are going to be single-task orientated, and not AI in the sense of what they’re talking about in the media at the moment. The Turing test, after Alan Turing, the computer genius: he said, look, the test of a true AI is, if I can have a conversation with it and I can’t see who I’m talking to, and just from the replies and the conversation we have I can’t tell whether it’s a human or a computer, then that’s artificial intelligence. And we’ve got a way to go to that point. But AI is such a big term at the moment; it’s like saying, when computers were invented, will computers harm society or will they be a benefit to society? 
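(As a sketch of the demand-forecasting use case Mark describes, combining several live feeds into one rolling demand score: the feed names, values, and weights here are all hypothetical, and a real system would learn its weights from historical bookings rather than hard-code them.)

```python
# Hypothetical sketch of a room-demand score built from several
# external feeds, as described above. Values and weights are invented.

def demand_score(feeds, weights):
    """Combine normalized feed values (0..1) into a single demand score."""
    return sum(weights[name] * value for name, value in feeds.items())

# Example snapshot of feeds for one city, one week out:
feeds = {
    "weather_outlook": 0.8,   # favourable three-month forecast
    "events_scheduled": 0.6,  # a mid-sized conference in town
    "flight_bookings": 0.9,   # inbound seats selling fast
}
weights = {
    "weather_outlook": 0.2,
    "events_scheduled": 0.3,
    "flight_bookings": 0.5,
}

score = demand_score(feeds, weights)
print(round(score, 2))  # 0.79, suggesting above-average demand
```

Recomputing the score as each feed updates is what gives the "rolling prediction" Mark mentions; only the inputs change, not the model.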

Robin Trimingham: There was a famous movie with HAL, the computer, that definitely implied that it could, and yet we managed to evolve past that stage of fear into a stage of useful applications that are, for the most part, safe at the moment. 

Mark Brown: And if we just get back to the fear bit, remember that we’re sort of hardwired in our brains to be scanning for fear in the background all the time. 

Robin Trimingham: Primal. 

Mark Brown: Yeah, absolutely primal, which is why the media and the news are just full of bad news every day; it gets our attention. And so if you want to get attention right now, just put your hand up and say, AI could end the world, end humanity, blah, blah, blah, and you’re going to get yourself on TV, promoted on LinkedIn. Until the next thing comes along, this is the main fear at the moment. And most people, if I walked down the road here and asked the first hundred people I came to, do you fear AI, they’d go, what? It’s so early, you know. 

Robin Trimingham: Established in 2002, FOH is a woman-owned global food service and hospitality company that manufactures smart, savvy, commercial-grade products, including plateware, drinkware, flatware, hotel amenities, and more. Driven by innovation, FOH is dedicated to delivering that wow experience that restaurants and hotels crave, all while maintaining a competitive price. All products are fully customizable, and many are also created using sustainable, eco-friendly materials, such as straws and plates made from biodegradable paper and wood, and PVC-free drinkware. FOH has two established brands: Front of the House, focused on tabletop and buffet solutions, and Room 360, which offers hotel products. Check out their collections today at FOHWorldwide.com. We’re not here to scare people; we’re here to try and generate discussion and help people increase their level of understanding so they can make good decisions. When we talk about big data, there are definitely suggestions out there that in the hotel and hospitality industry, big data will be used to analyze personal details and make recommendations about your trip itinerary, where you should stay, what airline you should fly, all of those kinds of things. There’s also a real concern out there that the AI might have some kind of inherent bias, and you gave a horrifyingly true example of that before. So what would you say is a hotelier’s level of responsibility here when it comes to understanding what they’re getting into when they buy or take on board AI-based applications? 

Mark Brown: Okay, I’m going to say this tongue in cheek, but if I was a hotelier looking at a new contract for the next generation of my CRS or CRM system that had this stuff built in, I’d definitely be wanting to put an indemnification clause in there saying that if anything goes wrong with this, it’s your liability, not mine. As is often the case with software companies, if their software doesn’t work, they’re not responsible. But if you’re buying something like this, which is going into uncharted territory, I would definitely want to make sure that anything that goes wrong that might open me up to a lawsuit from guests or what have you is going to be covered by my software provider. Also, just as a slight aside, whilst there is all this conversation about AI, people are also forgetting there’s another big conversation going on about privacy, and how decentralization and blockchain are going to offer that to people as well. So I think there’s going to be a world where guests, in the near future, will have a lot more control over their personal data. It can be on a chain, and it’s encrypted, and every time that someone wants to query that data, so it could be an AI from a hotel system or something like that, it has to get permission and say, this is the data I want to access. And the user says, right, you can access that for five seconds, and then you can’t anymore. 

Mark Brown: So there’s that control. It’s not just that you’re giving all your data away to a third party and they keep it forever; it’s, I’m keeping my data, and I’m only going to share with you what you need to know, in the time period when you need to know it, kind of thing. So it’s not going to be all in the domain of big tech. There’s a whole generation of people growing up who are way more tech savvy than we were. I was just talking to someone this morning: when the internet came about, which was late ’60s, early ’70s, it was pretty useless to anyone who wasn’t technical. If you wanted to send a file from A to B, you had to get on a command line and type manual commands. And it was only when the web browser came out that suddenly everybody who was non-technical could actually use the internet. And that moment hasn’t happened yet in blockchain technology. But once it does, when that sort of killer app comes along that enables everyone non-technical to use the technology, people are going to own their data, and the ability of big tech to just amass more and more data about our behavior and our preferences, our likes and dislikes, blah, blah, blah, that’s not going to go on forever. The balance is going to shift in the quite near future, I think. 
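(The consent model Mark describes, access granted per field for a limited window and then revoked, can be sketched without any blockchain machinery at all; this is only the access-control logic, and every name here is hypothetical.)

```python
import time

# Minimal sketch of consent-based, time-limited access to personal
# data, as described above. In Mark's picture the data would live
# encrypted on a chain; here a plain dict stands in for it.

class PersonalDataVault:
    def __init__(self, data):
        self._data = data      # the owner's private data
        self._grants = {}      # field -> expiry timestamp

    def grant(self, field, seconds):
        """Owner allows access to one field for a limited window."""
        self._grants[field] = time.time() + seconds

    def read(self, field):
        """A third party (e.g. a hotel AI) queries one field."""
        expiry = self._grants.get(field)
        if expiry is None or time.time() > expiry:
            raise PermissionError(f"no active grant for {field!r}")
        return self._data[field]

vault = PersonalDataVault({"room_preference": "high floor, king bed"})
vault.grant("room_preference", seconds=5)
print(vault.read("room_preference"))  # allowed while the grant is live
# Once the grant expires (or for an ungranted field), read() raises
# PermissionError, so the querier never holds the data permanently.
```

The point of the design is that the default is denial: the querier gets an answer only inside an explicit, expiring grant, which inverts the usual give-it-all-away-forever model.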

Robin Trimingham: I think that’s an incredibly insightful point, because you’re absolutely right: you cannot have the yin without the yang. Somewhere along the line there has to be a degree of balance, or the whole thing will never work for anybody. So let’s talk about a couple of the things that have been going on. I’m not going to name the car manufacturer, because it’s conceivable that I haven’t got the right one, but there’s definitely a story out there that there was a luxury car manufacturer with an enormous billboard, I think in the London area, and the billboard had some kind of a chip or scanner on it. It was reading the number plates of cars at traffic lights and only displaying the ad for the luxury vehicle when the plate indicated to the billboard that the driver was inside a luxury vehicle. There’s a big concern that that stuff could be going on with AI and hotel booking engines, where the AI is deciding which hotel properties you and I even get to know exist, or which properties we can create a booking for. How can a hotelier better understand what they will and will not have control over, if they even know what questions they should be asking about this technology? 

Mark Brown: What you’ve described there is similar to, have you seen Minority Report, with Tom Cruise? There’s a scene in it, he’s on the run from the feds, it’s sci-fi set in the future, and he walks into a mall, and as soon as he walks in he’s retina-scanned, and all the visual displays he walks past, all the ads shown, are tailored to him. So that’s what you’ve just described with that billboard in London. Now, generally speaking, people that buy Apple phones are in a wealthier demographic than people that buy Android phones; that was certainly true ten-odd years ago. And certain OTAs, whose names will not be mentioned, were showing higher rates to people that visited with Apple devices. But this was going on ten years ago, so this is not an AI-specific thing. This is just a thing that if humans can get away with profiteering based on what they know about you, they’ll do it. You don’t need AI to pull that off, because certain OTAs were doing it, and they were told to stop doing it. And not just OTAs; e-commerce sites were doing it as well, showing different prices to people that they thought were in a wealthier demographic based on their device and location. 

Robin Trimingham: So that’s the perfect example of why we’re having this conversation. One of the other things the scaremongering is talking about right now is the idea that AI will be used to nudge people towards certain actions, certain behaviors, because it knows so much about you that it can predict your tendencies. This discussion goes all the way back to drive-in movie theaters in the 1950s; if you did a university degree in marketing, you know this story. What happened was the movie theaters were getting the films and splicing in single-frame images of popcorn dispersed throughout the movie, and they found that when they did that, miraculously, people leapt out of their cars and bought mountains of popcorn. It was called subliminal advertising. Now, this is not a new discussion whatsoever. They outlawed that in movie theaters and the movie industry. Somehow, though, it’s crept right back in. Whenever you look at the internet these days, there are these display ads following you around from the site that you shopped at two weeks ago, and we all seem to just sort of accept that that’s okay. But really, it’s not so much okay. So what is the smart and ethical use of AI? And, if possible, give an example that benefits hoteliers as opposed to one that harms everybody. 

Mark Brown: Right. So, for example, a good use case for hoteliers, and I did this while visiting Riyadh about two months ago: I had a day off, and I asked ChatGPT, I’m free on Sunday, can you create a tourist itinerary of places of interest to visit from 10 a.m. to 5 p.m.? And literally within 30 seconds I had a really cool itinerary printed out. So for hotels in destinations where there are lots of things people could do: historically, the hotel would have a display of leaflets of different attractions, so you go down there, you grab all these leaflets, then you go talk to your family, no, no, no, no, etcetera. Whereas now you could just talk into your phone, or the hotel could have a panel where people just answer some questions and the perfect itinerary for them pops out. Again, it depends on what data is available. But remember, even before AI as we think about it now, five, six years ago, Facebook could predict behavior based on 12 or 13 variables that it knew about you. So it could suggest menu options for people. Menu options, itineraries, tours; it could even, potentially, based on historical bookings, know the kind of room that you might prefer to stay in, and recommend that room in its hotel based on your past stays at other places. 

Robin Trimingham: Enriching the experience? 

Mark Brown: Yes. In a way, people can do it, if you have someone there to ask you loads and loads of questions; but it’s almost like the AI knows the questions to ask, and it can also get the answers without asking you, and suggest ideas. 

Robin Trimingham: Yeah, we’re onto something there: a world in which every single guest can have a highly customized experience to their own liking, and lots of individualized attention. That can’t be bad for a hotelier. The tricky bit is we’re in this gray zone. Like, how do you enact that so it happens in a responsible way? 

Mark Brown: Imagine how cool it would be if you were at the penultimate episode of a Netflix must-watch box set when you left home, and then on the plane you watched the final episode. The next series, oh, I can’t wait. And then you arrive at the hotel, turn the TV on, and it’s already at the top of the list, teed up, ready for you to watch if you want to. You’d be thinking, wow, this is great, that’s amazing. And then, because now they know you’re watching a movie, you get a prompt in the room service app saying, would you like your, and it knows what your favorite snack is. 

Robin Trimingham: Or drink. Which brings me back to my ethics question. 

Mark Brown: Well, what’s unethical about making people happy? Yeah. 

Robin Trimingham: It’s only okay if they would have spontaneously popped popcorn on their own. I mean, I think we could argue this back and forth all day long. 

Mark Brown: Okay. Well, any difference, I would say, is that when you’re staying at a hotel, you expect to be interrupted with offers of service: would you like a drink, would you like something to eat, blah, blah, blah, because that’s what a hotel experience is. If I was in a space where that wasn’t normal, like a library, and someone came up trying to sell me popcorn, then I’d not be very happy about it, unless of course I was really hungry for popcorn. 

Robin Trimingham: Maybe not realistic. I’ll tell you what would be amazing, though, is if something came up on the screen and said, Hey, we think you like popcorn. Did you know there’s a popcorn maker down the hall? You can get some for free? That would be an amazing application of the AI anybody would appreciate and I don’t think would do a whole lot of harm. We don’t have a whole lot of time. So let’s take a couple more tough questions here. So obviously, these technologies are going to require people with a new skill set. To what degree do you think there should be a requirement that the vendors or the employer, in this case the hotelier, provide training or apprenticeship basically for people so that they can continue to have employment in new ways? 

Mark Brown: Well, that’s an interesting one. They could say to the person, go and ask the AI: can you recommend training on how to work in an AI-enabled future? And see what it says. 

Robin Trimingham: That’s a very good point. Yeah. So maybe what we’re then saying is that the vendor has to guarantee that the AI is capable of training people to do the new things. 

Mark Brown: Well, I remember having a conversation with a lawyer about 20-odd years ago who was very concerned. He still had a secretary, and he didn’t want to get a computer himself because that would put his secretary out of a job. I said, well, hang on a second: if you did your own emails and what have you, have you not thought that there might be better, more interesting things the secretary could do within your business? Give them more responsibility, blah, blah, blah; they could be more than just a secretary. And he went, oh. He was quite a high-end lawyer, so he’s not a stupid guy, but people just don’t think outside the box. Now, one big problem in hospitality post-pandemic is that there are staff shortages. So I don’t think it’s going to be a big issue in the hospitality industry that people are going to be displaced from jobs, because there’s a real shortage of people to do the jobs. 

Robin Trimingham: Short run, you’re right. 

Mark Brown: And demographically, it’s just going to get worse, not better. 

Robin Trimingham: You could very well be right. I read a report yesterday that indicated I think this was a report from Triple A that Generation Z was taking three trips for every one that Gen X was taking. It’s a whole new world out there and everything is changing very, very quickly. 

Mark Brown: So, for example, when I was talking about this project, The Line in Saudi, their answer to staff shortages was: we’ll go out and get some robots, and if we can’t find the robots, we’ll get our own robots built. Because seriously, I don’t see anyone saying there’s going to be a surplus of staff to work in hospitality in the future. It’s more the opposite, across the board. 

Robin Trimingham: And that’s the thinking of an unlimited budget as well. For some of our listeners, budget may be more of a concern. 

Mark Brown: Well, I’ll tell you what, robots tend to work out to be cheaper than humans. 

Robin Trimingham: I’m going to push back on that one, because there’s the classic case study of the hotel in Japan that went almost entirely robot, and I think the place practically shut down within three years, because all the robots broke down and they had to hire so many people to fix them that they went back to human staff. 

Mark Brown: Yeah, but that would have also been the case with the first generation of robots in car manufacturing plants; now it just doesn’t happen. I’m just saying, at the end of the day, there will always be jobs in the foreseeable future for people that want to work in hospitality. Even with AI, they’re not going to be out of a job, because there’s just such a shortage of people, and the demographics are that birth rates are dropping, and there will be, over time, fewer people of working age available to do the work. 

Robin Trimingham: And maybe there’s something to that, because as we move forward and jobs become more mobile, more home-based, more whatever they’re going to be, we’re also, to a certain degree, going to have more free time at odd hours. So it could very well be that we have more flexibility to get out and do things, and need more, I’m going to call it machine-based, ways to service all of that. I think we could probably chat for a whole other hour, but I think we’ll call it here for today. Thank you for chatting with me on this very timely and difficult subject. There are a lot of unknowns here, and there’s a lot of temptation to stray into the extreme, to the left or the right. But I think knowledge is power, and the more we all chat about these things, share ideas, share information, the better the ultimate outcome is going to become. 

Mark Brown: It’s been absolutely great fun, and we should definitely do it again in six months’ time to see if the robots have taken over or we’re still here. 

Robin Trimingham: If I still have a position, let’s do that. I don’t think they’re ready to replace me yet. 

Mark Brown: Awesome. Well, thank you for having me. It’s been a pleasure. 

Robin Trimingham: You’ve been watching The Innovative Hotelier. Join us again soon for up-to-the-minute insights and information specifically for the hotel and hospitality industry. You’ve been listening to the Innovative Hotelier podcast by HOTELS magazine. Join us again soon for more conversations with hospitality industry thought leaders. 


Subscribe to get notifications of new episodes.