6 'P's in AI Pods (AI6P)
🗣️ AISW #061: Michael Tisher, Japan-based university professor


Audio interview with Japan-based university professor and musician Michael Tisher on his stories of using AI and how he feels about AI using people's data and content (audio; 27:37)

Introduction - Michael Tisher

This post is part of our AI6P interview series on “AI, Software, and Wetware”. Our guests share their experiences with using AI, and how they feel about AI using their data and content.

This interview is available as an audio recording (embedded here in the post, and later in our AI6P external podcasts). This post includes the full, human-edited transcript.

Note: In this article series, “AI” means artificial intelligence and spans classical statistical methods, data analytics, machine learning, generative AI, and other non-generative AI. See this Glossary and “AI Fundamentals #01: What is Artificial Intelligence?” for reference.


Photo of Michael Tisher, provided by Michael and used with his permission. All rights reserved to him.

Interview - Michael Tisher

Karen: I’m delighted to welcome Michael Tisher from Japan as my guest today on “AI, Software, and Wetware”. Michael, thank you so much for joining me on this interview! Please tell us about yourself, who you are, and what you do.

Michael: Yes, I'm an associate professor of math, computer science, and statistics with UMGC, which used to be called UMUC. I've been there since 1999, which was a time when wetware was most likely prominent.

And then, after working in the computer field for a few years and tutoring math on the side, I decided to make the leap from tutoring to teaching and have enjoyed it immensely. And I'm also a musician. I play violin, viola, piano, a little percussion. And I'm currently involved in two orchestras where I play violin in one of the orchestras and viola in the other. And I started that since I was a child, since I was eight years old. I started violin.

And also about that time, I had a brother who I lived with, who I shared the room with, and he always had the radio on. And I would always ask him, “What is this song, and what is that song?” And he could tell me. But then when he left for college, I couldn't ask anymore, of course.

But then I discovered this DJ named Casey Kasem, who would track the top 40 hits. And I thought, wow, every song has a number. That's pretty cool, you know? And then, to make a long story short, I continue that research every week on the songs of the week, and I do little statistics on them, and it keeps me pretty busy. So, that's what I do.

Karen: That's a very cool hobby. I had an earlier guest. He's a musician and a data analyst by profession, and he writes a Substack newsletter. And one of the things he writes about is that he often analyzes trends in songs over the years, like how long songs are, or when musicians started introducing bridges to their songs, and things like that. So if you ever publish your analyses, I would love to read them.

Michael: Yeah. I'm not a publisher, but I can share some ideas later on, of what I do, privately perhaps. Yeah.

Karen: Sure, yeah, that'd be interesting. So tell me a little bit about your level of experience with AI and machine learning and analytics, whether you've used it professionally or personally, or have you studied the technologies?

Michael: Yeah, I use it both professionally and personally, AI. I'm not much into machine learning. I did take a class in machine learning and it was very interesting. There's so much to it. It was very interesting to learn, but I’m mostly into AI.

And, many years ago at the university I'm at, we would have classes twice a week. And so students would come and learn all the material. And then a few years ago, they decided to switch from face-to-face to hybrid courses, which means half online and half face-to-face. And the good news, for the student, is that they only have to drive once a week to class, rather than twice a week.

But for me, I lost one day in the week, so I had to figure out all kinds of ways to get the material to them in one less day a week. And AI fortunately has been one of them. There's all kinds of calculators out there, like mathpapa.com and wolframalpha.com. In class I make use of those calculators.

So mathpapa can actually solve an algebra equation, for example. And then I ask students, “Can you tell me how they got this step or how they got that step?” And for the homework, I actually allow them to do that because it does save a lot of time and we can do a lot more, especially in class.

So I'll just give you one example in statistics. So in statistics, I like to do simulations where a student rolls a die virtually a thousand times. And I have them record what percent of the time, say, the three comes up. And then I collect all their data, and then make a graph of it, with AI. So it has really helped out a lot, with having just one less day a week to deal with.

Karen: I remember doing those kinds of experiments when I took statistics in college, but not with AI. That would've been interesting to have it help with some of the simulations and such, using it for generating graphs as well.

Michael: Oh yes, definitely generating graphs, and I like to use it to look at trends. For example, there's another exercise I do where I have them find lengths of curves, and I have 'em do like a thousand curves. You can't do this by hand. But with AI you can find lengths of a thousand curves, and then you can see a trend in the lengths of those curves. I like to do a lot of things like that, you know, in just the short time I have in class with them.

Karen: Yeah, that's really interesting. One thing I remember hearing when LLMs were first coming into wide use was that they really sucked at math, but some of the other specialty math-oriented tools like Wolfram Alpha were able to integrate with them and overcome that limitation. Is that your experience? Are the tools that you're using generally pretty accurate?

Michael: Well, there are some hallucinations, like for Copilot. For example, one time I asked Copilot, “What is zero to the zero?” And it told me the wrong answer. But then there's more specialized AI tools like wolframalpha.com that specialize more in math. And so you're more apt to get the right answer with that.

And so I asked wolframalpha.com, “What is zero to the zero?” And it told me the correct answer, which is “indeterminate”. I went back to Copilot later on and I asked, “What is zero to the zero?” And Copilot apologized to me and said, “Oh, I made a mistake. And it's really indeterminate.”

And so it's interesting how it seemed like all these tools learn from each other. Which is good, right? The whole thing with AI, whenever we use it to learn to get the right answer, we can all strive to get the right answer eventually and learn the right way rather than the wrong way.

Karen: So, I'm curious, when Copilot gave you that wrong answer on zero to the zero, did you give it thumbs up or down feedback? Or any other way that you basically told it, “no, you got that wrong”?

Michael: Oh, I didn't tell it that. No, I just went to Wolfram Alpha, and then I went back to Copilot and it apologized to me that way. 'cause maybe it saw me do it on Wolfram Alpha or something like that, or somehow got information from that.

Karen: Oh, okay. Yeah, I'm always curious about that. One of the things that I think we don't always realize when we use these tools is that we are using them and giving them information. But we're also indirectly giving them information based on how we rephrase our questions or how we ask them, or sometimes it is with a thumbs up or down on how good the answer was. Things like that. So we're actually providing additional training input and sometimes that is obvious and sometimes it's not.

Michael: So, I'm part of a committee where we're trying to make guidelines for how to use AI in our classes. And there's a group of faculty with various disciplines like English and math. But AI is always changing, so sometimes it's kinda hard to make guidelines. But we're trying to do our best and come up with ways to mitigate the situation.

Karen: That's one thing that I've heard about from a few interview guests, who are either teaching or are students. And there's a lot of confusion generally around what the AI policies of the school are. It can be consistent across the school. It can vary greatly for each individual teacher. And so students are navigating that, and teachers are as well.

Michael: Yeah. Yeah. So now we have to put it in our syllabus, that we have to be open to using AI in class. And I think now we should actually have at least some lessons on using AI. So some faculty and I, on the first day of class, talk about AI to our students. And we tell 'em, you know, sometimes it gets the wrong answer. And so please make sure it's not a substitute for your wetware. Instead, take it in and then maybe do some revisions or some additions to try to strive for the correct answer. And also, I usually have my students cite their sources as well.

Karen: So this is all about your professional use of it. You had mentioned that you also use it personally. Can you say a few words about that?

Michael: Oh, yes. So, years ago I was diagnosed with sleep apnea and I got a C-P-A-P [or CPAP] that monitors my sleep now. When I had sleep apnea, I would fall asleep during driving and I had circulation problems.

And so when I got the CPAP that seemed to solve all that. And then sometimes when I take the CPAP off, I'll get an email from the CPAP place, saying, “How's your machine? Do you think you need to clean it? Or do you think you need to adjust this or that?”

And so I'll look into it and try to make it better. So far, it's been very helpful. I haven't had any medical problems since the CPAP, even though I don't like to wear it. You ever seen one of those? It's like wearing a space suit, okay. So you have to really learn. I hate it and I want to take a break from it. But I fear if I take a break from it, I might have bad problems again. So I guess I'd better wear it.

Karen: That definitely makes sense. So you also mentioned that you've been in Japan for quite a long time, and I'm wondering, have you learned Japanese, and do you use the AI tools in or with the Japanese language?

Michael: Yeah, certainly. AI tools and the internet have gotten better about translation. I remember 20 years ago I tried to use some translation tool to translate from English to Japanese. And a Japanese person had no idea what I was trying to say.

But now, being part of Japanese orchestras, I get emails in Japanese. And so I use Google Translate and even Copilot, to translate what's there, and then I can instantly understand what they're trying to say.

And so it's very interesting that you can have a friend who, you don't share the same language, but you can still carry on the conversation and get to know each other that way, to some extent. I know it's not perfect, right? Because there's some idioms out there that might not be translatable or there's a lot of things that are not translatable. But maybe one day AI will get smart enough to deal with idioms eventually, I think.

Karen: So that's a great experience. I'm also wondering, you mentioned being a musician that plays multiple instruments. Have you ever tried any of the AI tools for various aspects of music?

Michael: Yes. So I mentioned earlier that I'm always interested in “what song is this playing, or what song is that playing?” And so now I have an app on my phone that I can ask directly, “What's the name of this song?” And then it'll listen to it and tell me what it is. And I'm so impressed.

I wish I had this thing when I was a kid. There's some songs as a kid that I heard that I still have no idea what it was. Fortunately the countdown shows and some other books that I've had will tell me. But now with the tool, I'm totally in tune with what music it is.

And also, I don't use AI tools for performing per se, but I like to use them to learn music. There are some apps out there, like ChordCreate. So if you want to ask it to play a C chord, it'll play a C chord for you. Or at least it'll tell you what the notes in the C chord are. So that's very helpful. Especially when you write music, you can even get it to generate certain chord progressions.

There's also a tool called Mazmazika. It identifies the chords used in a song. And you can also use it to write sheet music based on a YouTube music file, which is pretty cool. Yeah.

Karen: That is cool. So you mentioned recognizing the songs. Did you ever watch that show, “Name That Tune”? That's what these apps remind me of.

Michael: Yeah, that was very popular when I was a kid. But unfortunately those songs were out way before my time, so I never could identify them. So I lost interest. It is not on the air anymore though, I don't think. But there have been later versions, and I think in later versions I would know what the songs are.

Karen: Yeah. So those are great stories about how you've used different AI and machine learning features and tools. I’m wondering if you have avoided using AI-based tools for some things or for anything. And if you can share an example of when and why you chose not to use AI?

Michael: Sure. When AI first came out, of course, many people were adamantly against it and didn't want to use it. And I had this one colleague who tried to avoid it, because he showed me something about how, when you use AI, it uses, like, lots of electrical energy. We should try to avoid it, especially when everybody around the world uses it. But so far all the lights are on, so maybe it's not so bad then.

And speaking of lights on, I recently got solar panels on my house, and I have AI that monitors the electricity consumption in my house.

Karen: That's very useful. So you had mentioned that you use AI to help you with course materials and preparations, and use it in the courses. I'm curious if you've ever used AI for tasks like writing correspondence or recommendation letters?

Michael: Uh, sure. I've always been a pretty good writer, so I've never really depended on Grammarly or anything of that nature to write stuff for me. My father was a great writer, so he taught me a lot of writing and how to write letters and research papers, stuff like that. But I like to use AI as a second opinion. So I'll write something and then I'll ask AI, “Could you write the same thing?” And then I can compare notes, and then from there, see if I should revise what I have, or go with what I have. And so that's what I like to use AI for in that regard.

Karen: Have you ever tried any of the AI image generation tools?

Michael: Yes. So years ago, coronavirus was another thing that caused us to have classes online. They were totally online for a while. And so with AI and coronavirus, I had to constantly think about, how do I get the material across to students? How can I teach it to them? How can I keep their attention?

And one of the things I learned was something called “Renderforest”, which is, you enter in a statement of what kind of picture you want or what kind of video you want, and it'll make a video or picture about that for you. Like, for example, generate a lady going into the store. And so it'll give me all kinds of different options that I can choose to have.

Karen: Okay. Well that's very interesting.

Michael: Yes.

Karen: And how did you use that story of the lady going shopping? How did you use that in your course, if you remember?

Michael: Yeah, so I had the problem set: given the price of the item and the sales tax rate, compute the sales tax on this item. And so I had this lady go into the store to buy a mouse, actually, a computer mouse.

Karen: So one concern - you had mentioned the environmental impact of AI, and that's definitely a consideration, although some of the data on that has been misstated at times. And so I think that leads to a lot of confusion about the actual impact on power usage and water usage and such.

But another concern comes from where these AI companies get the data that they use for training their tools. And in a lot of cases they use data that users or people like us have put into these online systems and published online. And the companies are not always very transparent about how they plan to use our data when we sign up.

So I'm wondering how you feel about companies that use people's data and content for training their AI and machine learning systems and tools. One of the principles that comes up often is a proposal: the companies should be required to follow what we call the three Cs to get consent, to give credit, and to compensate people for use of their data and their content [the “3Cs Rule”]. I'm wondering what you think about that.

Michael: Yeah. Well, because AI is really interlinking us all together, sometimes it may be hard to give credit when credit is due. But when possible, if we're like using another musical artist's passage, you know, we should ask them for consent if we can use it in our AI creation. And if not, then we need to find some alternative way. Also I have my students really, you know, cite your sources. Like if you're going to use Wolfram Alpha, please cite Wolfram Alpha or whatever, whenever you use it.

Karen: Music is an interesting case, because there are a lot of lawsuits that relate to use of data for music. The companies that have scraped books, for instance, like the eight terabytes of book content that Meta had scraped. Or the companies that basically stole all the YouTube videos, pulled the music and the video from them, and used that for training without any compensation to the musicians and the people that made the videos.

Michael: Yeah.

Karen: I think there are currently 39 very active lawsuits on this in the US alone, involving some of the very big music labels and other companies.

Michael: I remember years ago when we had LimeWire and Napster. Oh my God, there were so many lawsuits and so many litigation problems. But now it seems to be less, I guess. Because what I basically see is, something is on YouTube and you can record it now, no problem, like with Audacity. So it seems to be easier to get away with that now, unfortunately, you know? So yes, it is an ongoing problem. It is getting worse, I think.

Karen: The one comment that I hear a lot is that, you know, if you or I steal an individual person's song or book, that's theft. But if you do it at huge scale, it's fair use!

Michael: Huge scale. It's what, again?

Karen: What they call ‘fair use’, the legal term. It's a loophole that some of the AI companies are trying to exploit to say that it's fine for them to take everybody's content and use it without getting their consent or crediting or compensating them.

Michael: Yeah, it's very interesting.

Karen: It's maybe a US legal principle, this concept of fair use. I think there's something similar in other areas, maybe in Europe, but I don't know. I'm sure it doesn't extend to all jurisdictions. There's so much variety around the world.

Michael: Yeah, I'm sure there is.

Karen: So as someone who has used AI-based tools, do you feel like the tool providers have been transparent about sharing where the data came from that they use for the AI models and whether the original creators of the data had a chance to consent to its use or not?

Michael: Well, I did ask Copilot about that, and here's what it says.

"I have several tools at my disposal to assist you better. Web search: I can look up current and relevant information from the web. That was number one. And number two, image understanding: I can analyze and understand images you share with me. And number three, image generation: I can create images based on your descriptions or requests. And then it says, feel free to ask me anything and I'll do my best to help."

And then I asked Copilot if it cites anything. And here's what it writes.

"Absolutely. I always provide citations to the sources I use, and you'll see these as numeric references, et cetera, et cetera. Is there something specific you'd like to know more about?"

And then I asked Copilot, how many groups of order 64 are there? And then it tells me the right answer, but it doesn't cite anything. So sometimes it cites it, sometimes it doesn't.

And the first one, I wasn't clear on. That was when I asked Copilot what tools it's been trained on, and it told me those three things that I gave you: web search, image understanding, image generation.

Karen: Yeah. As you mentioned, if it's not giving you the sources, then it's not really being very transparent about that.

Michael: But yeah, like I said for math, when it told me zero to zero is one, which is incorrect, it didn't cite anything. It didn't give me any sources. But again, there's math-specific sites out there, like Wolfram Alpha and mathpapa, that are more transparent, and since it's all about math, they're more apt to give you the correct answer. So thankfully we have those.

Karen: If it had tried to give you a source for the zero to the zero answer, it would obviously have been a hallucination!

Michael: Well, yes, absolutely. Great thing with math, it's either right or wrong, you know.

Karen: Well, those are great observations about the tools and their transparency, so thank you for sharing all that.

Michael: You're welcome, Karen.

Karen: So as consumers and members of the public, our personal data and content have probably been used by AI-based tools or systems. Do you know of any cases that you could share, obviously without disclosing any personal information? Some examples might be photo screening at the airports, biometrics, social media sites, and things like that.

Michael: That's a very good question, Karen. I don't know of anything. There probably has been, but I don't know of anything like that. It'd be interesting to find out though, like if there's something like that. Or maybe you know something about me there, Karen, or maybe you've seen?

Karen: Oh, no, I didn't - I didn't try to ask Copilot about you!

Michael: Okay.

Karen: No, no. I like to let people present themselves for what they know. I'd like them to be able to share what they want to share.

Michael: Understand.

Karen: Do you know of any company that you gave your data or content to that made you aware that they might use it for training the tool?

Michael: I never heard of that. No, that's a good question though. I wonder about that.

Karen: You mentioned Grammarly, but you said you don't use that, right?

Michael: I don't use Grammarly. I mean, I've looked into Grammarly to see how it could help my students. But I don't use it personally. And every time it turns on, I just turn it off.

Karen: Yep, yep.

Michael: But at the same time, when I get a second opinion, I'll ask Grammarly, “How should I word this? How should I word that?” So I'll use it occasionally just for a second opinion if I'm not sure about something.

Karen: Is there any time when a company's use of your personal data or content has created any specific issues for you? Like privacy or phishing or loss of income?

Michael: I haven't heard of that either. That'd be great to find out. So far I'm pretty on top of all my accounts and information and everything seems to be going fine. So I think I'll be okay. Hopefully nothing happens, knock on wood.

Karen: I hear a lot about GDPR and the Digital Markets Act in Europe. Do you know what the Japanese privacy regulations are? Can you say a little bit about that?

Michael: As for Japan, there's something called the APPI, the “Act on the Protection of Personal Information”, which regulates issues of privacy protection. And there's also something called the PIPC, the “Personal Information Protection Commission”. That is a central agency that acts as a supervisory governmental organization on issues of privacy protection.

The APPI was originally enacted in 2003, and possible amendments now include an administrative monetary penalty system in addition to the current fines, as well as the establishment of systems for injunction claims and remedies for damages initiated by organizations like qualified consumer organizations.

Karen: And those acts obviously predate AI breaking into the mainstream a few years ago.

Michael: Yeah, definitely. Yeah. And with some of these acts, I feel confident that my information is held secure and held privately. I think.

Karen: That's great to hear. Yeah. So one thing that we do hear a lot is that public distrust of these AI and tech companies has been growing. I'm wondering how you feel about the extent to which you trust these companies, and if there's something that you feel they should do differently, one thing that they could do or should do to help build and keep people's trust.

Michael: As long as we use AI for the good of the world and to help each other, then I'm pleased with that. But we could use AI in the wrong way, and perhaps train things to do bad things. Like, here in Japan we have these robot waiters and waitresses that deliver food and it's very nice. But it would be really bad if the robot decided, you know, “I'm going to take that drink that you ordered and put it in your lap”, or something like that. So hopefully nobody will train something to do that. So as long as we train it for the right reasons and the right way, then it's good.

Karen: Okay.

Michael: But if we train the wrong way, I think there should be laws and provisions against that, and there should be penalties involved with that, I think.

Karen: Yeah. We've seen a lot of the robots in Japan. One case that I've seen is training them to try to help take care of elderly people who need assistance in their daily lives. There certainly seems to be some value in that. I'm not sure about making them look humanoid, but that's personal preference maybe.

Michael: Yeah. I actually have an 87-year-old mother-in-law who needs a lot of assistance. And it would be great to have those robots around. But at the same time, my mother-in-law is not into technology at all. She doesn't know anything about AI. And so she probably would just get more confused and more irritable with AI. I'm not sure with machines. But fortunately she has people helping her every week, for various things.

Karen: That's great to hear. And I think that's a good observation about how comfortable people will be with that kind of technology. The people who are growing up with it now will feel more comfortable in the future, probably.

Michael: Yes. I hope so.

Karen: Well, thank you so much for making time for this interview and for talking with me late in your evening. Is there anything else that you'd like to say or to share with our audience?

Michael: Yes. I have a YouTube channel. So if you want to learn more about math and computer science and statistics, please go to my YouTube channel, Michael Tisher. I created this YouTube channel not to make money, but to help students and learners with these different topics. And, fortunately, after the class ends, the YouTube channel is there forever, for any former or current student, or anybody on YouTube. Okay, so it's there for the taking.

Karen: Awesome. I'll have to check that out and we'll include the link in your interview so that anybody who wants to can go check out your videos.

Michael: Oh, it sounds great, Karen. I appreciate that. Thank you so much.

Karen: Awesome. Alright.

Michael: And, as the Japanese say, “Domo arigato”.

Karen: That is one Japanese phrase I've heard enough to recognize!

Michael: Yeah. It's in a popular 1983 hit song. Do you know that hit song?

Karen: No, I don't think I do.

Michael: There's a song called “Mr. Roboto” by Styx. You heard that song? [story about the song is here]

Karen: I know Styx, but I don't think I know that song. Isn't that funny?

Michael: Okay.

Karen: Now I have to go check it out.

Michael: Yeah. Check it out. You'll probably recognize it when you hear it. It got to number three on the chart, so it's been pretty popular.

Karen: Awesome.

Michael: But yeah, they say “Domo arigato” in this song. When I was a kid, I didn't know anything about Japan or Japanese and that was the farthest thing from my mind. You know, I never knew I would live here for 25 years. And I'm used to hearing the music and not the words. So I never learned the words until later. Then when I learned the words, I was shocked. Like, oh, there's actually Japanese in there. I didn't know that all this time. You know?

Karen: Oh, that's cool. I will check that out. Well, thank you so much!

Michael: No problem, Karen.

Interview References and Links

Michael Tisher on LinkedIn

Michael Tisher on YouTube



About this interview series and newsletter

This post is part of our AI6P interview series on “AI, Software, and Wetware”. It showcases how real people around the world are using their wetware (brains and human intelligence) with AI-based software tools, or are being affected by AI.

And we’re all being affected by AI nowadays in our daily lives, perhaps more than we realize. For some examples, see the post “But I Don’t Use AI”.

We want to hear from a diverse pool of people worldwide in a variety of roles. (No technical experience with AI is required.) If you’re interested in being a featured interview guest, anonymous or with credit, please check out our guest FAQ and get in touch!

6 'P's in AI Pods (AI6P) is a 100% reader-supported publication. (No ads, no affiliate links, no paywalls on new posts). All new posts are FREE to read and listen to. To automatically receive new AI6P posts and support our work, consider becoming a subscriber (it’s free)!


Series Credits and References

Audio Sound Effect from Pixabay

Microphone photo by Michal Czyz on Unsplash (contact Michal Czyz on LinkedIn)

Credit to CIPRI (Cultural Intellectual Property Rights Initiative®) for their “3Cs' Rule: Consent. Credit. Compensation©.”

Credit to the creator of the “Created With Human Intelligence” badge we use to reflect our commitment that content in these interviews will be human-created.

If you enjoyed this interview, my guest and I would love to have your support via a heart, share, restack, or Note! (One-time tips or voluntary donations via paid subscription are always welcome and appreciated, too 😊)

