How Generative AI Works, as Told by a PhD Data Scientist


On the Built Right podcast, generative AI is always on the agenda. But we thought it was time to hear the thoughts, opinions and predictions of a PhD data scientist and take a deep dive into the science behind it.  

We invited Nikolaos Vasiloglou, Vice President of Research ML at RelationalAI, to share his thoughts on how far generative AI will advance, give us an in-depth look at how knowledge graphs work and explain how AI will affect: 

  • The job market 
  • The future of reading 
  • The social media landscape 

 

Plus, he explores the main differences between generative AI and regular AI. 

Continue reading for the top takeaways or listen to the podcast episode for more. 

The difference between generative AI and regular AI 

The term ‘generative AI’ is everywhere. But what does it really mean, and how is it different from regular AI? 

For many years, the distinction was the depth of the models: shallow statistical models versus deep learning. As things continued to advance, people found themselves with powerful models they weren’t sure how to scale. 

Out of this recent revolution emerged OpenAI. They began feeding large amounts of data into a transformer (a deep learning architecture first proposed in 2017) and trained it on a simple but useful task: predicting the next word in a sequence. 
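To make that concrete, here is a minimal sketch of next-word prediction using the small, openly available GPT-2 checkpoint. It assumes the Hugging Face transformers and PyTorch packages and is illustrative only, not how OpenAI's production systems are implemented.

```python
# Sketch: ask a small language model for its next-word distribution.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# The distribution over the *next* token sits at the last position.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(token_id)])!r}  p={prob:.3f}")
```

Everything a chat model does is built on repeating this one step: predict a token, append it, predict again.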

Nikolaos explains that the main difference between generative and traditional AI is the focus on language. Language is the primary determining factor in human intelligence, which explains why language-based AI products are among the most used right now. 

Will AI beat the human brain? 

As generative AI progresses, people continue to ask questions around its limits. So, will AI ever match or exceed the capabilities of the human brain? 

Nikolaos believes AI is a “great assistant” and can do plenty of things more quickly and more efficiently than humans.  

He explains that with every technological advancement, the next generation of companies needs fewer engineers. Compare Ford’s workforce when the car arrived with the headcounts of Microsoft, Google, or OpenAI today: each captures a bigger share of the market with fewer people. 

However, he does think there’s a long way to go until the robots revolt! Nikolaos says there are plenty of things the human brain can do that won’t be challenged by AI any time soon. 

For example, a human being can eat a pizza while performing complicated mathematical computations; doing the equivalent with GPT-4 would take the power of a small village. Humans are very energy-efficient, and although our neural signals take milliseconds to transmit rather than a GPU’s nanoseconds, we still do many things faster.  

What is a knowledge graph and how does it work? 

A knowledge graph is a collection of interlinked descriptions of entities, used to enable data integration, sharing and analytics. 
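To make the idea concrete, here is a toy sketch of a knowledge graph as (subject, relation, object) triples in Python. The entities and relations are invented for illustration and this is not RelationalAI’s actual system.

```python
# A toy knowledge graph: facts as (subject, relation, object) triples.
triples = {
    ("Nik", "lives_in", "Atlanta"),
    ("Nik", "nationality", "Greek"),
    ("Atlanta", "located_in", "Georgia"),
    ("RelationalAI", "employs", "Nik"),
}

def query(subject=None, relation=None, obj=None):
    """Return all triples matching the given pattern (None = wildcard)."""
    return [
        (s, r, o) for (s, r, o) in triples
        if (subject is None or s == subject)
        and (relation is None or r == relation)
        and (obj is None or o == obj)
    ]

print(query(subject="Nik"))          # everything we know about Nik
print(query(relation="located_in"))  # all containment facts
```

Because each fact is an explicit, inspectable triple, knowledge can be added or removed cleanly, which is what makes the structure readable by both humans and machines.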

Nikolaos describes it as “the language both humans and machines understand” and says its bi-directional relationship with language models provides many benefits. 

Once you have a knowledge graph, you can see considerable ROI and excellent business results but, historically, there was always one caveat – they were challenging to build. An engineer would have to go through databases, finding the correct entities, relations and flows. 

But with the dawn of AI language models, things became much easier. With human supervision, the language model can speed up this menial process. 

All-in-all, Nikolaos says knowledge graphs always provide: 

  • The correct knowledge 
  • The ability to add/remove knowledge based on relevance 

 

In other words, it’s ideal for keeping things in place. 

The future of reading and learning 

AI is changing the way many people read and learn. According to Nikolaos, many people avoid reading books as the information they require only spans a few pages. 

But what could this mean for the future of publishing?  

He says publishers could take advantage of this shift and make small sections of books publicly available, so that users can consume what’s relevant to them. 

This shift can be compared to streaming, where users select specific songs, rather than buying the whole album. 

Social media and its reliance on AI 

From Facebook and Twitter (X) to Instagram and TikTok, the content is always changing. 

Now, Nikolaos believes generative AI will form the basis for the social network and video platforms of the future. 

Platforms such as TikTok already deliver content to us, based on what we watch, but Nikolaos says AI could actually create the content too. 

For more insights and predictions on generative AI, find episode 17 of Built Right on your favorite podcast platform or the HatchWorks website. 

Join the AI revolution in software development with HatchWorks. Our Generative-Driven Development™ leverages cutting-edge AI to optimize your software projects.

Matt (00:04.818)

Welcome, Built Right listeners. We have a special one for you today. Our guest is a PhD data scientist, and he’s gonna help us make sense of generative AI and how it all works. Our guest today is Nikolaos Vasiloglou. Nik, I probably butchered the name; “Nik the Greek,” I think, is what you also refer to yourself as. He is the VP of Research ML at RelationalAI.

 

Like I said, he has a master’s and PhD in electrical and computer engineering from Georgia Tech, has founded several companies, has worked at companies like LogicBlox, Google, and Symantec, and even helped design some of Georgia Tech’s executive education programs on leveraging the power of data. But welcome to the show, Nik.

 

Nik (00:50.847)

Nice to meet you, Matt. Thanks for hosting me.

 

Matt (00:54.358)

Yeah, excited to have you on. And RelationalAI, for those that don’t know, is the world’s fastest, most scalable, expressive relational knowledge graph management system combining learning and reasoning. And for those of you thinking, what the heck is a knowledge graph, we will get into that. Plus we’ll get into how generative AI actually works, as told by a real PhD data scientist who’s been doing this stuff way before ChatGPT was even a thought in somebody’s mind. Plus, stick around:

 

we’ve got Nik’s take on what are gonna be the most interesting use cases for generative AI in the future. And he’s saving these; I haven’t heard them either, so I’ll hear them for the first time, so really excited to get into those. But Nik, let’s start here. We hear generative AI, it’s the hot topic right now, but what is the difference between generative AI and just regular AI? What makes generative AI special and different?

 

Nik (01:51.867)

It’s a very interesting question. You know, for many years, the emphasis, the way that we were separating, you know, what do you call it, machine learning or AI was on the depth of the models. Like when I started my PhD, we were working on something that we would call like shallow models. Basically, you can think about it, looking at some statistics, you know, the decision tree was the

 

the state of the art, which meant: okay, I have this feature; if it’s greater than this value, then I take the other feature, and the other feature, and come up with a decision. That’s something that everyone can understand. Then deep learning was the next revolution; it started somewhere in the 2010s and began doing, let’s say, more complicated stuff that, I mean, people are still trying to figure out why it works. They cannot

 

understand exactly the math around it. And then the next revolution was, so we had these models that were pretty powerful, but we didn’t know how to scale them. We didn’t know how far they could go. And that was the revolution that OpenAI basically brought: they realized that you can take this new cool thing called the transformer, feed it with a lot of data, and do this cool task

 

where you are trying to predict the next word and basically come up with what we have right now. It took several years and several iterations. But I think the difference between what we used to call AI and what we call AI right now is the focus on the language. I mean, if you had read about Chomsky and others, a lot of people considered that

 

human intelligence has to do with our ability to form languages and communicate. I mean, you might remember as a student that what makes humans different from other animals, the human brain, is the ability to form languages. And I think the focus on that made the big difference in what we have right now.

 

Nik (04:15.059)

The previous was more like a decision system. Now we’re focusing more on the reasoning side. So I would say this is the shift that we see.

 

Matt (04:18.442)

Mm.

 

Matt (04:22.218)

And that’s, I think, part of the interesting aspect of it: in the past, the models were trained for very specific tasks in a lot of ways. And now you have this concept of these foundation models, which is what a lot of the large language models are built on. But now, to your point, it’s almost getting to how the human brain works, and it can tackle these disparate types of ideas and

 

solutions and things like that. This concept of like a foundation model, what is that, how does that start to play into like these concepts of like large language models, LLMs that we hear so much about?

 

Nik (05:01.247)

So let me clear that up first. The foundation models and the language models are basically the same thing. The term “foundation models” was introduced by some Stanford professors. Like it happens a lot in science, you build something for something specific and then you realize that it applies to a much broader

 

class of problems, and I think that was the rationale behind renaming language models as foundation models: they can do the same thing with other types of data, not just text. You can use them for proteins; you can use them basically for whatever represents a sequence. Okay, so…

 

As I said, in the past a lot of effort was put into collecting labels and doing what we call supervised learning. The paradigm shift here was what we call self-supervised learning. That was a big, big plus, something that brought us here. This idea that, you know, you just take a corpus of text

 

and try to predict the next word. And if you’re trying to predict the next word, you’re basically going to find out the underlying knowledge and ingest it in a way that you can make useful. Of course, that’s what brought us up to 2020. That was GPT-3, where we scaled. But there was another leap, ChatGPT,

 

which in the end did require some labeling, because you had the human in the loop. Okay, it’s not exactly labeling, but you can think about it as labeling because we have a human giving feedback. And then, you know, that brought us to ChatGPT. Now, the heart of language models, or foundation models, is something called the transformer.
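To make the self-supervised objective Nik just described concrete, here is a tiny sketch of how a raw corpus yields free (context, next word) training pairs with no human labels. It is word-level for readability, whereas real models operate on subword tokens.

```python
# Sketch: every position in a corpus is a free training example.
corpus = "Nik lives in Atlanta and Nik is Greek".split()

pairs = [(corpus[:i], corpus[i]) for i in range(1, len(corpus))]
for context, target in pairs[:3]:
    print(" ".join(context), "->", target)
# Nik -> lives
# Nik lives -> in
# Nik lives in -> Atlanta
```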

 

Nik (07:24.367)

It was invented in 2017 by Google, actually. It was an interesting race. OpenAI had, there was like a small feud between OpenAI and Google. So OpenAI came with a model. All of them were language models. Everybody was trying to solve the same problem. They came up with something called Elmo. And Google came back with Bert.

 

Matt (07:54.057)

Mm-hmm.

 

Nik (07:54.131)

from the cartoon, from the Muppet Show, I think. And so BERT was based on the transformer. Then OpenAI realized that actually the transformer was better. That’s an interesting lesson: they didn’t stick with “oh, this is our technology, we’ll invest in that.” They saw that the transformer was a better architecture, but then they took it and they actually cut it in half.

 

OK, and they picked, by accident we’ll put it that way: Google invented the transformer, which had an encoder and a decoder. BERT was based on the encoder architecture; they took that half. But then OpenAI came and said, no, we’re going to work on the other half, which is the decoder, the predictive text. And they spent three years. They did see that the more data you pour in, the better it becomes. OK.

 

That was their bet. And they ended up with the GPT sequence: GPT-1 to GPT-3, then 3.5 and 4, and later on ChatGPT. It was kind of like an interesting race where things basically started from Google, but OpenAI ended up being the leader over there. The transformer is nothing else…

 

Matt (09:17.554)

And everything they built was open source, right? Everything Google built. So they were able to… Yeah.

 

Nik (09:22.847)

You know, everything is actually open. So I think up to GPT-3, there was a published paper. It’s very hard if you believe that you’re going to get a secret sauce that nobody else knows; I’ve never seen that play out in machine learning. Okay, because scientists want to publish, they want to share knowledge. I think as the models started to become bigger and bigger, they didn’t, you know, with GPT-3, I don’t think they ever opened the whole model, the actual model.

 

Matt (09:39.382)

Yeah.

 

Nik (09:52.467)

But they gave enough information about how to train it. There are always some tricks that, over time, even if somebody doesn’t tell you, as you’re experimenting are going to become public. So yeah, that was never the issue, I don’t think. Yeah, they are a little bit cryptic about details after 3.5. But in my opinion…

 

Matt (10:17.91)

Yeah.

 

Nik (10:22.675)

The secret sauce over there is not exactly on the model, but on how you scale the serving of the model, we’re gonna talk about that later. This is the secret weapon of OpenAI, not necessarily the architecture, but the engineering behind that.

 

Matt (10:40.162)

Nice. Yeah. Let’s keep going on the transformer side, ’cause I want to get under the hood of how these GPTs work. Basically you mentioned that it’s serving up the next word, the next word; it’s not looking at a whole entire sentence, right? There’s this concept of tokens. But how is it actually thinking through that and the structure of language? Something you’d think a computer wouldn’t be able to do, it’s now doing very well.

 

Nik (11:07.163)

Yes. First of all, as I said, the transformer has this encoder-decoder architecture, which means that there’s one part that looks in two directions, back and forth. As it’s processing, it looks both ways: this token, you know, is affected by the other tokens, but the token in the middle is also affected by the ones before and after it.

 

Matt (11:12.972)

Mm-hmm.

 

Matt (11:17.879)

Yeah.

 

Nik (11:31.991)

So that’s the encoder. In the decoder architecture, you’re only looking back, because you’re not looking at the future. Okay. We can talk more into it; there are a lot of papers and a lot of tutorials that actually explain it. It’s not always easy to explain without graphics, but the key thing over here is that, you know…
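For readers who want the decoder’s “only looking back” idea made concrete, here is a minimal NumPy sketch of a causal attention mask; the scores are random stand-ins for real attention scores.

```python
# Sketch: causal (decoder-style) masking zeroes out attention to the future.
import numpy as np

seq_len = 5
scores = np.random.randn(seq_len, seq_len)          # raw attention scores

causal_mask = np.tril(np.ones((seq_len, seq_len)))  # 1s on/below the diagonal
masked = np.where(causal_mask == 1, scores, -np.inf)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

attn = softmax(masked)
print(np.round(attn, 2))  # upper triangle is 0: no token attends to the future
```

Encoder-style attention simply skips the mask, which is why it can look both ways.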

 

Let me go a little bit back. The first revolution, which actually came from Google, was Word2Vec, where they realized that if you give me a word and I try to predict that word by looking five words behind me and five words after me, a simple thing, like a small window, I can create a vector representation. They realized that I can take words,

 

Matt (12:02.412)

Yeah.

 

Nik (12:26.647)

make them into something like continuous vectors, put them in space, draw them in space, and that representation would bring words that are semantically similar together. Okay? And there was this other thing: if Paris is here and France is here and London is here, then I can take the same vector, put it here, and I can find, you know, England. So they realized, for example, that…

 

Matt (12:39.071)

Hmm

 

Nik (12:53.519)

If I place all the capitals and all the countries, I can just take the vector that connects the first pair, and it translates to the next one. Or if I take the vector between man and woman, take that, then take the word king and add it to king, it’s going to take me to queen.
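Here is that arithmetic as a runnable sketch with toy 2-D vectors. Real word2vec embeddings have hundreds of dimensions, and the numbers below are invented purely to make the geometry visible.

```python
# Sketch: the king - man + woman = queen analogy with toy vectors.
import numpy as np

vec = {
    "man":   np.array([1.0, 1.0]),
    "woman": np.array([1.0, 3.0]),
    "king":  np.array([5.0, 1.0]),
    "queen": np.array([5.0, 3.0]),
}

target = vec["king"] - vec["man"] + vec["woman"]  # move "king" the man->woman way

def nearest(v, exclude=()):
    return min((w for w in vec if w not in exclude),
               key=lambda w: np.linalg.norm(vec[w] - v))

print(nearest(target, exclude={"king", "man", "woman"}))  # -> queen
```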

 

Matt (13:19.459)

Hmm

 

Nik (13:20.487)

So basically people started realizing, with simple Word2Vec, that you can take words and represent them as vectors. Let’s think about two-dimensional vectors, like on the plane, but it’s not two, it’s like 512; the dimension doesn’t really matter. The concept is the same: the distances in space, the way that they’re placed in space, has, sorry, semantic meaning.

 

Matt (13:47.736)

Bless you.

 

Nik (13:49.211)

Now, the next problem was: this is great, but we do know that words actually change meaning based on their context. Okay. So, for example, an example would be, when you say flower… well, let’s pick a… now I’m a little bit stuck, but…

 

Matt (13:59.246)

Hmm. What’s an example there?

 

Matt (14:19.318)

You had one about boiling the, a person’s boiling a what? And then if it was like an engineer, it had different context of, that one was kind of interesting.

 

Nik (14:19.435)

because I have like…

 

Nik (14:28.767)

Yeah, you can boil an egg or an engineer is boiling, I don’t know, a substance. But it could be like, yeah. So when you say, for example, the bed, it can be something different when you talk about a house, a bedroom. But if you talk about geology, it means something completely different. So what they realized was that,

 

Matt (14:35.062)

or boiling the ocean, they’re trying to boil the ocean, right? Ha ha ha.

 

Matt (14:49.624)

Hmm

 

Matt (14:53.23)

Flowerbed, yeah.

 

Nik (14:58.635)

That vector that represents the word shouldn’t be universal. It should really depend on the surrounding words. So this vector representation, when the surrounding words are this one, it has to be this, and it will also have different relationships. And it should be different when it’s around different words.

 

And that was basically ELMo, that was the idea; it’s called contextual embedding. These vector representations, they call them embeddings. This was actually one of the biggest revolutions of deep learning: we take discrete entities and place them in space as continuous vectors. So we take something that was discrete and put it on a medium

 

Matt (15:51.502)

Wow.

 

Nik (15:53.631)

that is continuous, okay, continuous and multidimensional. So before the transformer there was ELMo, an early version of this idea. The first idea was: okay, if I see a text, I will be placing these words on different places in space based on what is around them, okay?
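A small sketch of contextual embeddings in practice: the same surface word gets a different vector depending on its neighbors. This assumes the Hugging Face transformers package and the standard bert-base-uncased checkpoint.

```python
# Sketch: "bed" in a bedroom vs. "bed" of a river gets different vectors.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embedding_of(sentence, word):
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (seq_len, 768)
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    return hidden[tokens.index(word)]

a = embedding_of("he made the bed in the bedroom", "bed")
b = embedding_of("the river bed was dry this summer", "bed")
print(torch.cosine_similarity(a, b, dim=0))  # well below 1.0: context moved it
```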

 

And the next thing, so basically what happens is: you take the words, and on the first level you look left and right and create embeddings, you put them in space. Then you take that and apply it again and again. So the transformer actually works in levels, one level after another; it has multiple levels, I don’t know exactly the numbers, so you can think about it as basically a rewriting.

 

Matt (16:30.935)

Mm-hmm.

 

Nik (16:49.191)

Okay, so that’s why it’s called a transformer. You have a sequence of words and you start rewriting it to something else, and something else again. People have actually done this experiment: they take the transformer and decompose it and see what it does at the different levels. And they’ve realized that it starts inventing grammatical rules. It starts, like, identifying

 

Matt (16:56.983)

Wow.

 

Nik (17:18.127)

what is the subject, what is the object, what is the verb. Okay. It starts identifying that something is an adjective or not an adjective. It starts taking words and converting them to something which is a synonym, maybe, you know, something else. And that’s how the reasoning starts. Like, if I give you a sequence of words, you know,

 

Matt (17:24.503)

Mm-hmm.

 

Nik (17:49.32)

“Nik lives in Atlanta.” You know, it knows that Nik is Greek. Okay, so it can say “the Greek lives in Atlanta,” and then you can say “he goes to the store to buy…” and because now you know that he’s Greek and he lives in Atlanta, you say “feta cheese,” for example. Okay, because now

 

Matt (18:13.912)

I… Yeah.

 

Nik (18:18.127)

the transformer starts taking different paths. It starts exploring, you know, what the synonyms are. And if he lives there, it means he goes to the store, he goes to the supermarket. All this information is ingested in the transformer after seeing endless pages of text, and that’s basically where the reasoning paths come from. It does this on its own.

 

Of course, because there are so many reasoning paths that can happen, sometimes it can hallucinate. Okay, so it can say Nik buys, I don’t know, souvlaki because he is Greek, which is possible. But there might be, somewhere else, some other information that says Nik hates souvlaki, and the language model misses it; but it’s a probable event, since Nik is Greek. Anyway, I’m just giving a simple example over there. But that’s kind of like the power of the transformer: at every stage

 

Matt (19:01.673)

Yeah.

 

Nik (19:14.183)

it starts rewriting things again and again and again, and it explores possible, very possible, very likely paths, highly likely paths.

 

Matt (19:23.73)

And correct me if I’m wrong, what you’re talking about here is this kind of the difference in evolution from structured data to unstructured data. Cause in the past we had very like defined tables, columns, associations to things. Is this kind of getting to that concept of unstructured data where it’s like the vectors and

 

Nik (19:39.652)

It does.

 

Well, the problem with the structured systems before was that everything was very discrete. Unless you had seen the word Nik before, followed by that exact word… think about all the variances: Nik spelled with a K, Nikolaos, Nik Vasiloglou, I don’t know, all these things. Now they’re in a continuous space, OK, and that’s what makes the difference.

 

It’s possible for the system to create an internal rule if you want, or internal path, about things that are kind of similar. So it doesn’t have to be Nik, it could be Vasiloglu instead of Nik. Or it could be the guy who lives at, I don’t know, say my address, you know. It’s the same thing. Because all these things, I think it’s public, you can find it. Because…

 

Matt (20:22.318)

Hmm.

 

Matt (20:33.258)

Don’t say that.

 

Nik (20:38.187)

All these things are semantically equivalent, and before, you had to express them in, I don’t know, 100 different discrete ways, and you had to see them exactly in that order to find a common path. Now it says: OK, this class of entities, which can be represented with vectors that are very close together, can be followed by this class of entities, all compressed into a constellation of vectors, and that can lead me to something else.

 

Matt (21:07.17)

Mm-hmm.

 

Nik (21:07.743)

That’s why, when you go to OpenAI and you hit regenerate, the language model can generate the same thing, the same reasoning path, using slightly different words, words that are semantically equivalent. Okay. And now the thing is that it can do that with this incredible memory of, I don’t know, up to 32,000 tokens. So even if you’re saying that

 

Nik is going to buy something from the store, and it predicts that it’s feta, it’s because it has seen, 10,000 tokens before, that Nik is Greek, okay, he’s hungry, I don’t know, he’s having a dinner party. Because when it was trained, it has seen sequences that span 10, 20,000 tokens.

 

Matt (21:49.299)

Mm-hmm.

 

Nik (22:05.415)

You know, Nik, associated with party, food, restaurant, leads you to feta. Okay. So…

 

Matt (22:13.29)

Yeah, and when you say a token, that’s basically either a word or a couple characters, some like small variation that it’s breaking it down into. Is that correct?

 

Nik (22:21.883)

A token is basically a trick. All these models have a vocabulary of about 30,000 tokens, and they realized that with 30,000 tokens you can break down all possible text. I mean, you could use the character level, okay? Every word can be decomposed into characters, but that would have made, you know,

 

Matt (22:31.521)

Yeah.

 

Matt (22:41.313)

Mm-hmm.

 

Nik (22:50.111)

the language model extremely big and inefficient. So it’s like a trick, a compression that we’re doing. We could have gone with syllables, because syllables are also finite and make up all the words. But we said: look, some combinations of letters are so frequent that we don’t really need to decompose them all the time; we know exactly what they mean. So it was a clever engineering trick.

 

Matt (22:53.311)

Yeah.

 

Matt (22:59.734)

Mmm.

 

Nik (23:16.411)

It has to do with the language; it’s related to the language. It was like a better statistical analysis of the language, to put it that way. If we were inventing a language from scratch, we would start with tokens, and maybe not necessarily letters.
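A quick way to see the subword trick in action, assuming the Hugging Face transformers package (GPT-2’s BPE vocabulary is about 50K entries, the same idea as the 30K figure mentioned):

```python
# Sketch: frequent strings stay whole; rare names get split into pieces,
# so a finite vocabulary can still spell out any text.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

for word in ["the", "boiling", "Vasiloglou"]:
    print(word, "->", tokenizer.tokenize(word))
```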

 

Matt (23:39.078)

That’s interesting. And so we’ve talked a lot so far about language as the thing at play here, but you can use this generative AI and all this new technology with different modalities, like images, whether you’re generating images or understanding what an image looks like, and voices, all kinds of different things. How does that work differently when language isn’t necessarily the output? Is it

 

looking at the pixels in a way, and then the associations there?

 

Nik (24:11.087)

There is a visual language over there. There’s the vision transformer, which tries to predict blocks of the image. There are also the diffusion models, which are something completely different; we see them only in images, we don’t see them in text that much, although there have been some efforts. But the transformer, it turns out, behaves equally well for images.

 

Matt (24:25.193)

Mmm.

 

Nik (24:40.015)

You know, when you talk about a token in vision, that’s kind of like a block of pixels, I don’t know, 16 by 16, 32 by 32. This is something we knew from before; even in the days of image compression, they could take parts of the image and compress it block by block.
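Here is a minimal NumPy sketch of those vision “tokens”: chop an image into fixed-size pixel blocks and flatten each block into a vector, mirroring the 16-by-16 patches just mentioned.

```python
# Sketch: turn an image into a sequence of flattened 16x16 patch "tokens".
import numpy as np

image = np.random.rand(224, 224, 3)  # a stand-in for a real RGB image
patch = 16

h, w, c = image.shape
patches = (image
           .reshape(h // patch, patch, w // patch, patch, c)
           .transpose(0, 2, 1, 3, 4)
           .reshape(-1, patch * patch * c))

print(patches.shape)  # (196, 768): a 14x14 grid of patches, each a 768-dim vector
```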

 

Matt (24:59.913)

Mm-hmm.

 

Nik (25:07.711)

But I want to make something clear for your audience: language is a way of expressing knowledge, but it’s not knowledge. Okay. The fact that I can come and tell you something… I can go and read quantum mechanics, take a passage, and recite it for you; it doesn’t mean that I know what I’m saying. Okay. And

 

that’s where the hallucinations come into play. We don’t really have direct access to knowledge. Okay. It’s a language model; it’s not a knowledge model. And there have been some efforts right now to do the same thing: if there were a universal knowledge graph, okay, that I could take, and say that from this token of knowledge I can go to that token of knowledge through that relation,

 

Matt (25:39.507)

Mm-hmm.

 

Nik (26:05.499)

and do reasoning, maybe we could train a knowledge model, let’s call it, or a foundational model that we know that whatever it says, it’s accurate and correct. But language is a possible path over knowledge. It doesn’t mean that it’s correct. Okay? So…

 

So language models are always going to hallucinate and make mistakes, not because there are errors in what they were trained on. The data sets are pretty well curated; obviously they will contain misinformation and errors, but the reason for hallucination is not really the errors in the raw text. It’s the fact

 

that this is a possible expression, you know. The same way that a fiction writer sees things in life and writes a different version. Take one of my favorites, The Da Vinci Code, okay? That’s what I like about Dan Brown. Or take Game of Thrones, for example; if you think about Game of Thrones, it has elements of truth from

 

Matt (27:01.62)

Yeah.

 

Matt (27:11.58)

Mm-hmm.

 

Nik (27:25.895)

human history. Let’s talk about it, because that’s probably what most people know: you can see the Persian Empire, or English history, or Greek, you can see elements of that in a completely fictional way. So in my opinion, Game of Thrones was the first generative model; you know, George Martin, great. Okay. It could generate something which looks…

 

Matt (27:28.459)

Mm-hmm.

 

Matt (27:38.317)

Mm-hmm.

 

Matt (27:48.838)

Ah, there you go.

 

Nik (27:55.771)

you know, modulo the dragons. It could look real, okay, realistic, but it’s wrong. The same thing with Dan Brown, The Da Vinci Code. It could have been a real story about what happened after, you know, Jesus was crucified in the story. It could have been, but we don’t have evidence that it is. Some people follow conspiracy theories and think Dan Brown’s is the real story. But that’s what I’m saying: yes, it’s a possible truth.

 

Matt (28:07.339)

Mm-hmm.

 

Matt (28:26.646)

Do you think we ever get to that ability where it is true knowledge? You get into this concept of like, you know, your AGI and all that type of stuff. Do you ever think we get to that level of advancement? Or, you know, I always go back to like how the human brain works and like, are we, do we have true knowledge to an extent or are we just doing this same kind of computational thing in our head with probability of what’s, you know.

 

Nik (28:47.975)

Oh.

 

Nik (28:53.419)

Yeah, one of the things that we know is that the transformer architecture and the language model are not how the brain works. This is engineering; it’s not how the brain works. No, no, no. There are some commonalities and some kinds of analogies. But I think it’s wrong, when you’re working with language models and you’re trying to tune them or explain or debug them,

 

Matt (28:59.243)

It’s not, okay, yeah.

 

Nik (29:22.623)

to have in your mind how the brain works. Don’t do that. If you are a prompt engineer, if you’re trying to build a model, try to understand how the system is built and use that knowledge. Don’t use cognitive neurology here. No; unfortunately, the human brain is still much more powerful, given the fact that you can eat a slice of pizza

 

Matt (29:26.359)

Hmm.

 

Nik (29:49.591)

and do very complicated mathematical computations, while if you were trying to do the same thing with GPT-4, you would need the power of a village or something, even for inference. Okay, so we are very energy-efficient. We use signals that take milliseconds to transmit, not nanoseconds, whatever it takes for a GPU, and we still do things faster.

 

Matt (29:53.099)

The energy consumption, yeah.

 

Matt (29:59.969)

Yeah.

 

Matt (30:12.558)

Mm-hmm.

 

Nik (30:19.359)

It’s a completely different world. Even if we could make an electronic brain, like a simulated one, I think it would be very different. Biology comes into play; it’s still a mystery. But whether we’re gonna reach AGI, you probably hear about that; I leave that to people who have enough money and time to think about it. Okay.

 

Matt (30:29.73)

Hmm.

 

Matt (30:44.4)

There you go.

 

Nik (30:46.943)

So, I mean, yeah, in theory it is possible. I hear, like, Hinton and Bengio, and I think LeCun is on the other side. And Elon Musk. They say it’s possible if you leave the language model free to start writing code and unplugging other systems. I don’t know; I think we shouldn’t worry that much about it. I worry more about the effect it’s having right now on the job market.

 

Matt (31:13.055)

Mm-hmm.

 

Nik (31:17.868)

That’s more imminent and more real for the economy than whether the robots will revolt against us.

 

Matt (31:23.17)

Hmm.

 

Matt (31:28.454)

And what do you mean by that in terms of it taking away jobs and tasks? Or do you think this unlocks new opportunities? Yeah.

 

Nik (31:33.627)

I think it does, yes. You know, as with everything, it happens all the time with high tech: as technology progresses, the next generation requires fewer engineers. Look at this example: I don’t know how many million employees Ford had when the car came.

 

Matt (31:45.46)

Mm-hmm.

 

Nik (32:03.867)

And then you compare that with Microsoft, which came later; compare that with Google, compare that with Twitter, compare that with OpenAI now. Look at the big chunk of the market they’re getting, their capitalization, versus the small number of engineers and scientists they need. OK.

 

And yeah, it’s pretty clear to me that a lot of jobs can now be done with fewer people. And even for us, the data scientists, the work is, for the moment, becoming a little bit boring, if you want, in the sense that you have to do what people call prompt engineering. I don’t know, I find ways to make it more interesting. But yeah, it’s becoming an issue.

 

Matt (32:36.84)

Mm-hmm.

 

Nik (33:00.455)

I feel like we saw this tech layoff wave for the past two, three years. I think a lot of these jobs will not come back again. Okay. They will need fewer people for that. And of course, for things like customer service or administrative work, I mean, it’s already pretty obvious you can do things with GPT much faster than before. It’s a great assistant.

 

Matt (33:04.971)

Mm-hmm.

 

Matt (33:10.423)

Hmm

 

Matt (33:30.814)

So, two more topics I want to hit before we wrap. The first: with these models, there’s this element of them being a black box, and we touched on it earlier with RelationalAI having this concept of a knowledge graph. What is that? How does that work? That kind of gets into the value prop of RelationalAI to an extent, but we’d love to hear the benefits of that concept.

 

Nik (33:44.843)

Mm-hmm.

 

Nik (33:51.562)

Yeah.

 

Nik (33:55.839)

So, knowledge graphs and language models have a bi-directional relationship. First of all, there’s a very simple definition of a knowledge graph which I really like: it’s the language that both humans and machines understand. It’s a way of expressing knowledge so that anyone can read it and the machine can consume it. If I write C++ code, it’s very easy for the machine to understand,

 

Matt (34:12.374)

Hmm. I love that.

 

Nik (34:24.927)

but it’s not easy to show it to your executive or your business analyst. So a knowledge graph has the right level of information, it’s complete, and both sides can understand it. Now, the problem with knowledge graphs has always been: it’s great, but where can I find one? Once you have it, it’s great. It empowers a business; you see, the ROI is huge.

 

Okay. It’s like you’re in your house: you go to your library, to your room, and you tidy it up. Once you tidy up and label everything and you know where everything is, then your life is very efficient. But who has the time to do that? And that was always the barrier for us. Now, with language models, you can automate that very easily. Because in the past, how did you build the knowledge graph? You had somebody going through documents or databases,

 

Matt (34:53.643)

Mm-hmm.

 

Matt (35:07.935)

Mm-hmm.

 

Nik (35:23.047)

trying to find the right entities and relations, how things flow, and all these things. Now the language model can do that for you, with a human in the loop, with supervision. So it accelerates that process very quickly. Now, the other thing is, once you have a language model, as I said, you need to inject knowledge into it and teach it stuff. So, the way I’ve seen it, let’s take some simple examples.

 

Matt (35:26.206)

Hmm

 

Matt (35:48.159)

Mm-hmm.

 

Nik (35:53.627)

Something which is kind of like the Holy Grail: you want to answer a question. You say, well, tell me all the sales from last month where the people bought more than X, Y, Z, and that translates to a SQL query. For that translation from natural language to SQL, if you have a knowledge graph, we have evidence that this can become faster.
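To make that translation concrete, here is a sketch with an invented schema and question; the SQL shown is the kind of schema-grounded query a text-to-SQL model would be expected to produce, with SQLite-flavored date functions chosen purely for illustration.

```python
# Sketch: a knowledge graph supplies exactly this kind of grounding --
# which tables, columns, and joins actually exist. Schema and query invented.
schema = {
    "sales":     ["id", "customer_id", "amount", "sale_date"],
    "customers": ["id", "name"],
}

question = "Tell me all the sales from last month where people spent more than 100"

# What a schema-aware model would be expected to emit (SQLite date syntax):
generated_sql = """
SELECT c.name, s.amount, s.sale_date
FROM sales AS s
JOIN customers AS c ON c.id = s.customer_id
WHERE s.sale_date >= date('now', 'start of month', '-1 month')
  AND s.sale_date <  date('now', 'start of month')
  AND s.amount > 100;
"""
print(generated_sql)
```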

 

In some other cases, because the knowledge graph can afford really long and complicated reasoning paths, you can go and mechanically generate, let’s call them proofs or reasoning paths, and take them back to the language model and train it: when somebody asks you this, this is the reasoning. This is what people call the chain of thought, and it can be pretty lengthy. Okay. So

 

then of course there’s the hallucination thing, where the knowledge graph always has the correct knowledge, and it’s very easy to add and remove knowledge as it becomes valid or invalid. So that’s another part that helps you keep things in place. So yes: the language model helps you build a knowledge graph, tidy up your room, tidy up your knowledge.

 

And then the other way, having all that knowledge, you can go and retrain, fine tune, control your language model so that you’re getting, you know, accurate results and better results. Okay. So that’s kind of like the synergy between the two.

 

Matt (37:22.507)

Hmm

 

Matt (37:32.814)

No, that’s a really interesting evolution there. So, as promised: you had some use cases in your mind of where you think GenAI is gonna be the most interesting, viable, disruptive, whatever it may be. Curious to see what some of those are.

 

Nik (37:43.164)

Oh yes.

 

Nik (37:52.023)

So let’s close with that. I mean, these are things that, let’s call them historians of technology have observed over the years. So we know that whenever new technology comes, people are trying to use it in the obvious way, which might not really give them the big multipliers. So I think when we met, I mentioned this example of the electric motor. So when it was invented,

 

Matt (37:57.399)

Mm-hmm.

 

Matt (38:14.165)

Mm-hmm.

 

Nik (38:21.803)

In those days, industry was using the steam engine. They had a big steam engine in the middle, and mechanical systems that would transmit the motion to other machines around it in order to produce, I don’t know, something. And now somebody comes and says, okay, take this electric motor, which, first of all, is not as powerful as a steam engine, by definition, because the steam engine

 

will produce the electricity, something will be lost, and then a motor will use it. And the steam engine had been around for at least 100 years, I don’t know, centuries, before the electric motor, so it was more optimized. And all of a sudden you needed to buy electricity to feed the new thing, while for the other one you had fossil fuel to use, and you knew where to find it. So people rejected the electric motor at the beginning. They couldn’t see why it was useful,

 

until someone said, well, wait a minute, we don’t need one electric motor for the whole factory. What if we create, because that’s so easy to manufacture, what if we made like 100 electric motors spread in vertical space, I don’t know, so take the whole production and spread it over a bigger space, okay? And all we need is an electric generator that can feed 100 motors.

 

So the big benefit wasn’t just having one stronger motor. The big benefit was having a hundred motors on different levels, making production multi-level and expanding it into a bigger space, because the problem with steam engines was that motion couldn’t be transmitted too far away, and everything was cramped and limited. If I remember, that took about 20 or 30 years to figure out. And it’s kind of like the same thing with…

 

Let’s think about Amazon. In the beginning, the e-stores were basically trying to take a brick-and-mortar store and run it the same way they were running it before, just on the web. And Amazon realized that there are other things, like recommendations, A/B tests, things you cannot do in a brick-and-mortar store. The tailoring, the personalization, is what brought the big boom.

 

Nik (40:45.451)

Again, it took several iterations of failure until Amazon and Alibaba and others kind of dominated the market. Think about Snowflake. When Snowflake came and said, we’re a cloud database, I said, what do you mean? I can take my database and put it on the cloud. But the thing is, nobody had thought about designing a database for the cloud from the start; you can’t download Snowflake and run it on your machine. It was designed

 

to be completely cloud-based, to use infinite compute and infinite storage. So it’s a very different thing. People were confusing that with cloud-hosted, which means I build something thinking I’m constrained by the memory and compute of a single machine and I’m just running it somewhere else on the cloud, versus: no, I’m building a system that is going to rely on

 

infinite machines and S3 or whatever blob storage, which is infinite and available from the start. So I’m scratching my head here to see what that is for GenAI. There’s the obvious application, which is to use it as a new UI; so, chatbots, we know about that. But I was trying to do this exercise: we’re looking for the businesses that can only exist

 

Matt (42:01.196)

Mm-hmm.

 

Nik (42:12.895)

with GenAI; they cannot exist without it. So the one that we’re going to see soon, there’s already a legal battle about it, which is going to blossom and give us the new thing. I think it’s gonna completely change the way we read, okay? You might have seen the fights between authors and OpenAI about infringement; I think it’s gonna end up in a beautiful relationship over there. So right now,

 

Matt (42:36.695)

Mm-hmm.

 

Nik (42:42.219)

there’s a problem. People don’t read, because they have to go and buy a 200-, 300-, 400-page book when they’re only interested in four or five pages, or even a summary of 20 pages that nobody’s providing for them. Okay, they don’t know where it is. So what I’m envisioning here is: think about Random House, or all the publishers, taking all their books and training a language model. And they’re saying,

 

I ask a question, and they basically come up with two pages and say: actually, you can find this in that book; here’s a summary, and these are the three pages. And I can take these three pages and add half a page that has all the background information you might need to read them. Because that’s another problem: sometimes you can browse a book and find the chapter, but then as you’re trying to read it, you realize you need to go and visit others. So basically what’s going to happen is…

 

Matt (43:39.089)

Mm-hmm.

 

Nik (43:42.203)

You know, you’re going to buy pages from books, or a summary that was produced based on, you know, 10 pages. So now you will pay, I don’t know, 10 pennies, or a subscription, or something like that. I think it’s exactly the same thing that happened with streaming. Remember the legal battles of YouTube and Viacom, where people started uploading videos on YouTube and they said: no, it’s mine, it’s ours, it’s yours. And eventually they came to an agreement that completely changed the way that we

 

Matt (43:47.406)

Uhhhh…

 

Nik (44:11.839)

listen to music. Spotify was another thing. Okay, but it took some friction. So we don’t buy CDs with 12 or 16 songs, however many they had; we listen to one song at a time. We don’t own the songs anymore; we just stream them, and all these things. So I think that’s one of the applications. Now, I have a reservation…

 

Matt (44:38.454)

That’s like, as I say, SparkNotes on steroids, almost. One question, though, I guess: if you’re reading for fun, do you get the same pleasure and benefit from that? Or is that a different use case, where you’re wanting to sit down and enjoy a book? I guess that may be a different type of thing versus getting the learning.

 

Nik (44:57.735)

I think it can help everyone. It can help the bibliophiles, you know, because I have about 2,000 physical books and another 2,000 electronic books, but I’m always frustrated. You know, audiobooks were another thing like that. It’s always frustrating when, you know, if the book doesn’t stick with you for the first, I don’t know,

 

Matt (45:02.295)

Mm-hmm.

 

Matt (45:07.104)

Wow.

 

Nik (45:23.179)

20, 30 pages, then you give it up. And it’s very likely that if you’re a little bit more patient, maybe after page 50 it will become more interesting. But how many people give up before that? So as a bibliophile, it’s going to help me discover more books. But I think the biggest thing is for people who want to learn something but don’t want to read the full book. I read somewhere that they said that

 

Matt (45:25.143)

Mm-hmm.

 

Matt (45:48.694)

Yeah.

 

Nik (45:52.639)

Are we out of time? Yeah, so there was this theory that 100 years ago, when you were writing a book, you had to make it very big, because people didn’t have anything else to do. They were buying a book to fill their time, because they wanted to spend, I don’t know, a month reading it. These days, they say a book shouldn’t be more than 200 pages: don’t fluff around, because there’s so much information and people don’t have the time.

 

Matt (45:54.302)

No, keep going, keep going. I was gonna add a point, yeah.

 

Matt (46:15.576)

Mm-hmm.

 

Nik (46:22.439)

They need the essentials and don’t want to spend too much time on irrelevant stuff. The same thing happened with TikTok. Again, it was a victory of machine learning and recommendations, narrowing what you’re going to consume down to a span of a few seconds. Of course, it’s a great commercial success. I personally don’t like it. I don’t let my kids

 

Matt (46:33.163)

Mm-hmm.

 

Nik (46:50.059)

spend time on it. I realized that it’s so addictive. You know, on YouTube you can spend hours just going one by one; it’s dopamine injections. But we’re definitely going to see social networks based completely on GenAI and videos. Okay, that’s kind of like another one, the same thing that we saw with TikTok. And yeah, if you are a founder, you have to start thinking about:

 

Matt (47:06.539)

Mm-hmm.

 

Nik (47:18.015)

How can I take a sea of content and serve it much better with a language model in a way that people wouldn’t have consumed that before?
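One concrete shape that “serving a sea of content” could take is embedding-based retrieval: embed every passage once, embed the question, and return the closest passages. Here is a sketch with random stand-in vectors, where a real system would use a sentence-embedding model.

```python
# Sketch: rank passages by cosine similarity to a query embedding.
import numpy as np

rng = np.random.default_rng(0)
passages = ["page 12 of book A", "page 98 of book B", "page 3 of book C"]
passage_vecs = rng.normal(size=(len(passages), 384))      # pretend embeddings
query_vec = passage_vecs[1] + 0.1 * rng.normal(size=384)  # a query "near" page 98

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

scores = [cosine(v, query_vec) for v in passage_vecs]
best = passages[int(np.argmax(scores))]
print(best)  # the passage worth surfacing (and maybe selling) first
```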

 

Matt (47:31.423)

Yeah.

 

Matt (47:36.21)

Yeah. The book example you mentioned, I have the same problem. I do audiobooks, and I’ll try to save the clips of the things that make sense. At that point in time you have this light bulb moment, and then you forget about it, but there’s a point in the future where, man, that would be super applicable if I could pull it out of my knowledge base. So it’s almost like, to your point, getting those points that are applicable at that moment, but also resurfacing them, because they’re somewhere in my…

 

memory that I can’t necessarily always retrieve.

 

Nik (48:07.251)

Let me give you a recent example. And that’s why I think OpenAI has a big advantage right now over Google. With all the unfortunate events happening in the Israel-Palestine conflict right now, I remembered that I had watched a documentary 20 years ago at Georgia Tech about the whole history of the area. But I couldn’t remember the title of it. I knew that it was a French production.

 

Matt (48:30.722)

Hmm.

 

Nik (48:36.303)

I remembered that it was released somewhere in the nineties, because it was right before the Oslo agreement. And I could remember it was a documentary. So I was trying to find it on Google, I was trying to find it on Amazon, and I couldn’t. But I went on OpenAI and I said: well, there was a documentary, I think it was released in the early nineties, I know that it was a French production. And you know,

 

Matt (48:54.795)

Wow.

 

Nik (49:03.279)

it had the history from 1900 until 1990; can you tell me which one it is? Because there aren’t really that many, so I thought someone should have been able to find it. And it actually found it. It gave me the title in English and in French, and I went to Amazon and I found it. It was remarkable.

 

Matt (49:18.241)

Yeah.

 

Matt (49:24.054)

Wow. That’s cool. Yeah, and just to wrap on the points you made about the TikTok and everything like that, and just that type of social media, like you wonder to a point, does it get so advanced to where you literally cannot put your phone down? It gets you so zoned in with like the dopamine hits. Like is it engineered to a point where the recommendation of what’s coming next, like it’s kind of scary to think about, you know, in the future.

 

to where it literally becomes like a drug, in essence.

 

Nik (49:55.603)

Oh yeah, that is going to happen; I agree with you. If TikTok is a problem right now, when it’s basically trying to find existing content that you’re going to like, think about if it knows exactly what you like and you can give it feedback, so it knows more and more, like you say, what you want, and it really personalizes the content for you. Then it’s going to…

 

Matt (50:16.876)

Yeah.

 

Matt (50:23.678)

I’d take it a step further too, like what if it’s not just random users generating the content? What if it is a GPT or something like that that’s actually generating the content? Okay, yeah, so that’s, wow, I hadn’t even thought of that.

 

Nik (50:34.471)

Yeah, yeah, that’s what I mean. Yeah.

 

Yeah, generating. And, you know, think about when you were raising a kid: we have this inherent thing of taking the path of least resistance, and basically doing things that are not good for you. Okay. That’s why you have to say no to a kid. Imagine now… also think about it:

 

Matt (50:54.537)

Mm-hmm.

 

Nik (51:06.595)

society has created these moral boundaries that prevent you from doing things that are maybe in your mind, but you say: I shouldn’t really take that path, because that’s immoral. But what if you’re at your screen, and nobody’s looking, and there’s somebody else who says: oh, okay, tell me what you thought; I can actually create this for you.

 

Matt (51:16.863)

Mm-hmm.

 

Nik (51:33.119)

And a lot of people are going to get tempted. That’s a really bad spiral. I mean, these are fears that come before AGI taking over and living in the Matrix. I think these are bigger fears, and we do see them in some applications, in the deepfakes and things like that. I think it can become a…

 

Matt (51:35.434)

Wow.

 

Matt (51:42.593)

Yeah.

 

Matt (51:55.083)

Yeah.

 

Nik (51:58.963)

People have said that these kinds of addictions are like drugs, the same thing, like screen addiction, especially when it touches parts that are problematic. So I would worry about that; we need some strong resistance there. Yeah, I will give you an example. Let’s take one of the

 

Matt (52:03.784)

Mm-hmm.

 

Matt (52:18.25)

Yeah. Well, yeah, not.

 

Nik (52:27.447)

most horrifying things, which is child pornography. I know that by law, even possessing child pornography is a felony, okay. I don’t know if possessing a deepfake of child pornography is a felony. So there might be gaps in the legal system that we have to…

 

Matt (52:35.194)

Mm-hmm.

 

Matt (52:39.956)

Yeah.

 

Matt (52:47.966)

And that’s the crazy part about it: to your point, for our legal system it’s a whole type of paradigm we haven’t really had to encounter. How do you build laws for it? Yeah, it is crazy to think about how that’s going to change how we live, how we work, our morality as a species even, to a certain extent, right?

 

Nik (53:13.659)

Yeah, so I think the moral issues come before, you know, whether we’re going to lose our jobs or computers taking over control. Yeah, Terminator. Is it Terminator or Matrix? Which one is more scary, Terminator or Matrix?

 

Matt (53:23.498)

The Terminator, yeah. Yeah, yeah. Well, not.

 

Matt (53:30.806)

I don’t know. That’s maybe I’d say maybe the Matrix or at least that’s the more interesting one to me at least. What about you?

 

Nik (53:38.331)

Yeah, I think the Matrix, because in the Matrix there wasn’t really any mechanical part; everything was purely computers running. The Terminator was mixing reality with robots, okay, which I think is more difficult. It’s an interesting scientific question, because if the machines can take over and basically

 

Matt (53:44.666)

Hmm.

 

Yeah.

 

Matt (53:52.183)

Mm-hmm.

 

Nik (54:07.891)

control the universe, why do they need the mechanical part? Why do they need to go out into nature and do things? Maybe some would say because they need to synthesize energy, so they need the mechanical component. So it might be the case that, evolutionarily, they will not take that into consideration: they will try to eliminate their creator, but then they will actually

 

Matt (54:14.07)

Mm-hmm.

 

Nik (54:35.435)

face some type of extinction or shrinking, because they will be missing the mechanical component to get energy and all that stuff. Versus the other way, which is the hard way: I think in the Terminator you need to create the robot to fight the humans, and then you have the mechanical component that can help you. Because at some point, even if they could eliminate humans, and let's say they had solar panels...

 

Matt (54:43.211)

you

 

Nik (55:05.183)

they would need to manufacture new solar panels. They would have to go and extract minerals too; the chips will go bad after some years, so they'd have to create new chips, new stuff. Interesting science fiction stories here.

 

Matt (55:08.275)

Mm-hmm.

 

Matt (55:22.514)

Yeah, I think the scarier thing is not the machines taking over, but humans and bad actors using this stuff in negative ways. At least for me, that's scarier in my mind. But yeah, this has been one of my favorite conversations so far, so many interesting topics. I really appreciate you coming on the Built Right Podcast, Nik. But where can people find you? Where can they find RelationalAI and learn more about either you or the company?

 

Nik (55:32.458)

Yes.

 

Nik (55:51.832)

I think you can find us on the web. You know, we were a remote-first company even before COVID. I think we do have an office somewhere in Berkeley; I've been there a couple of times. But our people are all over, I want to say, the world. The sun never sets, or however the saying goes. We have people all over the world. Yes. You know, I'm here in Atlanta.

 

Matt (56:11.714)

Mm-hmm. Follow the sun, yeah.

 

Nik (56:19.443)

Go to our website, read our blogs, see our products. Our product, I think we have announced a partnership with Snowflake, so people can use it through there. It's limited availability through there, which is going to become general availability, I think, sometime this summer probably. It's coming up, so I don't have a date. So yes, you can find me on LinkedIn. I'm not really big on social media.

 

Matt (56:32.406)

That’s awesome.

 

Nik (56:48.875)

LinkedIn is probably the only one that I spend some time on, not much. Yeah, that's it. Thanks for hosting, Matt. Excellent.

 

Matt (56:52.654)

Yeah. Nice. Well, great, Nik. Thanks for joining us today. Have a good one.

The post How Generative AI Works, as Told by a PhD Data Scientist appeared first on HatchWorks.

]]>
Generative AI: Augmenting or Replacing Research? https://hatchworks.com/built-right/generative-ai-augmenting-or-replacing-research/ Tue, 31 Oct 2023 10:00:47 +0000 https://hatchworks.com/?p=30139 Generative AI is making an impact on every aspect of digital product building. But we wanted to delve deeper into how it’s affecting user research and interviews, so we invited Nisha Iyer, CTO of CoNote, onto the Built Right podcast to share industry insights and predictions.  Nisha shares the story of CoNote, an AI-empowered platform […]

The post Generative AI: Augmenting or Replacing Research? appeared first on HatchWorks.

]]>

Generative AI is making an impact on every aspect of digital product building. But we wanted to delve deeper into how it’s affecting user research and interviews, so we invited Nisha Iyer, CTO of CoNote, onto the Built Right podcast to share industry insights and predictions. 

Nisha shares the story of CoNote, an AI-empowered platform helping transcribe and organize user research. We hear her thoughts on GenAI skepticism and how CoNote builds on customer feedback to improve its efficiency. Plus, Nisha tells us her predictions for GenAI in user research and whether it could eventually manage user interviews entirely. 

Read on for the take-home moments or tune into the podcast episode below. 

How GenAI can help user research today 

In Nisha’s previous work in data science, the slow process of performing user interviews, transcribing them, analyzing them and acting on the relevant insights became tedious.  

After Google-searching for an AI solution and creating a few shortcuts herself, Nisha realized no one was providing quite what she needed. There was a market for an end-to-end generative AI tool that streamlined these processes. That's when CoNote was born. 

CoNote allows you to (a rough code sketch of this kind of pipeline follows the list): 

  • Upload hours of user research interviews 
  • Transcribe and synthesize them 
  • See the key themes and keywords
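
For a sense of the mechanics, here is a minimal sketch of how such a pipeline might be wired together in Python. It illustrates the general transcribe-embed-cluster technique only, not CoNote's actual implementation; the file names, model choices, and theme count are all assumptions.

# Illustrative sketch of a transcribe-and-synthesize pipeline; not CoNote's
# real code. File names, model choices, and the theme count are assumptions.
import numpy as np
import whisper
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

# 1. Transcribe each interview recording (hypothetical file names).
asr = whisper.load_model("base")
sentences = []
for path in ["interview_1.mp3", "interview_2.mp3"]:
    result = asr.transcribe(path)
    # Whisper returns timestamped segments; keep the text of each one.
    sentences += [seg["text"].strip() for seg in result["segments"]]

# 2. Turn each sentence into a vector that captures its meaning.
embedder = SentenceTransformer("all-MiniLM-L6-v2")
vectors = embedder.encode(sentences)

# 3. Group semantically similar sentences into a handful of themes.
n_themes = 4
km = KMeans(n_clusters=n_themes, n_init="auto").fit(vectors)

# 4. Surface the sentence closest to each cluster center as a key quote.
for t in range(n_themes):
    members = np.where(km.labels_ == t)[0]
    dists = np.linalg.norm(vectors[members] - km.cluster_centers_[t], axis=1)
    print(f"Theme {t + 1}: {sentences[members[np.argmin(dists)]]}")

A production tool layers speaker diarization, keyword extraction and a front end on top, but this transcribe-embed-cluster loop is the core of the "hours of interviews to key themes" step.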

Building a moat in the age of AI hype 

Right now, it seems like every day brings a new generative AI tool. With democratization in full flow and more people able to access large language models, how does CoNote build a moat and stand out from its competitors? 

Nisha says user experience has always been their watchword, while it often falls by the wayside for competitors. Development teams may integrate APIs and build their tech but, if you're building a SaaS product without an intuitive front end, interest can quickly dry up. 

CoNote’s moat is that they’re not simply consuming APIs. They have other pieces of infrastructure to keep them one step ahead of their competitors. 

Another thing at the core of what they do is a deep understanding of their users. Nisha believes CoNote provides a “simplistic flow” for the user to reach the solution to their pain point. 

How customers shape CoNote’s roadmap 

When building a brand-new tool, product development teams tend to devise a roadmap. But how much of that roadmap is pre-determined and how much is changed along the way, based on customer feedback? 

Nisha says CoNote’s ever-evolving roadmap is made up of around 70% user feedback and 30% CoNote’s own decisions. 

This is evident in the launch of their new feature, Action Items, which stemmed from repeat customer feedback and highlights the next steps users can take after using the product. 

When running the first round of CoNote interviews at the prototype stage, many of the themes and action items that arose resulted in relevant features being built into the product, such as the use of audio recordings and Zoom integration.  

Nisha says the fact they use their own product as part of their work gives them an even better insight into the changes and features that need to be added. 

Overcoming AI skepticism 

A recent User Interviews survey found that 44.3% of UX researchers are tentative about using AI in their work, as well as 5.5% of CX professionals, 9% of data scientists and 21% of product managers. 

But, in 2023, generative AI is almost inescapable. So how can product development teams fight their fears and use AI in ways that augment their processes – without taking them over? 

Nisha says, rather than fearing its potential, it's important to see generative AI as a way to take on the tedious parts of your work and do in a matter of minutes what would otherwise take a week. 

CoNote is a prime example of this. It takes you 85% of the way through the user interview process, leaving you with the materials you need to pull the most useful insights. 

Predictions for GenAI and user research 

Nisha believes there’s still a way to go before AI is taking on interviews all by itself. She sees a future where AI can replicate human-led experiences but says real, personal interaction is still the most efficient way to perform user interviews. 

CoNote has no plans to create AI-led interview experiences, instead focusing on augmenting the cycle and making development teams’ lives easier. 

To find out more about CoNote’s story and how generative AI is changing the face of user research, listen to episode 16 of the Built Right podcast. 

Get ahead in software development with HatchWorks’ AI-driven strategies – learn about Generative-Driven Development™ now.

Matt (00:01.944)

Welcome, Built Right listeners. Today we're chatting with Nisha Iyer, CTO of CoNote. CoNote makes qualitative research fast and easy with its AI-empowered platform, helping transcribe, analyze, and organize user interviews. It's a tool built for user researchers, and at HatchWorks that's a big part of what we do, so we can definitely sympathize with that type of tool. But welcome to the show, Nisha.

 

Nisha (00:26.498)

Thanks, great to be here.

 

Matt (00:29.1)

Yeah, excited to have you on. And today we're going to get into how generative AI, and more broadly just the democratization of AI, will fundamentally change user research and, more broadly, user experience. But let's start off there. Nisha, why user research? Why this problem? What part of user research is broken or needs help? And how is CoNote looking to solve it? What gap do you see in the market right now?

 

Nisha (00:58.31)

Yeah, great question. So, just a real quick intro: my background is data science. I've been in the industry for a little over 10 years. At my last company, I was working at a startup. I'd been there for five years, had built a tech team, and had come to a point where we were doing product development.

 

So with product development comes user research, right? To build a good product, you need to understand your users. That's how you get to product-market fit. That is how you really build what people are asking for versus what you're building in your own head. So we did a lot of user research there. And I worked directly with, you know, a small group that did the product development: a UX designer, an engineer, a data scientist, and myself.

 

Matt (01:28.745)

That’s right.

 

Nisha (01:49.978)

And we did a bunch of user interviews and went through the process of distilling them and really pulling out insights. And it was tedious. It took a long time, a lot more time than I had expected, you know, coming from my technical background. And I was pretty overwhelmed with the amount of information that we had to consume. Like, you know, you do the interviews first, record the interviews...

 

Matt (02:00.605)

Mm-hmm.

 

Matt (02:13.874)

Yeah.

 

Nisha (02:16.878)

transcribe them, and by the time you sit down to really distill what has been said, the important themes, the important takeaways, you have to pretty much go through the interviews again and go through every transcription. You know, the basic affinity mapping technique, where you're taking Post-its and grouping themes. It just takes a long time; it took, you know, a week to two weeks, because you don't have that set-aside time to just dedicate to the distilling of research.

 

Matt (02:32.49)

Mm-hmm.

 

Nisha (02:46.878)

And so what I found myself doing with my little team was just taking shortcuts, being like, okay, I remember this, I remember that, and then internally thinking, this isn't the right way to do this. I'm 100% biasing my findings by hearing the things that I really wanted to hear; obviously, that's just human nature. So what actually happened is that I had a...

 

Matt (03:06.374)

Yeah.

 

Nisha (03:15.726)

project come up where there was some kind of commitment to do 20 interviews in a period of two weeks and then distill the research. And I was like, this is insane. From my experience with research, this is a crazy requirement. And I thought, there must be some tool, there must be some AI platform that does this. We're at the age where this should be available. So I started Googling for it and I couldn't find anything. I was like, this is insane.

 

Matt (03:25.062)

Oh wow.

 

Nisha (03:46.042)

So I called my friend at the time, my coworker, and now my co-founder, one of my two co-founders, and I was like, hey dude, we should build this. We can do it together. Called my third co-founder and we all talked about it and all agreed that it was a huge pain point of not being able to synthesize research in a speedy amount of time. And then also just that unbiased synthesis.

 

So that’s how this came about, honestly. It’s just from personal pain points, which I think is a great way to build a product because you’ve actually experienced it and you’re like, I wanna use this to solve my problems.

 

Matt (04:25.712)

Yeah, that’s a great explanation. And you’re bringing me back to my product days where we would do user research interviews and I would always schedule like an hour after the user interview to like debrief, go through it again. And it’s like, you know, that’s a two hour block there. And then to your point, you got to synthesize the whole thing. You forget stuff, you mentioned bias, but there’s also recency bias where I’m gonna remember the most recent interview more so than the other one. And then you have like for us, we would have these Miro boards.

 

Nisha (04:42.347)

Yeah.

 

Nisha (04:51.446)

Exactly.

 

Nisha (04:55.979)

Yeah.

 

Matt (04:56.132)

were just huge with all these insights, and you're trying to connect the dots. It can get messy, so I can feel that pain. It's bringing back some memories from those days.

 

Nisha (05:00.128)

Yeah.

 

Nisha (05:08.254)

Yeah, exactly, 100%. And so, just to continue on, this journey has been quite serendipitous, honestly. I ran into my upstairs neighbor, who now also works with us at CoNote, and she was a user researcher. I told her the idea and she was like, oh my god, this is gonna make my job so much easier. Right. And...

 

Matt (05:19.102)

Mm-hmm.

 

Matt (05:29.626)

Oh, perfect.

 

Matt (05:35.004)

Yeah.

 

Nisha (05:37.31)

I'll stop there, but I just want to touch on that as well, because it wasn't like, oh my god, it's gonna take over my job. It was more like, this is gonna make my job so much easier.

 

Matt (05:46.256)

Yeah, and I love the point too: you're hitting on the pain points of the speed element, but there's also the quality piece with the bias. So there are some core value points you're starting to hit on. But I was digging through your LinkedIn, and your CEO, James, I'll mispronounce his last name, had this interesting quote out there from a survey by User Interviews. It said UX researchers were the most tentative...

 

Nisha (06:06.146)

Prisha.

 

Matt (06:16.44)

of all roles to use AI, with 44% saying they've tried to use AI but avoid it in their research. By comparison, CX professionals are at 5%, data scientists 9%, product managers 21%. What do you think is the reason behind that? Why are user researchers in particular less likely to adopt this technology that could potentially make things easier for them?

 

Nisha (06:44.542)

I mean, honestly, I think it all boils down to fear of the unknown. If you look at the 9%, right, data scientists are at 9%. Most data scientists understand exactly what's going on at the bottom level, right? It's mathematical. There's no magic. There's a lot of inference based on similar words and...

 

Matt (06:58.237)

Yeah.

 

Matt (07:02.931)

Mm-hmm.

 

Mm-hmm.

 

Nisha (07:09.75)

words transformed into numeric representations, and that's where it all stems from. So I think the number one thing is fear of the unknown. And then it goes into, I don't want this to take away my job. So then I feel like people get on the defensive, saying AI cannot do my job better than me, it's not going to replace me, so I don't trust it. I think instead, where we could go with this is...

 

Matt (07:16.195)

Mm-hmm.

 

Matt (07:31.589)

Yeah.

 

Nisha (07:37.758)

AI is augmenting my job. I can actually focus on the important pieces versus the tedious nature of things, and bring things to the forefront using a tool that does what I would be doing over a week or two weeks in a matter of minutes, right? And then I can spend the time taking those insights and making more inferences and pulling more information out of it.

 

Matt (07:41.428)

Hmm.

 

Nisha (08:06.002)

I can also speed my research cycles up. So I think that fear, we've heard it; we do our own user research with CoNote, and it's just, what's going on under there? It's a black box. And I think the way I would talk to people who have that fear is that it's not a black box. It's something that I can help explain and walk through. I think that would just get boring, though, because it's super technical. But...

 

Matt (08:19.217)

Mm-hmm.

 

Nisha (08:37.34)

It is all related to similarities and semantic understanding, and AI is also not here to take your job. I will end with that.
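
To make Nisha's point concrete, here is a minimal sketch, assuming the sentence-transformers library, of text becoming numeric vectors whose closeness reflects meaning. The model and example phrases are illustrative assumptions, not CoNote's.

# Text becomes vectors of numbers; "semantic understanding" is then just
# similarity between those vectors. Model and phrases are illustrative only.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
phrases = [
    "The signup flow was confusing",
    "I got lost trying to create an account",
    "The pricing page loads slowly",
]
vectors = model.encode(phrases)  # each phrase -> a 384-dimensional vector

# Cosine similarity near 1.0 means "says roughly the same thing".
sims = util.cos_sim(vectors, vectors)
print(f"signup vs account: {float(sims[0][1]):.2f}")  # semantically close, high
print(f"signup vs pricing: {float(sims[0][2]):.2f}")  # unrelated, lower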

 

Matt (08:47.46)

Yeah, and that's an interesting theme we've had across several episodes we've done lately: there is that fear of the unknown, that fear that it's going to take my job, that it's going to replace me. But this idea of a co-pilot, it's enhancing my skills, it's making me better, is a theme we've continued to hear. I was chatting the other day with, and I'm trying to remember the episode, it was Brennan at...

 

I think maybe that's episode 15; we'll see what it is when it launches, but it's Hypercontext. They're solving this with a tool for one-on-ones and performance reviews, and it was the same kind of idea. HR was like, you know, AI cannot solve this for me, there's no way. But what was interesting is, with the latest craze over the past year, you know, ChatGPT and all that kind of stuff...

 

they were able to play around with it and get a sense of how things can work, and it kind of opened up their minds a bit. I don't know, has some of that happened with user researchers as of late, where we've had this crazy hype cycle with generative AI and people see some of the power of it? Because I think with user research, it's so qualitative. I think that's one of the big hiccups there as well. It's like, you know, this is qualitative stuff. It's not ones and zeros.

 

Nisha (10:04.448)

Yeah.

 

Matt (10:04.764)

But with generative AI, it adds that kind of semantic piece to it, to your point.

 

Nisha (10:09.982)

Yeah, yeah, no, I think there is a growing acceptance, and, you know, people want to use this. When they start seeing the way that generative AI can augment their research versus take it over, people are more accepting. I think we actually just spoke to someone recently that's getting on the platform at a large corporation, and they were a little skeptical at first, and then...

 

Matt (10:37.053)

Mm-hmm.

 

Nisha (10:37.43)

we introduced CoNote as something that gets you 85% of the way, right? It's not doing all the research. It's just getting you to a jumping-off point where you can take those findings and build your own insights. And that helped her feel better. She was like, oh, okay, so it's not just giving me this output; it's more so giving me stepping stones to get to that output. And I think, when put like that, researchers seem to be more open to using the tool.

 

Matt (10:41.372)

Yeah.

 

Matt (10:54.812)

Mm-hmm.

 

Nisha (11:05.506)

People are using products like this, like any kind of generative AI product. You know, there are a couple out there in the market that seem to be getting some traction; I can talk to those later. But I do think it's still in the early adopter phase, right? People are still wary, and at CoNote we have to show people the reasons why they should be using it. And I think what we're doing for that is building a lot of user education...

 

Matt (11:22.915)

Mm-hmm.

 

Nisha (11:35.45)

showing people how they can use the tool to augment their research, and giving examples within CoNote of how you can do that.

 

Matt (11:43.568)

Yeah, it's an interesting kind of product. And then you get into the marketing problem, where they may be problem-aware but not really solution-aware, and you're trying to migrate them down that path. Let's get into the short-term evolution of how AI can impact user research versus the longer term. And in the short term, I'd be curious, and this may even be functionality within CoNote or stuff on y'all's roadmap: what's the short-term view of how...

 

generative AI, or AI in general, is helping the user research process? I mean, is it simply churning through a long interview and spitting out the insights? Where do you see it today? And then, what's the crazy, you know, utopian future of what it could be?

 

Nisha (12:33.09)

So right now, the power of CoNote lies within, you know... we are actually moving pretty fast. We released our initial beta live on July 18th, and we've already had a couple of releases since. The big, powerful generative AI piece right now... so, I just wanted to take a step back. I don't think CoNote is 100% generative AI. We have layered models. We do use traditional machine learning...

 

Matt (12:45.959)

Mm-hmm.

 

Nisha (13:03.146)

as well as large language models. And I think to that extent, there's already power there, and that's why we call it an AI engine versus just gen AI, right? And the big, powerful piece right now is that you can upload hours of research, multiple interviews, and then you can synthesize. So not only can you synthesize, you can transcribe the interviews and see the transcriptions, see the diarization by speaker, and then highlight key quotes.

 

Matt (13:12.596)

Mm-hmm.

 

Nisha (13:31.766)

You can then synthesize your interviews, and in under 10 minutes you will get the key themes, and then the keywords within each theme, and those keywords directly relate to sentences within the transcripts. So let's say I get four themes. I can click into those. I can then see where each speaker, like if I had five interviews, I can see where each of those speakers mentioned that theme. I can then click into a detailed view, where I can actually hear...

 

Matt (13:38.025)

Mm-hmm.

 

Matt (13:53.768)

Hmm.

 

Nisha (14:01.302)

the speaker saying it, so I can get sentiment. And I can also bookmark this and build a presentation that I can send out to a stakeholder who may be interested in some of the key quotes said over eight hours of interviews, which, as you know, would usually take so much more time. So yeah, I'll stop there. That is our current big bang of...

 

Matt (14:17.948)

Mm-hmm.

 

Nisha (14:27.762)

our AI engine and we definitely have some other plans ahead, but just wanted to stop for any questions.
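
As a rough illustration of the behavior Nisha describes, where a theme maps back to the sentences each speaker said, here is a semantic-search sketch; the speaker labels, sentences, theme, and threshold are all made-up assumptions, not CoNote's data or code.

# Sketch: given a theme, find where each speaker touched on it in a
# diarized transcript. Illustrative only; data and cutoff are assumed.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# (speaker, sentence) pairs, as a diarization step would produce.
transcript = [
    ("P1", "Honestly, the onboarding emails felt spammy."),
    ("P2", "I never finished onboarding; it asked for too much."),
    ("P1", "Exporting reports to PDF would save me hours."),
    ("P3", "Onboarding took me three sessions to get through."),
]

theme_vec = model.encode("onboarding friction")
sent_vecs = model.encode([s for _, s in transcript])

# Print each sentence whose meaning is close enough to the theme.
scores = util.cos_sim(theme_vec, sent_vecs)[0]
for (speaker, sentence), score in zip(transcript, scores):
    if score > 0.3:  # arbitrary illustrative cutoff
        print(f"{speaker} ({float(score):.2f}): {sentence}")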

 

Matt (14:34.832)

Yeah, it's an interesting point too. You mentioned generative AI is the hyped-up word right now, but machine learning, as you as a data scientist know, some of this stuff's been around for a very long time. This is not necessarily a new thing, right? And there's so much power just in machine learning and a lot of the things there as well. And I'm curious too...

 

Nisha (14:52.555)

Yes.

 

Matt (15:04.616)

It seems like every day there's another gen AI product coming out. How do you see differentiating when, you know, a lot of these AI tools have been a bit democratized, to where people have access to these large language models? You kind of mentioned that's not the only core point of your tool, but how do you build a moat when it's so much easier now to integrate some of this technology into a tool? How do y'all think about that?

 

Nisha (15:33.11)

I think we have to really think about the user, right? Sure, everyone can access these APIs and build and integrate them into their product. Are they actually thinking about the UI and the UX? That is a key piece of CoNote. As you know, also being in product, you want to have a really intuitive journey when you get to an app, right?

 

Matt (15:36.968)

Mm-hmm.

 

Matt (15:47.06)

Hmm.

 

Matt (16:00.082)

Yeah.

 

Nisha (16:00.942)

So you could integrate an API and build all the tech and be amazing and stacked and everything, and if you're building a SaaS product and don't have an intuitive front end, people are just going to stop there. They're not going to know how to get from point A to point B. And at CoNote... so I have James Frisha as CEO and co-founder, and then my third co-founder is Cameron Ridenour, and he's chief design officer, so his background is UX, right?

 

Matt (16:20.285)

Mm-hmm.

 

Matt (16:26.237)

Mm-hmm.

 

Nisha (16:28.914)

And so not only do we live and breathe these problems, we get in touch with people that live and breathe these problems; we have people that work with CoNote who do. And I think that our moat is, one, that we're not just simply consuming APIs, right? We have other pieces of infrastructure around them on the AI side that actually enhance and empower us to be a little ahead of...

 

Matt (16:50.494)

Mm-hmm.

 

Nisha (16:56.298)

not a little, a lot ahead of some of these gen AI companies that are simply consuming and using prompts with some of these APIs. And then secondly, just the fact that we have such a deep understanding of the user and are focusing on that when building our product, right? The experience, the interaction with the app. And if you're listening to this and are curious, please go check out conote.ai. It is live and free and...

 

Matt (17:02.738)

Yeah.

 

Nisha (17:24.798)

Matt, I’m not sure if you’ve checked it out, but when you’re on the application, compared to many other competitors that we’ve checked out and tried out, there is a very simplistic flow to get to the pain point that we’re solving for, which is really being able to speed up your research process.

 

Matt (17:47.332)

Yeah, I think part of the benefit there is you're very focused in on a particular type of user, which is user researchers, right? I think so many folks, and we see this with a lot of clients too, are trying to serve too many different people. And then you get back to user experience: how can you build, you know, not a simple, but an intuitive user experience when you're trying to serve different groups? Do you have, even within user researchers...

 

a persona that you're targeting, or are user researchers kind of the core? Is there a type of user research you're even more granularly focused in on?

 

Nisha (18:23.438)

Maybe not a type of user research per se, but definitely a type of user researcher: one who is, you know, interested in synthesizing multiple interviews and has a research cycle they can't keep up with. Or, potentially, where we're trying to drive people is the fact that research is more important than people give it credit for. It takes so much time. The research cycles are longer than development cycles, right?

 

Matt (18:29.671)

Yeah.

 

Matt (18:46.454)

Mm-hmm. Yeah.

 

Nisha (18:52.146)

Like, if I'm thinking about dev, I think of CI/CD and DevOps, and just general agile principles: a sprint is two weeks. There is no way researchers think they can finish a research cycle in two weeks. However, with CoNote, you could do a week of interviews, then synthesize and be done, ready with new findings for the next sprint. And I think that is a missing piece in the entire end-to-end process.

 

I have tirelessly worked with development teams and led engineering and data science teams. And the missing gap is that they don't get the user; they don't thoroughly understand the user research part, right? It's like a game of telephone: 10 people have spoken to each other before the engineering team hears what they need to build. And they can get so deep in the rabbit hole of, hey, this is how we're going to do it technically...

 

Matt (19:31.964)

Yeah.

 

Nisha (19:50.93)

and not be thinking of the actual user problem. And that's where CoNote comes in, right? CoNote gives you the ability to add continuous research to CI/CD. So in my mind, it should be CR-CI-CD. That should be instilled in the development process.

 

Matt (19:53.736)

Mm-hmm.

 

Matt (20:11.12)

No, I love that. And you're speaking our language. When we talk about Built Right, we talk about building the right digital solution the right way. And for building the right one, user research is a key element. And I love the concept you're talking about, where it has to be continuous. This is what we preach as well. So much of the time it's like, all right, let's go do our research; all right, discovery's done; let's go build the thing. But it has to be...

 

Nisha (20:26.4)

Yeah.

 

Matt (20:36.712)

built into the process. So I love that idea. You think of CI/CD, same type of thing: you need that feedback loop built in as you evolve the product. It seems like y'all are kind of dogfooding it a bit by using the product yourselves. I'm curious, how much of the roadmap are y'all defining as you use the product versus from customer feedback?

 

Nisha (20:47.842)

Right.

 

Nisha (21:01.294)

We definitely try to take more of the customers' feedback, just because they're the ones using it as customers. But I do have to say, I listened to a podcast recently, or started to listen to it, and I was like, let me just put this through CoNote and see what happens. And it was able to distill the key points so fast. And going back to the roadmap ahead you were asking about: in our next release, actually, on October 13...

 

Matt (21:06.845)

Mm-hmm.

 

Matt (21:18.182)

Yeah.

 

Nisha (21:29.75)

there's a really cool feature coming out that's called Action Items. So now not only do you get themes that have been synthesized during the process, but you actually get the items to action on, right? So, this is what your users have talked about; these are the actions to take. That came from us using it and from feedback. I wouldn't say 50-50; I'd want to say more like 70 users, 30 us, if we had to put a ratio to it.

 

Matt (21:38.716)

Mm-hmm.

 

Nisha (21:56.138)

But the great thing is, I think we all end up coming up with very similar pain points. And one of the main pain points we heard is, this is great, but it still doesn't give me action items, like where I need to go next. So I ran the first round of CoNote interviews, the user interviews we did before we had started building our product, right? We had just a prototype.

 

Matt (22:11.037)

Hmm

 

Nisha (22:23.418)

I ran those interviews on dev through the Action Items feature to see what the action items were. It actually gave me action items that were the features that we ended up building, which is crazy, right? It told us users want audio recordings, users want the ability to integrate with Zoom and Google Meet. I think that's...

 

Matt (22:39.812)

Wow. Yeah.

 

Nisha (22:52.502)

That's, like, I kind of got off on a tangent, but that's what happens when I get excited. I think that's something we've heard from users, and we've also experienced it ourselves, and we're really excited about it. And then, yeah, I think it's cool that we get to use it in our own process as well, because it definitely makes us realize things. Sometimes you can just be drinking from the fire hose; you think of really cool ideas, but then we use it and we're like, this is annoying, we need to change it. We spot the little things too.

 

Matt (23:10.248)

Mm-hmm.

 

Matt (23:20.564)

Mm-hmm. Yeah.

 

Nisha (23:22.923)

So yeah, it’s a good mix.

 

Matt (23:25.252)

Yeah, that's interesting. And, you know, let's get into the future state, way in the future. Where do you see the practice of user research going if AI continues to evolve? Is there a future where it's not even a real person doing the interview? Do I, at some point in the future, have an agent or a bot that's, you know, collaborating with somebody else and...

 

performing this research? Do you ever see a future where it looks like that? Where does your mind go when it starts to wander about where things could be way in the future?

 

Nisha (24:05.262)

I mean, I don't know about that. Yeah, great point, and I think people wonder about that. But for me, I think there's a degree of personal interaction. If you're a bot interviewing me right now... sure, maybe in some years there will be AI that's able to replicate each of us very well. But I do think that human-to-human interaction is important. You know, what, 94% of...

 

cues, or communication, is nonverbal, right? I think there's a lot to process outside of just the conversation. I'm sure AI will be able to replicate that, but I don't know if we want to make everything computer-like, you know, in the age of AI, and just take away the human element. I think, more so, the way I see CoNote evolving in the future is being able to scale across, right? Not becoming so...

 

Matt (24:59.879)

Hmm

 

Nisha (25:01.262)

focused on automating the entire user research process, but being able to scale to all types of research. Like, to be able to reach product-market fit and really understand our target audience, we want to focus on user researchers right now. But to be able to scale, I think where we go is redefining all types of research, right? How do we help in the political space? How do we help in academia? How do we go into...

 

Matt (25:17.064)

Mm-hmm.

 

Matt (25:23.717)

Hmm.

 

Nisha (25:29.662)

specific types of research. And I think that's where I see CoNote moving. That's where we're going in the future. I don't see us adding a component where we're gonna build AI bots that interview people. And so, once again, that's why I feel we're not taking away anything. It's more just, let's augment the cycles so that people can be more productive and up to speed with development teams. Just like, I just read...

 

Matt (25:43.642)

Yeah.

 

Nisha (25:58.81)

someone posted today about Copilot, the code AI, right? Just telling engineers that Copilot is something people should lean into. They can automate so much of what they're currently doing, some of the tedious, granular code writing that you don't necessarily need to spend as much time on, and focus on the bigger picture. I see that exact parallel...

 

to CoNote with user research.

 

Matt (26:29.636)

Yeah, that's a great connection point there. We're using Copilot a lot at HatchWorks, and it kind of just gets the mundane out of the way so you can think about the bigger problems. But I want to pause here for all the listeners: when you're thinking about product strategy and product in general, the way Nisha and team are doing it is a perfect example. They're solving a specific use case for a particular user, user researchers, but you can also see where her mind's going in terms of tangential...

 

other areas they could move into in the future. But you build this base with user researchers first, and that allows you the opportunity to expand further out; you've got to do that first. So that's a great way to think about it: don't try to boil the ocean, solve something specific first. But is there an area, you mentioned a couple, that you think is like, oh, this is definitely where we're going next...

 

from, you mentioned the political side, these different areas? Is there one that excites you outside of just traditional product and technology user research?

 

Nisha (27:39.555)

Yeah, I don't think I can say there's just one; I think there are multiple, right? People have already been using CoNote for marketing use cases. So I think that's probably the next place to really go, right? Like, hey, we want to distill all of these interviews or these podcasts and find the key quotes. And this is going to help us be able to...

 

Matt (27:51.004)

Mm-hmm.

 

Nisha (28:00.746)

make our marketing campaigns faster, just being able to pull these quotes out and have people saying them. So I think that's a place where we can really either be used right now or expand to immediately. I think political campaigns could be really cool, because we're also coming up on a big year. If there are campaign interviews, being able to distill those and, once again, play clips, depending on whoever we're working with.

 

And then I think academia is close to my heart and also a really great space for this. Let's say you're a master's student working full time, which I was, and you have multiple lectures, right, that you have gone to and that are recorded. Imagine being able to use CoNote to upload these lectures and then just be able to find the most important themes and use this to study.

 

Matt (28:41.5)

Yeah.

 

Nisha (28:55.87)

I think this basically works, with some tweaks, of course, right? Like, once again, like you said, we have focused on user research for a reason, and I see this being expanded into a line of products potentially, you know, CoNote Academia, CoNote Marketing, that kind of thing. But just imagine being able to take your notes and have an easy way to search across hours and hours of lectures.

 

Matt (29:10.563)

Mm-hmm.

 

Nisha (29:24.706)

That would have made my life so much easier, honestly, when I was doing my master's. So I just think, yeah, those are some key areas that I'm excited to focus on. I don't know if one will come before the other. I think we still have to really nail down this initial product-market fit and user group. But the exciting thing about CoNote is I feel like there's so much room to grow, and there are so many things that I want to act on, which makes me feel excited about it.

 

Matt (29:54.764)

Nice. Well, I think that's a good stopping point, Nisha. Thanks for being on the Built Right podcast. Where can folks find you and find CoNote? What's the best place to go?

 

Nisha (30:04.546)

You can email me at nisha@conote.ai. And you can also just check out CoNote. It's live, you get five free credits. Go test it out, email me, let me know what you think. Our website is conote.ai, and from the website you can log into the app. It'll take you straight there, and it's pretty easy. So yeah, we'd love to hear from you all.

 

Matt (30:31.888)

Awesome, great having you on Nisha, thank you.

 

Nisha (30:34.55)

Yeah, thanks Matt.

The post Generative AI: Augmenting or Replacing Research? appeared first on HatchWorks.

]]>
The AI-EQ Connection: How Emotionally Intelligent AI is Reshaping Management https://hatchworks.com/built-right/how-emotionally-intelligent-ai-is-reshaping-management/ Tue, 17 Oct 2023 06:00:20 +0000 https://hatchworks.com/?p=30045 As more and more companies bolt AI onto their products, the importance of being intentional with our use of it is growing fast.  So what are the best ways to embed AI into management products in a purposeful way?   In this episode of the Built Right podcast, we speak to Brennan McEachran, CEO and Co-Founder […]

The post The AI-EQ Connection: How Emotionally Intelligent AI is Reshaping Management appeared first on HatchWorks.

]]>

As more and more companies bolt AI onto their products, the importance of being intentional with our use of it is growing fast. 

So what are the best ways to embed AI into management products in a purposeful way?  

In this episode of the Built Right podcast, we speak to Brennan McEachran, CEO and Co-Founder of Hypercontext, an AI-empowered tool that helps managers run more effective one-on-ones, leading to better performance reviews.  

Brennan shares how he and his co-founder created the tool to aid their own work and later found how it could enhance multiple areas of performance management for leadership teams, creating faster, more streamlined processes. 

Read on to find out how AI can improve EQ in the workplace or listen to the podcast episode below. 

The story behind Hypercontext 

Hypercontext has a unique, organic backstory. Brennan and his co-founder, Graham McCarthy, worked together at a previous company, gaining enough experience as builders and sellers of products to become managers. 

As they focused on being manager-operators, Brennan and Graham concluded that their strengths still lay in building great products. They began building small-scale desk tools to make their work easier and, as COVID struck and everyone became a remote manager overnight, they made this their main focus. 

Brennan shares that one of the products that steadily took off was what would later become Hypercontext, helping managers become the best bosses their team has ever seen. 

Initially guiding managers through one-to-one and internal meetings using the superpowers of AI, Hypercontext has branched out into providing useful tools for performance reviews too. 

 

How AI is quietly revolutionizing HR 

Brennan remembers first taking demos out to HR managers and receiving a mixed response. 

Despite loving the concept, these managers were skeptical because of its use of AI and feared that it was too forward-thinking.  

However, the boom of ChatGPT and other AI tools in late 2022 caused a change of heart. Many HR professionals also realized that their managers had been using AI tools for their performance reviews for a while and warmed to the idea that it could be used to enhance their meetings. 

Brennan notes that cultural reservations can stand in the way of progress. With the AI wave tending to hit tech first: “If we’re not ready for it, we’re in for a world of hurt.” 

 

How AI is transforming EQ  

One of the main concerns surrounding AI is its lack of human touch. But Brennan suggests that, used in the right way, it can actually enhance the things that make us human. 

All managers, in HR or otherwise, have those tasks that are regularly cast aside in favor of more ‘pressing’ jobs. If they had “infinite time”, maybe things would be different. Brennan suggests that AI can take these tasks on and streamline the working processes of managers. 

He also explains how Hypercontext can provide the information that makes us more human. From its conversation starters, to the data it gathers about team members, it can actually make reviews and meetings more empathetic. 

Brennan says: “I think a lot of people have fears about AI taking jobs or removing the humanity in certain things. Done right, AI has that potential to make you more human, in and of yourself.”   

 

The future of developers

Did you know that, among users of Copilot, around half of the code committed to GitHub is AI-generated? It's no secret that AI will impact software development, but this fact raises the question – what does the future hold for developers? 

Brennan thinks this is the time for software developers to pivot to a new focus and suggests those doing “wildly different things” could be setting themselves up for success.  

 

Using AI to write performance reviews 

When Brennan realized that AI could be used to write performance reviews, he was hesitant to fight big-name industry players to find a solution. However, he was determined to be the person to do it the right way. 

He explains that he didn’t want to see a bolt-on tool created that “generates superfluous text around nothing” and was eager to make something that genuinely made managers better in their work. 

Brennan explains how Hypercontext allows managers to compile findings from multiple peer- and self-reviews, identify key themes and tee up the conversations to build upon these themes, all in a minute; something a human just couldn’t do! 

He adds: “80% of people feel like our process is both better and faster than their previous one. Who wouldn’t want that?” 

Fueled by the desire to make this tool the right way and prove that AI can enrich HR management, Hypercontext built a one-of-a-kind tool and set the HR-AI standard in the process.  

For more insights into using AI intentionally to become a better manager, head to episode 15 of the Built Right podcast.

Step into the future of software development with HatchWorks – see how our Generative-Driven Development™ can transform your projects.

Matt (00:02.067)

Welcome, Built Right listeners. Today, we're chatting with Brennan McEachran, CEO and co-founder of Hypercontext, an AI-empowered tool that helps managers run more effective one-on-ones, which leads to better performance reviews. It's trusted by 100K managers at companies like Netflix, HubSpot, and Zendesk. Welcome to the show, Brennan.

Brennan (00:23.414)

Thank you for having me. So excited to be here.

Matt (00:25.547)

Yeah, excited to talk today. So the topic we have for our listeners is one everyone really needs to stop what they're doing and listen to. Today we're going to get into how you should be strategically thinking about embedding AI into your products in an intentional way. And with the current AI hype cycle we're in, where everybody and their mom is bolting AI onto their products, and I don't mean that in a good way necessarily, this is a conversation...

worth having. But, Brennan, as a way to set context, I love starting with what problem our guests saw in the market that triggered them to start their company. It gives some good context for the background.

Brennan (01:08.806)

Awesome, can do on the context side.

I think the story of Hypercontext, the story of us founding it, is sort of an organic one. My co-founder Graham and I had a previous company; we've been working together for a little over a decade. And that previous company was successful enough, or maybe we were successful enough at building and selling product, that we ended up becoming managers. We had enough employees and staff around us to help us build a bigger and better business. And as we stopped...

building and selling, we realized that we had sort of accidentally fallen into a new career of managing, and that this new task of being manager-operators is...

Matt (01:51.201)

Mm-hmm.

Brennan (01:54.358)

completely different and very hard. So we did what we knew best, which was build. We built some little side-of-desk tools to help us be better managers and better bosses. And, long story short on that business, as COVID came around and wiped out industries temporarily, we were caught up in that mess, and that business disappeared almost overnight. But these little side-of-desk projects that we had built...

Matt (02:18.698)

Yeah.

Brennan (02:23.966)

exploded. Everyone in the world became a remote manager overnight, in the middle of a crisis, and felt the pain of being a manager, of being a remote manager, and all of the problems that come along with that. And these little tools that we put out on the internet went from, you know, a couple of signups here and there to, in some cases, thousands a week. And so we made some tough choices, but otherwise were able to...

Matt (02:29.215)

Mm-hmm.

Brennan (02:54.36)

pivot almost all of our energy towards what you see today: Hypercontext, which is, as you mentioned, building tools to make managers the best bosses their team has ever seen. So we started with one-on-ones, we started with internal meetings, team meetings, added goals to it, all the way through to just recently launching performance reviews. And I think that sort of leads into trying to build the performance review piece the right way.

Matt (03:18.616)

Hmm.

Matt (03:23.071)

Yeah, I love it. It's kind of like the Slack story, right? Where you built this thing on the side and suddenly, oh, this thing actually has legs. I was just chatting with a friend yesterday, same kind of thing. They had this side thing they'd built and people were asking about it, and it's like, well, maybe this is the thing. And it's kind of an interesting story when, you know, something like a pandemic just changes your whole business model, right?

Brennan (03:47.626)

Yeah, I think the saying, right, of scratch your own itch is sort of relevant here. We definitely started it as something to scratch our own itch and, as early as we possibly could, tried to get external people's input on it.

Matt (03:51.805)

Mm-hmm.

Brennan (04:07.858)

One of the things that I think I learned in the first business is that what you want, and what helps you, is not always the exact same thing as what helps other people. So we tried to look for the general solutions to some of these problems instead of specific ones that would work for me, being, you know, a tech guy, a product guy, whatever. We wanted to look for something that had sort of that broader appeal. And that's actually how we landed on one-on-ones. We initially thought, hey, there are maybe more meetings that we could...

Matt (04:17.732)

Mm-hmm.

Brennan (04:37.992)

And when we went out and tried to talk to people and figure out a general solution, the amount of build we would have to do was just so big. We ended up looking for, like, what are some commonalities? One-on-ones ended up being really appealing.

Matt (04:47.817)

Yeah.

Brennan (04:53.934)

for a variety of reasons, but one of the main ones is that an engineer having a one-on-one with their manager is very similar to anyone else at any other company having a one-on-one with their manager, almost by definition, right? You’re not supposed to talk about the tactical day-to-day stuff, and so you talk about more of the meta conversations, which can be similar. So that sort of led us down some of these pathways.

Matt (05:04.087)

Mm-hmm.

Matt (05:21.463)

Yeah, I think that's an interesting point. Just to pause there for anybody in product: we talk about building the right solution and building it the right way. For building the right solution, start with a smaller use case. That's a critical piece. You could have boiled the ocean and tried to figure out every meeting under the sun and all this stuff, and then your head would just explode with everything under HR. But you started with the one-on-one because it was one that was, you know, universal.

It needed help, right? So you identified this problem in the market. I just love that. And now it’s turning into more as you’ve built proof behind it.

Brennan (06:00.114)

Yeah, you know, we started with just exploring. The exploration sort of led us to say, you know, especially coming from the last business, where it was a lot of change management to sell the product, we wanted to avoid some of the change management. So we were like, what already exists? And the only thing that I could point to as proof was the calendar. So...

When we were building some of these products, it was: what already exists on the calendar? Let's not make people do something new. Let's look at their calendar first and see if there's anything we can do on that calendar to make it 10 times better. And so, you know, the one-on-one was there. So were the team meetings, the board meeting, the QBRs, all these other types of meetings. There are so many things that are interesting about one-on-ones for us as a business. You know, almost every manager has one, so there are lots of entry points into the organization...

Matt (06:29.495)

Mm-hmm.

Matt (06:54.881)

Yeah.

Brennan (06:55.212)

which was a key piece of what we thought our strategy would be. Very easy to try, because you can try it with one person. A town hall is tough to try; you have to do it with your whole company.

Matt (07:09.615)

Hmm.

Brennan (07:10.938)

So with a one-on-one, you can pick the most open-minded person on your team to try out the product. If it works well, you sort of get into some of the other things. It's very replicable, right? If you have something that works with one person, it should work for other people.

And in so many other ways it can spread, right? You have one-on-ones with direct reports. You also have one with your boss, all the way up to the CEO, and from the CEO all the way down to a different department, so it can spread. It exists on the calendar. So many things led us to it. And just because you have seven direct reports, that's seven one-on-ones, plus your boss, plus maybe a peer one-on-one. Just by frequency, it's a very high-frequency meeting. There are way more one-on-ones than there are team meetings at businesses. So...

Matt (07:32.803)

Mm-hmm. Yeah.

Matt (07:56.629)

Mm-hmm.

Brennan (07:57.424)

As we sort of bumped into all of these things, we said, you know, hey, maybe there's something here. What would it look like to do a 10x better job? And we sort of honed in on that use case. What are people already using for it? What's good, what's bad? Who are some of the competitors? And for a long time, the only people building tools for this space were the big, boring HR companies, right? And, like, no one wants to open up...

Matt (08:22.689)

Mm-hmm.

Brennan (08:24.53)

SAP, you know, or ADP, and go into this tiny little module to fill in a text box, when you could have Apple Notes or something like that. So we zoomed in on that for sure.

Matt (08:36.241)

I still have nightmares from using one of those time entry systems. I'm like, what button do I click? Who designed this thing? But they can't get out of their own way, because they have so much legacy, what's the word, technical debt that exists there, right?

Brennan (08:53.422)

And they have to cover so many things. They have to do payroll globally, for every culture and company type and all that stuff, and you're just one tiny module in there. So...

Matt (08:56.895)

Mm-hmm. Yeah. Oh my god. Yeah

Matt (09:05.059)

Yeah, but a lot of great, you know, PLG-type motions there, like you mentioned: the high frequency of using the product, building the habit. I think we talked about the book Hooked, which, if anybody has not read it, check it out, it's great. And there it is, the yellow blur in the background that stands out like a sore thumb, which is another great way of standing out. But let's get into AI now, right? So your company was started post-pandemic. This was pre...

gen AI, you know, the large language model craziness, the crazy hype there, even though they've been around for a long time. And you had AI integrated into the tool, but I'd love to get into this evolution. Because one thing that struck me when we talked earlier, talking about competitors embedding AI, is you said somebody's going to do this...

but they’re just not going to do it the right way, right? And we want to do it the right way. But talk about that evolution, because so many folks, they just bolt on AI. It’s from a marketing perspective. They just want to key into the hype. But it’s such a bad way to do it from a strategic standpoint.

Brennan (10:04.118)

Yeah.

Brennan (10:10.57)

Yeah, it's funny, the place where we have AI the least right now is actually on the marketing side, which is something we're trying to fix. It's definitely pretty heavy in the product. No, you're exactly right. We wanted to build, for lack of a better term, the world's best one-on-one tool for managers, right? And...

Matt (10:18.391)

Yeah.

Matt (10:34.648)

Mm-hmm.

Brennan (10:39.586)

that mission will never truly be accomplished because the market moves so quickly and we always have to serve the managers in that use case. But like largely, you know, quote unquote, mission accomplished. We sort of have the best hyper-connected workspace for one-on-ones for managers, whether it’s, you know, just one-on-ones or you wanna bring that team in once we have it for team meetings.

Matt (11:01.281)

Mm-hmm.

Brennan (11:03.978)

We added goals to it. So if you’re working on professional development goals, you’re working on team goals or OKRs. We have the largest library of goal and OKR examples on the internet built into the product. Like, you know, largely anything a manager, a team lead would need out of a platform for leading their team, sort of built it out of the box, PLG, go try it for free. And…

Matt (11:28.062)

Mm-hmm.

Brennan (11:31.182)

I mentioned some of the benefits of one-on-ones and some of these team meetings and that we get this organizational spread. Well, that started to happen, right? We would start to spread across these organizations through calendar invites. As people sort of discovered our tool and shared our tool, if COVID taught us nothing else, it’s that, you know, look for the super spreaders, right? Like we were sort of looking for the people who would spread our tool internally. And they did. And then

Matt (11:45.067)

Hmm.

Brennan (12:01.09)

Because we say the word manager so often, because we say the word one-on-one so often, when it came time for the organization to look at this tool and say, well, what is this tool used for? It's often used for one-on-ones and for goals. Managers love it. It sort of fell on the desk of HR. It fell into the budget of HR. And HR looked at it and said, this is great.

this actually might be a sign that our organization is maturing. Maybe we need some more of these big HR tools, right? Maybe we need a platform for performance management, which can do all of these goals and can do all these one-on-ones, but can also do surveys and can also do performance reviews and can also do all these other things. And the managers are like, no, don't get in our way. Don't ruin our thing. And often they would use us almost as an excuse to buy their tool, to buy the big, boring HR tool, consolidate the money that the company is spending on us and, you know, double it, triple it, on

Matt (12:34.18)

Hmm.

Brennan (13:02.192)

something else and the management revolts and stuff like that. We would try to fight back as best we could, but ultimately when we started talking to the folks in HR, they were like, well, I need performance reviews or something like that. We didn’t want to build it, but we started looking at building it.

and sort of taking that fight on. You know, what would it look like if we did round out our platform to incorporate some of the more traditional aspects of performance management? Hate that word, by the way, performance management. That's like a micromanage-y word, like HR is gonna manage your performance. I think it's performance enablement. I think that the goal of HR getting involved in…

Matt (13:27.191)

Mm-hmm.

Matt (13:40.64)

What do you prefer? Is there another word you prefer?

Matt (13:48.707)

Hmm

Brennan (13:52.546)

performance management is to help people perform. They're not really there to micromanage performance. They're not getting fired if marketing misses their KPIs or sales misses their KPIs. So why are they in charge of performance management? It doesn't even make sense. But enabling performance at the company, that makes sense for HR to centralize. So we looked at

Matt (13:55.192)

Yeah.

Matt (14:08.163)

Yeah.

Brennan (14:18.454)

you know, what were the other people doing? Maybe there’s integration plays that we can do, et cetera. And one of the first things that popped into our mind was just the quote that HR kept bringing to us, which is, well, if people are doing their one-on-ones properly, if they’re doing their one-on-ones right, then come performance review season, there should be no surprises. No employee should be surprised. So we sort of thought, well, not, you know,

Matt (14:42.327)

But how many managers do that though, is the thing, right? Like I’ve been through that experience. It’s like, I have great intentions come January. I’m gonna document everything that happens. I’m gonna have this great thing at the end of the year. And I’m okay at it sometimes, but I’m not great at it, right?

Brennan (15:01.13)

Well, and that's like a huge piece of what our core product tries to solve, right? Like, how do we make a one-on-one tool so good that you prefer to use it over something else, and can we build in some of these workflows where you can follow through on those great intentions? Um, I think most managers with those great intentions try to, uh, implement them with like a Moleskine notebook, right? They get like a new Moleskine notebook and they're like, this year it's going to be better, and that Moleskine notebook gets like four pages filled in and then it's tossed to the side. And HR said,

Matt (15:11.461)

Mm-hmm.

Matt (15:22.66)

Hmm.

Matt (15:26.877)

I like that analogy. Yeah. Yep.

Brennan (15:31.164)

well, we're gonna make it better by giving you like a Commodore 64, and you're like, I'm not gonna use a Commodore 64 for my, like, you know,

notes, that’s insane. I’ve got modern tech over here. So we wanted to build, what would the Apple version of this look like? And you’re exactly right. If we did the daily habits right, things would be much better. We’ve spent so much time on the daily habits that we legitimately help managers to the point where they spread the word internally. So when we went to HR, it was like, well, if they’re doing everything right,

Matt (15:39.808)

Yeah.

Matt (15:47.268)

Mm-hmm.

Brennan (16:07.094)

then there should be no surprises in performance reviews. Can we actually make it so that it’s not just that there’s no surprises, that it’s effortless? What would that look like? And we started exploring around with AI just sort of making like proof of concept demos of, can we take the notes from your one-on-ones? Can we take the goal updates on your OKRs?

Can we take some of the stats our platform can generate and integrate that with your HRIS system? And maybe you calibrate the AI with a couple of quick questions. Maybe the AI can actually write the review for you. Could that actually work?

The tech demo sort of proved that it could. And to the point where, you know, I'm sitting there looking at it being like, I don't know if I wanna build this. I don't know if I wanna enter this battle and fight some of these big name players.
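
To make that pipeline concrete: the flow Brennan describes (one-on-one notes plus goal updates plus a few calibration questions, fed to a language model that drafts the review) might look roughly like the sketch below. This is a hypothetical illustration, not Hypercontext's actual code; every function and field name here is invented for the example.

```python
from dataclasses import dataclass

@dataclass
class ReviewContext:
    notes: list[str]              # six months of one-on-one notes
    goal_updates: list[str]       # goal / OKR check-ins over the period
    calibration: dict[str, str]   # manager's quick calibration answers

def build_prompt(ctx: ReviewContext) -> str:
    """Assemble the accumulated daily-habit data into one grounded prompt."""
    sections = [
        "One-on-one notes:\n" + "\n".join(ctx.notes),
        "Goal updates:\n" + "\n".join(ctx.goal_updates),
        "Calibration:\n" + "\n".join(f"{q}: {a}" for q, a in ctx.calibration.items()),
    ]
    return (
        "Draft a balanced performance review grounded only in the data below.\n\n"
        + "\n\n".join(sections)
    )

def draft_review(ctx: ReviewContext, llm_call) -> str:
    # llm_call is any text-completion function, e.g. a thin wrapper
    # around whichever LLM API you use.
    return llm_call(build_prompt(ctx))
```

The design point, as the conversation goes on to argue, is that the draft is only as good as the everyday data feeding it; without months of categorized notes, the same prompt produces exactly the bolt-on output Brennan warns about next.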

Matt (16:45.518)

Mm-hmm.

Brennan (16:58.838)

but someone is gonna do it, and those people are not gonna do it with the right intentions. They're gonna do it as a marketing play, as a bolt-on thing. They have performance reviews where no one uses the one-on-one functionality, and they are just gonna have an AI blindly, dumbly generate some, you know, superfluous text around nothing. And people are gonna, you know, feel wowed temporarily until the gimmick sort of wears off.

Matt (17:03.998)

Mm-hmm.

Matt (17:16.94)

Yeah.

Brennan (17:26.798)

And in order to accomplish, I think, using AI the right way and implementing this sort of AI in HR the correct way, you need the daily use. You need the use from the manager every single day, documented, properly categorized, in order to build on the everyday, to write that end-of-quarter, biannual, or annual review.

And we just so happened to have spent, you know, an extreme amount of energy over years working on those daily habits that we sort of felt uniquely able to build this the right way, in a way that it seemed like no one else was even able to attempt. So we sort of threw our hands in the air and said, like, you know, we got to get this out first so that people know, you know, the right way to do this. And then that's what we launched. So far, it's been amazing.

Matt (18:20.743)

Now, take me back to when that happened, because if I recall, y'all were trying to do some of this before, you know, OpenAI and others kind of opened up their APIs. And then they have that and it kind of just democratized things in a lot of ways, where you get access to these large language models that you could then apply to your data, correct? And then it becomes differentiating, because it's unique to you, even though you're leveraging something that's, you know,

Brennan (18:33.423)

Yeah.

Matt (18:51.395)

I guess you get into a whole debate of OpenAI not technically being open source, but that's all another discussion.

Brennan (18:57.206)

Yeah, that's right. No, you're exactly right. We've been using machine learning and AI for quite a while on the, how do we make the meetings better, right? So from categorizing what you're talking about in a one-on-one, to mapping those topics with AI into an engagement framework. So if you're not talking about certain things in an engagement framework, the system's aware of that. It's able to use that information to suggest

Matt (19:05.836)

Yeah.

Brennan (19:23.95)

content to cover your blind spots. So if you haven't checked on someone's motivation in a while, we'll sort of recommend, here's a conversation starter that checks on motivation, because you haven't checked on motivation with this person in a while. Things that, like, busy managers have all the right intentions about, but they're just busy, right? They're not gonna be able to keep track of, like, when was the last time I checked on this person's motivation. It's more like, you know, if things are silent, I'm gonna assume things are good, until I get sort of punched in the gut a couple weeks down the line.
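
As a toy illustration of that blind-spot logic (purely hypothetical, not Hypercontext's actual implementation), you could track when each engagement topic was last covered with a report and surface a starter for the stalest one:

```python
from datetime import date, timedelta

today = date.today()

# Hypothetical data: last date each engagement-framework topic came up
# with a given report, e.g. derived from categorized one-on-one notes.
last_covered = {
    "motivation": today - timedelta(days=45),
    "growth": today - timedelta(days=10),
    "workload": today - timedelta(days=3),
}

# Example starters; a real tool would draw on a curated library.
STARTERS = {
    "motivation": "What part of your work has felt most energizing lately?",
    "growth": "Is there a skill you want more room to practice this quarter?",
    "workload": "How sustainable does your current workload feel?",
}

def suggest_starter(last_covered, stale_after_days=30):
    """Return a starter for the topic that has gone unchecked the longest."""
    stalest = max(last_covered, key=lambda t: (today - last_covered[t]).days)
    if (today - last_covered[stalest]).days >= stale_after_days:
        return STARTERS[stalest]
    return None  # nothing is overdue yet

print(suggest_starter(last_covered))  # -> the motivation starter
```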

Matt (19:26.761)

Mm-hmm.

Matt (19:34.723)

Yeah.

Matt (19:51.997)

Mm-hmm.

Brennan (19:53.47)

So we've been using some of those things, same with our next steps. You type a next step out, we would automatically figure out the date with machine learning, we'd automatically figure out who to assign it to with machine learning, all that stuff. When it came time to sort of think about using that to a greater extent,

uh, in writing the formal written, um, feedback for the managers, obviously there's way more data we wanted to pull. It wasn't just, you know, what have you typed recently? It was like six months of meeting notes. It was six months of goal updates. Um, on top of, you know, data from reporting, on top of a whole bunch of other stuff. So it could generate a lot more, um, high-quality feedback. But there's also little things about, like, you know, coaching and training this model.

And when we first took these sort of tech demos out to folks in HR, the reaction was like, wow, I feel like I saw the future, I just don't believe it will work. Like, I just don't believe the tech's there yet. I'm like, what do you mean? I just showed this to you. They're like, yeah, no, I see it. I'm looking at the future, but I just don't think the world is there yet. And I think they were more reacting, like, culturally, like this wouldn't be

Matt (20:50.456)

Don’t believe it, yeah.

Matt (21:00.792)

Yeah.

Brennan (21:05.962)

You know, they feel like they've seen it, but like they're not sure if they've just, you know, if they're being tricked or what's going on. And so the reaction was overwhelmingly positive, yet very reserved. And then ChatGPT came out, it was like November, December, and, you know, it obviously took off, exploded, and everyone and their uncle was using ChatGPT for a bit.

Matt (21:12.776)

Mm-hmm.

Brennan (21:34.154)

Come January, when everyone did their annual reviews, most people in HR found out that like half of their managers had, you know, ChatGPT writing reviews for them. And so a few of the HR folks I, you know, talked to in maybe November or October the prior year were like, you're completely right. I got it completely wrong. The world is ready for this. We're already doing it. The issue is, like, obviously with ChatGPT, you know, you're sending private data to ChatGPT, it's, you know, obviously biased in its own way, doesn't

Matt (21:43.147)

Yeah.

Brennan (22:04.088)

have information about it, doesn't have all this knowledge. So they sort of came back to say, all right, we should check this out in earnest. So a lot of the stuff I think around AI is sort of like a cultural reservation around whether we're ready for it. And I think what's interesting for tech companies to sort of catch up on is like,

Matt (22:17.593)

Mm-hmm.

Brennan (22:25.494)

we sort of have to be ready for it, right? Like the wave hits tech first. And if we’re not ready for it, then we’re in for a world of hurt. So I think playing with some of these things internally feels a lot more palatable than playing with some of these AI tools with like your prospects or your customers, right? That’s a little bit more scary, so.

Matt (22:30.295)

Yeah.

Matt (22:42.678)

Mm-hmm.

Matt (22:46.319)

Yeah, that's true. I mean, we're doing this right now at HatchWorks, right? With generative AI, one of the big areas it will impact is software development. So we're leaning into it, almost trying to disrupt ourselves before the competition or somebody else does. So we're taking a similar approach where, OK, we have this new tool and functionality. How can we leverage it and empower our teams with it, and ultimately our clients, at the end of the day?

Brennan (23:13.078)

Yeah, I heard the stat the other day. I think it was the CTO of GitHub saying that, of the GitHub Copilot users (Copilot is sort of auto-complete within your development editor), about half of all code committed into GitHub is written by an AI. So, of the users who use Copilot, or OpenAI's code AI,

Matt (23:24.303)

Mm-hmm.

Matt (23:33.771)

Wow, I have not heard that yet.

Brennan (23:41.33)

about half of the code checked in is written by AI. So, I don't know, if you chart that curve a few more years into the future (some of this stuff is like a year old), will we have developers in the way we've always known them, or known them recently? I think they'll still be around, but will they be doing just wildly different things, right? And obviously the people who are…

Matt (23:53.646)

Mm-hmm.

Matt (24:04.876)

Yeah.

Brennan (24:09.398)

The developers who are doing wildly different things first will have a leg up quite a bit on those who aren’t, or the companies who have armies of developers like that. But for us, it’s even more nuanced in that we’re building an AI tool now, in that we want to use the AI tools to understand what are the interfaces that work for AI right now.

Matt (24:27.997)

Mm-hmm.

Matt (24:35.32)

Yeah.

Brennan (24:35.53)

So a big part of, like, us building it right is, we actually have to artificially inflate how many AI tools we use so that we get a sense of, like, oh, this UI pattern really, really works. This UI pattern

Matt (24:47.715)

Hmm

Brennan (24:49.334)

um, doesn't, right? Where we had years of understanding the UI patterns of search, we've had years of understanding the UI patterns of, like, top bar, sidebar navigation. How do you interact with an AI? No one knows, right? Like, um, we're in early, early days of just understanding how you interact with it, and obviously the first breakout interface has been chat,

Matt (25:02.626)

Yeah.

Brennan (25:11.606)

like, surprise, but there's a lot more. And so, you know, just rolling these out, you know, even some basic things, and getting not only customer feedback, which has been really helpful, but us using tools like GitHub Copilot to understand that the auto-complete UI using AI is like a really powerful interface, right? Like, it can sort of predict a paragraph of text at a time, which is an incredible time saver.

Matt (25:39.74)

Mm-hmm.

Brennan (25:41.74)

I mean, half of code checked in is sort of accepted AI code. So if it can autocomplete code in your code base, like, imagine what it could do on some of the more monotonous tasks across your workforce.

Matt (25:54.619)

Yeah. Yeah, the QA aspect of it becomes ultra important. But then again, you can leverage AI for that as well. And I think the UI element you mentioned was interesting. One of the best explanations I've heard is from the CEO of HubSpot. He talked about how we've lived in this kind of imperative approach of, like, point and click, and that's how we interact with technology. But it's potentially moving into more of a declarative approach, which can really change how we interact with technology at a fundamental

level, so it’s really interesting there. I want to get your take here to kind of round out the episode. Your product’s in HR. It’s innately kind of this intimate human thing, right? You’re talking about people’s careers, their goals, what do they want to do? It’s this human thing. Does AI degrade that experience in any way? What’s your view on how…

Brennan (26:46.946)

Yeah.

Matt (26:52.711)

AI impacts that either for the positive or the negative.

Brennan (26:57.598)

Yeah, such a good question. When we, you know, pre AI, when we were first starting out, people used to ask like, you know, using an app for one-on-ones, that seems silly. Management is sort of like looking at people, you know, face to face, eye to eye. And obviously with remote, that becomes a little bit more challenging.

Matt (27:16.024)

Mm-hmm.

Brennan (27:18.95)

Um, and I used to always say, like, this sort of feels like the same thing that the older generation would say to the younger generation about almost every new technological advance, right? Like, people used to like to read newspapers, you know, feel books and read newspapers and, um, uh, you know, have journalistic integrity, and these bloggers, what do they know? And, um, or dating, right? Like, shouldn't you meet people in real life versus, like, an app? And obviously we know the apps have been

Matt (27:35.17)

Mm-hmm.

Brennan (27:49.524)

taking care of the majority of marriages, I think, in North America for a few years now. Why not the workplace? Why not some of these management practices as well? But AI is a whole new angle to that.

Matt (27:53.433)

Hmm.

Brennan (28:04.394)

because if the AI is doing it, then what are we doing, right? Especially when it comes to the things that we think of as innately human. If the AI is writing performance feedback, then what the heck am I doing as CEO, right? And I think that’s where people can get weirded out or scared, et cetera. But I think that the first thing is that the goal, at least the way we’re trying to build it, is to allow

Matt (28:17.043)

Mm-hmm.

Brennan (28:34.082)

the humans to be more human, to have more EQ, to have more time to spend with each other face to face. And so you look at, well, what can AI do? And I think the current state of AI, and this is obviously gonna be out of date, even if you publish it tomorrow, but the current state of AI is if you can sort of train an intern to do it, you know, in their first couple of weeks on the job.

Matt (28:37.017)

Mm-hmm.

Matt (29:02.518)

Mm-hmm.

Brennan (29:03.082)

you can get an AI to do it right now. So the first task is sort of breaking down these little things into small enough tasks that an intern could do it in the first couple of weeks on the job. And most tasks we do at work can sort of be broken down in that way into these repeatable steps. But the difference is when you have AI, you can kind of scale that to the almost like infinite dimension. So…

Matt (29:06.467)

Yeah.

Brennan (29:30.802)

most managers could look through six months of one-on-one notes for all seven people they have to do a performance review on. They could do that, but they don't. Zero percent will, because they don't have the willpower to do it. They don't have the discipline to do it. They don't have all of these little things that are needed, and they don't have time. Truthfully, they don't have time. They're dealing with a fire, and that fire is happening in their functional department, and HR is like, by the way, you have to get your reviews done.

Matt (29:38.871)

0% will.

Matt (29:44.121)

Yeah.

Matt (29:58.168)

Yeah.

Brennan (30:00.676)

So they're pretty busy. They could look through six months of goal data. They probably won't. So biases creep in, and some of those biases are okay to have. Some of those biases are less so, and people often talk about biases in AI. But the AI can actually reduce other bias; it can severely reduce recency bias, because it can read all of this data. It can severely reduce other sets of biases, because you can withhold information

about, is this person a male or a female? Is this person named John or some other name, right? That otherwise would lead to bias. You can sort of take some of those things out, and the AI doesn't know about it, so it's just going to treat everyone the same. And you can inject the bias of, you know, making it universally harsher or universally softer, and put everyone on an even playing field. But what's…

Matt (30:36.333)

Hmm.

Brennan (30:58.794)

Further to that, in many of these companies you're doing a 360 review. So you have a person you're reviewing, the manager's got to do that review, but they're doing a self-evaluation, peer evaluation, et cetera. So again, the manager, for all of those seven people they're doing these reviews on, they could look at all three peer reviews that they, you know, received on this person, and the self-review. And they could analyze the different scores and notes of feedback between these various different peers. And they could, you know, group those

into themes and psychoanalyze that and understand if there's a confidence issue happening with this direct report. They're just not going to do that. They just don't have time. But all of the things I just mentioned there, the AI does in under a minute. So it will analyze what everyone else submitted about a person. It will try to understand if there's a theme in any of these peer responses that's different from the themes in the self-eval, that's different from the themes

in your own eval. It will highlight those differences and what some of the common causes are, help you frame some of your responses to better tee up a productive conversation instead of, like, a frustrating conversation, and give you prompting conversation starters for what to talk about in your next one-on-one that could help resolve some of these issues. All of these things the world's best manager would do

Matt (32:19.311)

Hmm.

Brennan (32:27.014)

um, if they had infinite time, and they don't. What's neat about AI is you can give those people, all of the people, sort of infinite time in certain directions, and all of the directions that AI wants to go are the ones humans don't want to go. And so, in a way, bringing the AI into some of these tasks allows you to do the things that are innately more human, and do that way more. Like, because you have all this knowledge, you

Matt (32:54.326)

Mm-hmm.

Brennan (32:56.688)

can go and be more empathetic with this person, right? Because you now have the notes needed and some of the questions needed to be more empathetic. So yeah, I think a lot of people have fears about maybe AI taking over jobs, or AI, you know, removing some of the humanity in certain things. And I think often the stuff that AI might end up doing is the things we knew we should always do, but we got too lazy, right? And now that we

have this, you know, almost infinite willpower source to pull from with AI, what are we now able to do, knowing that we're doing the best job ever in some of those places we were previously lazy? And often, I think, that's being more human, being a better person in many ways.
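
As a concrete (and entirely hypothetical) illustration of the identity-withholding idea from a few turns back, a minimal redaction pass might strip the employee's name and gendered pronouns from review inputs before anything reaches the model, so the model scores everyone on the same anonymized footing. Real systems would need proper entity recognition; this naive mapping is just a sketch:

```python
import re

# Naive pronoun mapping; e.g. possessive "her" really needs the context
# a real NER/coreference pass would provide.
PRONOUNS = {"he": "they", "she": "they", "him": "them", "her": "them",
            "his": "their", "hers": "theirs"}

def redact(text: str, employee_name: str) -> str:
    # Replace the employee's name with a neutral token.
    text = re.sub(re.escape(employee_name), "[EMPLOYEE]", text,
                  flags=re.IGNORECASE)
    # Neutralize gendered pronouns on word boundaries, preserving case.
    def swap(match):
        word = match.group(0)
        repl = PRONOUNS[word.lower()]
        return repl.capitalize() if word[0].isupper() else repl
    pattern = r"\b(" + "|".join(PRONOUNS) + r")\b"
    return re.sub(pattern, swap, text, flags=re.IGNORECASE)

print(redact("John missed his Q3 target, but he recovered.", "John"))
# -> "[EMPLOYEE] missed their Q3 target, but they recovered."
```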

Matt (33:45.555)

Yeah. And AI has that potential to, if we do it the right way, to actually make you more human in and of yourself. And I love the EQ tie. It’s like AI done the right way enhances EQ for the individuals using it. It’s kind of like this co-pilot, good name for GitHub, right? But it’s like a co-pilot mindset of how AI is used.

Brennan (34:03.73)

Yeah. I'll give you such a good example of that, 'cause this is something that's universally come back from our customers. Like, um, we take your notes, we take your goal updates, et cetera. But before we sort of do the written feedback, we also ask some calibration questions. Um, and those calibration questions might be the same questions from your self-evaluation, from your peers' evaluation, from your manager's evaluation. In there, you might get different scores. You might get different jot notes from your peers and your manager or whatever,

and the AI will just go in there. We show these steps, um, to our users. The AI can be a black box, right? So what we've tried to do is, like what we were taught in math class: instead of just spitting out the answer, we sort of show the long division step by step. Show your work. So, um, you know, one of the areas we show our work is in analyzing the peer responses, or sorry, analyzing sort of those calibration assessment questions. We give them to the manager, so the manager can do all of the analysis themselves if they want to, or the AI will do it

Matt (34:43.666)

Mm-hmm.

Matt (34:49.931)

Show your work, yeah.

Brennan (35:08.464)

for them and just sort of summarize the insights out of it. And almost universally, everyone who's seen that has been like, that's the most valuable thing I've…

Matt (35:17.123)

Mm-hmm. Yeah.

Brennan (35:17.166)

had in my management career, right? Someone, something to read this, analyze it, talk about the surprising, the interesting, the confusing. You know, and like some of the stuff that it gave me and others is like, you know, the person rated themselves low here, their peers rated them high, you rated them, you know, mid to high. And the commentary was sort of overwhelmingly positive. The fact that they’re rating themselves low either suggests that they might have

a misunderstanding of expectations or something else. Like, maybe you want to, you know, bring this up, or introduce this, or ask about these types of things in this type of way. And every manager is like, holy shit, right? Like, that's incredible. And the truth is, like, you know, obviously I'm biased in saying that, you know, our tool is incredible, but legitimately this is what other people have been saying. So

Matt (35:49.767)

Mm-hmm.

Matt (36:01.449)

Yeah.

Brennan (36:16.394)

You know, if you do these things right, if you kind of show your work, you break the steps out, you kind of break things into these tiny steps that AI can do a great job on, you can sort of build into some pretty incredible stuff. And that's where we've been getting some of the latest stats I think I shared with you earlier, right? 80% of people feel like our process is faster than their previous one, if not significantly faster. And 80% of the people receiving feedback say it's better than

Matt (36:45.348)

Hmm.

Brennan (36:46.388)

prior, right? So 80% better, 80% faster. Like, who doesn't want this in their work life? And I think, you know, we're doing that for HR, we're doing that for performance reviews, but you can sort of tackle any functional area, any pain point, and say, all right, how do we make this 80% faster and 80% higher quality as well? And what do you do now, as a functional person in that role, with that much free time back? And I think the answer is, do more human things.

Matt (36:47.82)

Yeah.

Matt (37:16.511)

Yeah. And quick plug for a Built Right episode in the past, episode 8. The intern comment you mentioned triggered me. So we had Jason Schlechter, founder of AI Empowerment Group, on. We were talking about how to identify and vet winning use cases with generative AI. And the struggle, a lot of the times, is framing and how folks think about how they can use AI. And one thought exercise he likes is: pretend you literally have an army of interns that you can put to work. Now what could you do?

Brennan (37:38.527)

Yeah.

Matt (37:46.239)

It's that reframing of how you can empower things with AI, just to start thinking through use cases. So I wanted to do that quick plug. But Brennan, this has been an awesome episode, great chat. Where can folks find you?

Brennan (37:56.258)

Absolutely.

Brennan (38:02.178)

So you can find us at hypercontext.com, or we're Hypercontext app on Twitter, you know, the LinkedIn, similar words. And then myself personally, I'm on LinkedIn, find me, Brennan McEachran, or Twitter, I underscore am underscore Brennan. Should be pretty universal across there.

Matt (38:20.897)

Awesome.

Well, Brennan, thanks for joining us on Built Right today.

Brennan (38:27.21)

All right, Matt, thanks for having me. Thanks everyone.

The post The AI-EQ Connection: How Emotionally Intelligent AI is Reshaping Management appeared first on HatchWorks.

]]>
Mastering Marketing Strategy with Matt Paige https://hatchworks.com/podcast/mastering-marketing-strategy/ Thu, 05 Oct 2023 15:35:02 +0000 https://hatchworks.com/?p=29849 In this episode, Matthew Carnevale sits down with Matt Paige, a marketing maven who has climbed the ranks from project management to VP of Marketing & Strategy. Matt opens up about his career journey, sharing invaluable insights from his experiences in both telecom giants and dynamic startups. Whether you’re an aspiring marketer or a seasoned […]

The post Mastering Marketing Strategy with Matt Paige appeared first on HatchWorks.

]]>
In this episode, Matthew Carnevale sits down with Matt Paige, a marketing maven who has climbed the ranks from project management to VP of Marketing & Strategy. Matt opens up about his career journey, sharing invaluable insights from his experiences in both telecom giants and dynamic startups. Whether you’re an aspiring marketer or a seasoned professional, this episode is packed with actionable advice to help you elevate your marketing game.
Key Takeaways:
  • Career Transition: Matt Paige didn’t start in marketing; he transitioned from a role in project management. His journey is a testament to the power of adaptability and continuous learning.
  • Lessons from Diverse Experiences: Working in both large telecom companies and nimble startups has given Matt a unique perspective on marketing strategy. He shares the lessons he’s learned from these contrasting environments.
  • The Art of Positioning: Matt emphasizes the importance of intentional positioning in the market. Learn how to craft effective messaging that resonates with your target audience.
  • Strategic Internal Reporting: One of Matt’s keys to success is mastering internal reporting to drive growth. He provides insights into how strategic reporting can align teams and guide decision-making.
  • Actionable Advice: Matt offers practical tips for aspiring marketers and future leaders. From understanding the nuances of your industry to mastering the art of communication, his advice is both actionable and inspiring.

The post Mastering Marketing Strategy with Matt Paige appeared first on HatchWorks.

]]>
The 6 Adaptive Leader Behaviors with Ebenezer Ikonne https://hatchworks.com/built-right/6-adaptive-leader-behaviors/ Tue, 03 Oct 2023 06:00:28 +0000 https://hatchworks.com/?p=29967 It’s so easy to get change management wrong. Whether it’s sweeping organizational changes or tweaks to your systems and processes, leaders need to adapt and manage the challenges that come with any change.  When bringing big changes to your team, there could be differing opinions, resistance to change, or anxiety over changes. We call these […]

The post The 6 Adaptive Leader Behaviors with Ebenezer Ikonne appeared first on HatchWorks.

]]>

It’s so easy to get change management wrong. Whether it’s sweeping organizational changes or tweaks to your systems and processes, leaders need to adapt and manage the challenges that come with any change. 

When bringing big changes to your team, there could be differing opinions, resistance to change, or anxiety over changes. We call these “adaptive challenges,” and it’s important that leaders know how to balance these tensions in the right way.  

In this episode of the Built Right podcast, we speak with Ebenezer Ikonne, AVP Product & Engineering at Cox Automotive, and author of the book ‘Becoming a Leader in Product Development.’ Ebenezer explores the six most important adaptive leader behaviors to adopt and why. These are the behaviors that you as a leader can adopt to encourage your team to work with you and to reassure them. 

Read on to learn the six adaptive leader behaviors or tune in to the full discussion below.  

1. Get on the balcony 

When you’re right in the middle of things, it’s hard for leaders to get that outside perspective. But sometimes, it’s important to step back and “get on the balcony” so you can reflect on what’s happening around you. This can help you see the challenges faced by your team more clearly and objectively, as well as any possible solutions.  

It’s also important that leaders don’t get on that metaphorical balcony alone. Bring people that you trust who can offer a different perspective. 

 

2. Identify adaptive challenges 

With change comes challenges. If you’re changing something that people need to adapt their daily habits and processes for, there’s an adjustment period to be aware of.  

As a leader, you need to be able to make the distinction between a challenge with a straightforward answer versus a set of challenges that are either paradoxes or require huge paradigm shifts in the way you do things.  

There’s no easy answer to the best way to handle adaptive challenges, but leaders should continually engage with them to manage them better.  

 

3. Regulate distress 

One of the adaptive challenges you might face as a leader is team distress. Some people on your team might be ready and eager to make changes, others not so much. However, when it comes to changing day-to-day processes and responsibilities, there can be an element of stress for everyone. 

However, not all stress is bad stress. When you’re lifting weights at the gym, your body and mind are under stress. It’s hard work. But we know it’s also a good thing to exercise anyway. 

It’s the same with making changes to your development team. Those changes may be positive, but leaders should be aware of the anxiety and stress that can come with them. Try to recognize that everyone adapts differently and may require different levels of support.  

 

4. Maintain disciplined attention 

It can be tough to keep focus when we have to do things differently at work – even more so when we’re the ones responsible for managing those changes.   

Some in your team may be more resistant to change and may be hesitant to adopt new processes. They may lose focus, and this can cause friction when adopting new team changes.  

We know that people look to leaders in an organization and try to emulate what they’re doing, so leaders need to lead by example and direct everyone on what needs to be done. Try to maintain disciplined attention and keep focused on the end goals if you want to lead your team to success. 

 

5. Give the work back to the people

A common bad habit of leaders is trying to do everything themselves. You may think you’re doing your team a favor, but it’s more likely to just cause problems. If your team is left in the dark with little control over the situation, this can cause tension and anxiety.  

The best way to make changes as a leader is to involve your team in the why, how, and when of making changes. Your team is ultimately responsible for making and adapting to these changes, so give the work back to the people. Don’t try to shut them out. 

 

6. Protect leadership voices from below 

Everyone has a voice and an opinion in a company. Not all of those opinions are popular, but that doesn’t mean they’re wrong or not valuable.  

When making changes, some of your team may be onboard immediately. Others might take time, and some might come up with dozens of reasons why it won’t work. 

It’s important not to shut those people out of the conversation though. Listen to their concerns and most of the time, you’ll find there’s important wisdom in their opinions. Cognitive diversity is important in a business. Everyone thinks in a different way, so make sure you’re not overlooking those differing opinions.  

 

There are no easy answers 

Managing change can be a tough challenge to overcome for leaders and their teams. There’s no one-size-fits-all solution to dealing with adaptive challenges. But if leaders can adopt the above behaviors, it can reassure your team and help them adjust so that any changes you make are in line with your goals. 

For more insight into the six adaptive leader behaviors, check out the full podcast with Ebenezer today. 

[00:00:13] Matt Paige: Welcome, Built Right listeners. Today I'm joined by Eb Ikonne, AVP of Product and Engineering at Cox Automotive.

[00:00:28] Matt Paige: And he is also an accomplished author of the book, Becoming a Leader in Product Development. He's got 20 years of experience in product development and is, uh, passionate about fostering joy at work; he even has a YouTube channel by the same name. But welcome, Eb, to the show.

[00:00:43] Ebenezer Ikonne: Thanks for having me, Matt. Glad to be here.

[00:00:47] Matt Paige: Yeah, and a local Atlanta guy over at Cox, uh, where HatchWorks is located as well. Uh, but today we're gonna deep dive into what you've coined the six adaptive leader behaviors, um, from your book, Becoming a Leader in Product Development. So we're gonna go through some of those. And, like, just checking out the book, a big thing you kind of hit on: it's becoming increasingly challenging for product development leaders to effectively lead.

[00:01:14] Matt Paige: As workplace demands continue to increase, the rate of change, as we all know, is getting crazy. Whether it's technology, society, business, there's a lot of pressure on leaders to ensure their groups are moving in a direction towards a common goal. And this is what some of this stuff's gonna start to hit on: like, how do you become an effective leader?

[00:01:32] Matt Paige: But to kick us off, uh, let's go into the first adaptive leader behavior. Can you give us an overview there?

[00:01:40] Ebenezer Ikonne: Yeah, sure. So maybe I should set some context for this as well, before we get to the, uh, behaviors that we expect leaders to put in place to create a situation where adaptation happens.

[00:01:55] Ebenezer Ikonne: And I should give some credit here: there are several individuals whose research I worked from, or read, to really come up with this. But one of them was, uh, Ron Heifetz, who has written a number of books on adaptive leadership. And his thoughts and perspectives were quite insightful, and to a large degree these behaviors are

[00:02:18] Ebenezer Ikonne: lifted from his work, but applied specifically in a product, um, development sort of context. Mm. And like you rightly said, the challenge today is we have a lot of situations that face us, and we go about trying to solve them sort of the wrong way. And so the basis for a lot of this adaptive approach is to really recognize that in the workplace we'll have a bunch of challenges.

[00:02:44] Ebenezer Ikonne: Some challenges are really straightforward, but some challenges are not straightforward. They might be wicked problems, they might be paradoxes. There might be situations where we have to balance tensions, right? We want to deliver quickly, mm-hmm, but we want to deliver high quality, right? So, these kinds of challenges or problems just require a different approach.

[00:03:05] Ebenezer Ikonne: But it so happens that, uh, for the most part, and I kind of blame, if you will, our academic institutions for this, we are raised as problem solvers, and so we generally approach every challenge thinking like there is a specific solution and answer to this question, and everything becomes a math problem.

[00:03:28] Ebenezer Ikonne: But there are just challenges that don't lend themselves to that kind of approach. So, yeah, it's really first identifying that not every situation is a math problem that can be solved. There's problems that we need to dance with, and those problems that we need to dance with, you know, are referred to as adaptive challenges.

[00:03:48] Ebenezer Ikonne: There are other names for them, but in this context they're referred to as adaptive challenges. And then, as leaders, our teams need to dance with these problems as we do, and so these are sort of behaviors you as a leader can adopt and encourage that can help your team dance, if you will, with the challenges that confront them.

[00:04:07] Ebenezer Ikonne: So I hope that's good context before we get into the behaviors. I don't know if there's anything you wanna poke at there.

[00:04:13] Matt Paige: Yeah, no, that's perfect context. I love that you coined it like dancing with the problems; there's some rhythm to it, right? And I think, like, I love how you said too, there's never like one right answer, right?

[00:04:25] Matt Paige: Like, the core, like, I'm a strategy nerd. And that's one thing with strategy: there's no perfect strategy. There's no one right strategy. There's multiple strategies you can take, and it's really all about increasing your odds of success with an effective strategy. Right. Yeah, no, that makes a lot of sense.

[00:04:43] Ebenezer Ikonne: Right. So let's jump, uh, into the, uh, six behaviors that we are looking at here. So the first one is referred to as, like, getting on the balcony. And that's really just attaining some distance. Um, if you've ever been to a program or a show and you're kind of in the middle of things, you might notice that

[00:05:07] Ebenezer Ikonne: mm-hmm, you don't have a great perspective on what's going on, right? You don't necessarily see the forest for the trees. And so getting on the balcony is really creating the space to almost step back and reflect on what's going on and happening around you, and really not always making decisions in the heat of the battle, if I can use that metaphor, but just attaining some distance and trying to gain some perspective.

[00:05:33] Ebenezer Ikonne: And I would add that I think it's also important for leaders to not, you know, get on the balcony by themselves all the time. Like, find people that you trust, people that bring maybe a different perspective, and have them join you on the balcony, the metaphorical balcony, so that they can kind of give you a perspective of what they're seeing and what they're observing.

[00:05:57] Ebenezer Ikonne: As we all know, we all have our biases, we all have things that stand out to us that might not stand out to somebody else, just because of our experiences, our history in life. So the more we can get diverse perspectives on what's going on, the better opportunities we have for really identifying what's in front of us and what we need to go and address.

[00:06:24] Ebenezer Ikonne: Yeah.

[00:06:24] Matt Paige: So, Eb, let me ask you this. So you are at Cox Automotive, and for those that are not familiar with Cox, there's Cox Automotive, Cox Communications, like, there's so many different, uh, brands and things within that. So, you know, when you're in a large company like that, is there like a balcony, using the metaphor, within kind of your specific group?

[00:06:45] Matt Paige: And then there’s another balcony that’s an even higher level kind of strategic one. ’cause all the different groups within Cox, I’m sure kind of play together and there’s uh, you know, things that you have to consider throughout that.

[00:06:57] Ebenezer Ikonne: Yeah. Yeah, so everybody, everyone as a leader is responsible for scope and you need to get on the balcony, you know, that overlays the scope you’re responsible for.

[00:07:09] Ebenezer Ikonne: So my scope is not the scope of Cox Enterprises, so I'm not supposed to get on the balcony and reflect on, uh, Cox Enterprises as a whole. But I have a scope within the product development group within Cox Automotive, and my job is to, on occasion, with people, take a step back and reflect on what's going on.

[00:07:31] Ebenezer Ikonne: So leaders at every level in the org, for whatever scope they're assigned to, mm-hmm, have to, like, get on the balcony and identify the challenges that, you know, they have to address and lead through for their immediate context. And everybody's supposed to be doing this at every level within the org, for sure.

[00:07:52] Matt Paige: Yeah, that's great. Context is the, the forest through the trees, which is so difficult. I've struggled with that in the past too. You know, trying to get out of the tactical day-to-day and get to the, why am I doing this, right?

[00:08:03] Ebenezer Ikonne: Yeah. Yeah. And I think that, I was just gonna say, I think that that’s why

[00:08:08] Ebenezer Ikonne: It's important that we get multiple perspectives. You know, sometimes we think as leaders that I gotta do it alone, I gotta figure it out by myself, I need to see the entirety of the forest for the trees when it comes to something regarding my org. And this is where I think the idea of distributed leadership is really important.

[00:08:28] Ebenezer Ikonne: And this view is that leadership is not what a single individual does; leadership is actually produced through the interactions of the people on a team. And yes, I might be the designated leader and have certain expectations, uh, expectations that people have of me, for sure. But when it comes to leadership as a whole, you know, everyone needs to participate in leadership.

[00:08:53] Ebenezer Ikonne: And again, getting on the balcony, we should have more people really together: let's talk about what we're seeing, let's talk about what we're perceiving as a group, and then that helps us understand how we should go forward. This is, you know, offsites and things like that.

[00:09:13] Ebenezer Ikonne: Yeah. You can kind of think about those things as, like, tangible examples of when people are trying to get on the balcony, right? Yeah. One of the challenges sometimes with things like offsites is that people get distracted with pressing emails in the moment, and so they don't, mm-hmm, give themselves the time to be on the balcony to do that reflection.

[00:09:32] Ebenezer Ikonne: But when you change your space, when you step out of your normal routine, that's, you know, when you get off the dance floor. It's kind of a way to think about it: when you're no longer dancing and you get on the balcony to watch what's going on. That's what it's like.

[00:09:47] Matt Paige: Yeah. I wish, I wish answering email was as fun as dancing, but.

[00:09:52] Matt Paige: No, that’s great. I love, I love that as a starting point there. So let’s, let’s jump to number two.

[00:09:57] Ebenezer Ikonne: Yeah. Identify adaptive challenges. And I talked a little bit about that in the intro, and it's really making the distinction between what is a challenge that has a straightforward, easy answer. Maybe there's an established pattern and recipe for that,

[00:10:13] Ebenezer Ikonne: for that, uh, that challenge. And really all you need to do is adopt it, or maybe go hire an expert who can show you and teach you how to do those things. Versus a set of challenges that are really, um, either paradoxes or, uh, tensions within the org, or require, like, just wholesale paradigm shifts, um, in the way you do things.

[00:10:38] Ebenezer Ikonne: Maybe you had a sales strategy that was one way, but now you want to adopt a new sales strategy, and that's a big change for everybody in the org. And adaptive challenges are tricky sometimes because they often require that we make changes to the way we think about the world, to the way we kind of act in the world.

[00:11:01] Ebenezer Ikonne: There isn't necessarily expertise that we can just go buy and apply and just have it work for us. We really need to come up with our own, um, our own solutions in that situation, and leaders in that context need to help people understand, like, there are no easy answers here. There are no easy answers here.

[00:11:24] Ebenezer Ikonne: We all want easy answers, we're all dying for easy answers, but there are no easy answers in this situation. We're gonna have to kind of dance our way through, and maybe not solve the problem, but be in a continual, like, engagement with that problem and manage it better.

[00:11:41] Matt Paige: Yeah. It’s, it’s that whole idea of shifting your, your mental model and our human, our own human nature and human behavior, like acts against that so many times.

[00:11:51] Matt Paige: But you hit on the topic of paradoxes. I love a good paradox. Is there, is there a paradox out there that you're like, oh, that's the one that I find the most, uh, compelling or interesting, that you've encountered?

[00:12:04] Ebenezer Ikonne: Wow. I mean, we deal with paradoxes, you know, every day. Like an obvious one if you’re in product development is finding that balance between building the right thing, if you will, like in, in the right way.

[00:12:17] Ebenezer Ikonne: Mm-hmm. And yeah, we want to get something out as soon as possible, but we also want it to, you know, stand up to scrutiny and be solid and have the quality that it needs to have in it. So that's just a very obvious one, I think, we deal with in product development. Even if you get more tactical, there's always this tension between how much do we need to know before we start versus starting and learning as we go.

[00:12:44] Ebenezer Ikonne: Right. And you know, if you just start without any kind of trying to understand what you're going after, you waste a lot of time. But then if you spend all this time trying to fully understand what you're going after, you also waste a lot of time, because you can't do either.

[00:13:03] Ebenezer Ikonne: They show up more strategically and more organizationally as well.

[00:13:09] Matt Paige: Yeah, that's a good one. I like the nod to Built Right there. Build the right thing. That's right. The right way. Yeah. Uh, all right, number three. What's number three?

[00:13:18] Ebenezer Ikonne: So number three is regulate distress. And this is an interesting one because, you know, we often view stress as being completely negative, but there's good and healthy stress.

[00:13:28] Ebenezer Ikonne: I, I use a sporting metaphor here. If you are exercising or playing your favorite sport or doing something like that, you know, there's a stress that comes at a certain level when you're doing it. Uh, but because you're enjoying it, you know, we don't recognize it as stress. We generally recognize stress, negative stress, as being stressful.

[00:13:53] Ebenezer Ikonne: And the point here is that when you take on adaptive challenges that require changes to mental models, maybe require changes to belief systems, require people to learn new ways of doing their job, maybe having to put away, uh, tried-and-true practices that people have developed over the years, that can lead to a lot of anxiety, can lead to a lot of nervousness, and then distress, in turn.

[00:14:19] Ebenezer Ikonne: So as a leader, you want to make sure that you foster an environment where people... it's not like people won't get stressed, because what you're doing is a change, so people will be stressed. But you want to regulate that and pay attention to that and be sensitive to that, and just recognize that everybody is

[00:14:41] Ebenezer Ikonne: going through this change, probably at a different pace. Like, one of the things I tell leaders that I talk to is: remember that whenever you are leading a group through the kind of change we're talking about right now, it's quite likely that you've had days, weeks, and months to process it emotionally. So they're behind you.

[00:15:02] Ebenezer Ikonne: And so it always surprises me that leaders expect, like, the people on their team to be right there with them emotionally, and forget that they've had time to, like, deal with the emotions. They need to allow for people to sort of catch up emotionally, if you will, as they try to embrace the new change they're about to embark on.

[00:15:26] Matt Paige: Yeah, I wanna go deeper there, 'cause that's an interesting point. I've encountered this too, where, to your point, you've been thinking about it for months, you know, talking to other leaders and working through the change, and you have this great plan, and then you roll it out to the team and they're seeing it for the first time.

[00:15:41] Matt Paige: So they could be blindsided, they could be caught off guard, they could be excited for the change. How do you help foster that in a, uh, positive way? 'Cause really, at the end of the day, it's your people that are gonna execute the change. And if they're not clear and on board, it's probably not gonna be successful.

[00:16:00] Matt Paige: Any thoughts on how to make that successful pass from, like, idea and strategy to execution?

[00:16:08] Ebenezer Ikonne: I really love the point that you made there, that change only succeeds because the people who have to do the change actually do the change. Mm-hmm. So my first thought there is, uh, we should always be preparing the conditions for change. And what does that mean?

[00:16:24] Ebenezer Ikonne: Like, mm-hmm, if people trust you, they know you generally have their best interests at heart, and you follow through on what you say you're going to do, then you are creating an environment in general that is conducive to change. And I think what happens is, we often make change exercises very eventful in nature, and we just don't have an environment that's, like, conducive to change.

[00:16:51] Ebenezer Ikonne: We haven't created organizations where people trust, where they believe that what the leadership's doing is generally in their best interest, and where, when leaders make mistakes, they actually say, hey, I made a mistake. And so people are, for the most part, like, suspicious, like, you know, do they really have our best interests at heart?

[00:17:13] Ebenezer Ikonne: You know, it's like people walk around thinking about that. And so when a change now shows up, people are already starting from a place of, you know, distrust, as you said, or suspiciousness, and wondering what's going on here. So I think as a leader, you need to, we need to ask ourselves: what kind of environment am I, like,

[00:17:32] Ebenezer Ikonne: just fostering every single day? Mm-hmm. And then, when it comes to the moment where I'm introducing a change, to recognize that people are gonna respond to this differently, like you said, and I need to give time for people to kind of go through the emotions. And that will differ depending on what's going on.

[00:17:51] Ebenezer Ikonne: Like, look, if it's a change that needs to happen in the next 24 hours, because if we don't make the change we're gonna go out of business, then, you know, people are gonna have to get on board quickly. But if it's a change that's gonna go on for a while, and we really need people to be engaged, then we need to give people an opportunity to express themselves, while remaining firm that we're going to go through this change.

[00:18:16] Ebenezer Ikonne: But also respecting that each individual's probably going to go on the change at a different pace. And also understanding, like, the change might not be for everybody as well. Like, that's maybe swinging the pendulum too far sometimes, where we feel we need to get everybody on board. When

[00:18:33] Ebenezer Ikonne: you've done your work to create an environment that's generally trusting and all those other things, and it's time for a change, and you've given people the opportunity to express how they feel and kind of work through their emotions, then, if there are people for whom that change... you know, I'm big on joy at work.

[00:18:50] Ebenezer Ikonne: If there are people for whom that change basically erodes all the joy at work, then you probably, even for them, want them to find something else to do too. So, yeah.

[00:19:02] Matt Paige: No, that's a great point. I don't know if this is a great reference, but it's like the main job of a leader: pointing the bus in the right direction, getting the right people on the bus, and then getting 'em in the right seats.

[00:19:14] Matt Paige: That's such an important piece, especially when you're changing course and doing something different. The other thing, too, it was interesting, you talk about good stress. Just as a side note, I feel like you need to coin a term for good stress, because stress has such a negative connotation.

[00:19:32] Matt Paige: There's gotta be a better word, 'cause like you mentioned with the sports example, it's the same thing with working out. It's painful in the moment, but it has benefits over time. So we need a new word for it. That's maybe a takeaway.

[00:19:46] Ebenezer Ikonne: Well, there's actually a word for it. I think I might have mentioned it in the book.

[00:19:50] Ebenezer Ikonne: I think it's eustress, spelled E-U-S-T-R-E-S-S, generally considered to be beneficial stress. But you are right on point that, the fact of the matter is, some words are ultimately defined by how they're used in society.

[00:20:07] Ebenezer Ikonne: Yeah. Stress is a word that's primarily defined as a negative right now. So while there is a positive version of it, I think you and I would both be challenged to get people to adopt it. People find it hard to think positively about stress, but there is positive stress, and like you said, working out and enjoying the workout, it's difficult.

[00:20:31] Ebenezer Ikonne: It's stressful, but it's a beneficial stress for sure.

[00:20:36] Matt Paige: All right. Let's go to number four.

[00:20:38] Ebenezer Ikonne: Maintain disciplined attention. It's probably straightforward: when we're asking ourselves to do things that are different, that are difficult, that challenge our way of being, it can be hard to keep focus on those things.

[00:20:56] Ebenezer Ikonne: I don't know if you've been part of initiatives where you're trying to adopt something new and it just seems like people are not doing it, and they're coming up with a whole bunch of reasons why they can't do it. That's exactly what this behavior is about. As a leader, you need to keep pointing people to the work that we have to do.

[00:21:18] Ebenezer Ikonne: And I think it's Edgar Schein who says one of the ways you transmit culture is by signaling what matters most to you as a leader. There are evolutionary explanations for this, and other kinds of explanations too, but we know that people look to the leaders in the organization, to a large degree, and then emulate what they're doing.

[00:21:43] Ebenezer Ikonne: You may wonder why this is the case, but it is the case. And so if something's come down from on high that we're gonna do X, but I observe that my leader's not paying attention to it, I probably won't pay attention to it either, unless I'm very passionate about that thing, right? That's the exception.

[00:22:03] Ebenezer Ikonne: But when I'm not passionate about the thing, or the thing's going to introduce some negative stress into my life, I'm probably not going to be as attentive to what I need to do. And so a key point here is, when we're dancing with these adaptive situations, we need to really keep people focused on the task in front of them, the adaptive work.

[00:22:29] Ebenezer Ikonne: If we need to, we set up the right incentives. We talk about it all the time. We let them know, help them see, that this matters to us and that we want them to stay focused on it.

[00:22:41] Matt Paige: Yeah, and I draw a comparison in marketing, and really in leadership like you're talking about.

[00:22:46] Matt Paige: It's that idea of repetition. And it's tough, 'cause on the marketing side, we may talk about something over and over again, and right about the point where we're completely dead tired of talking about it, that's when it's just starting to catch on. It's the same thing in leadership too.

[00:23:02] Matt Paige: And I think back to previous experiences where leaders would keep harping on the same thing over and over again. But that was intentional, right? Because, to your point, you're signaling where the priority is, where the focus is. And it's so critical, 'cause you're trying to get a large group of people focused in the same direction.

[00:23:23] Matt Paige: Yeah.

[00:23:23] Ebenezer Ikonne: Like they say, repetition deepens the impression. And so that's just important. You need to keep talking about it, keep highlighting it, keep tying it back to why it matters. So you repeat it, but you also tie it back: this is why this is so important at this time, and if we don't do what we're trying to do here, this is how it potentially impacts all of us negatively.

[00:23:49] Ebenezer Ikonne: So we need to really take this on. And I just want to say, sometimes in orgs we don't want to tell people the truth, right? We take this kind of parent-child relationship. As a parent, there are things you don't think your children are mature enough to handle, even important things, and you keep that information away from them. I can totally relate to that.

[00:24:15] Ebenezer Ikonne: As a parent of three kids, there's just information I wouldn't share with my kids at this particular point. But in the workplace, we're not dealing with children. These are mature men and women, many making big-time decisions every single day. And so if their actions can impact the org negatively or positively, I think it's important.

[00:24:38] Ebenezer Ikonne: We need healthy transparency, but I think too many orgs err on the side of trying to protect their people by not giving them important information they need to know.

[00:24:51] Matt Paige: No, that one hit me to my core. That's a very good point. I like that. Alright, number five.

[00:24:58] Ebenezer Ikonne: Give the work back to the people.

[00:24:59] Ebenezer Ikonne: Hopefully this is a bit self-evident. This is really saying the adaptive work needs to be done by the people. You hit on this, I think, when you made the comment that the change ultimately needs to be done by the people. And so giving the adaptive work back to the people is making sure, as a leader in this context, that you don't step in and say, 'Hey, I've got to make all these changes by myself.' In a sense, that's not really practical. But I don't know if you've

[00:25:32] Ebenezer Ikonne: worked for a micromanaging product person in the past who wants to do everything by themselves. That's just not a scalable approach. People need to have a sense of ownership. This is part of letting people be citizens, in a way, right? Having people be active in shaping the outcomes that we all desire.

[00:25:58] Ebenezer Ikonne: And so, giving the work back to the people. Another important nuance here is that leaders often think it's their job to come up with the solution and then have other people implement it. And there are certain challenges where that might be the case, where it's a challenge with a clear technical solution and the leader might be the expert, and all they need to

[00:26:26] Ebenezer Ikonne: do is tell people what to do. But with these adaptive challenges, in many cases there isn't a defined solution, and we need to figure it out together. And so you want your teams to really own figuring it out together, really taking ownership of the work and not becoming order takers or just following instructions on an instruction sheet.

[00:26:51] Ebenezer Ikonne: Yeah. Any examples

[00:26:53] Matt Paige: in your past experience, whether that’s Cox or other places where either that you’ve seen that done effectively or maybe not, not so effectively kind of giving the work to the people and where it either worked or didn’t? Yeah,

[00:27:05] Ebenezer Ikonne: Yeah, I think product development is a great example of this, because the whole thing

[00:27:11] Ebenezer Ikonne: about product development is that you're trying to provide a product of some sort that helps somebody do something they want to do, right? So every product in the world is really an enabler. It helps somebody meet a need of theirs. Ultimately, the product in and of itself is not what they desire.

[00:27:31] Ebenezer Ikonne: Ultimately, what they desire is something else, and that product, to use the jobs-to-be-done framing here, enables them to do the job they want to do. But in understanding what the right product is, there are really two big schools out there. There's a school where you have some very senior product people and they make all the decisions on what the product's going to be, and then they

[00:28:00] Ebenezer Ikonne: have a team that just implements all of their decisions. That's an example of not giving the work back to the people, because a lot of product development is adaptive work. We don't know exactly what's needed; we need to talk to people and learn and iterate over what's done. So the great examples I've seen are leaders who connect their teams with people that have problems, give them all the support, all the resources, all the funding, and say, 'You own developing and delivering a solution for this customer and this situation.'

[00:28:35] Ebenezer Ikonne: 'This is your adaptive challenge to own, team. I'm here to provide support, provide guidance, provide resources, maybe contribute where it makes sense, but this is yours to own.' Versus: 'I'm going to decide, maybe in an ivory tower, what the solution is, and you're gonna implement it.'

[00:28:56] Matt Paige: So many parallels in the product development space there.

[00:28:59] Matt Paige: Alright, let’s finish it up, number six.

[00:29:02] Ebenezer Ikonne: Number six is protect leadership voices from below. All this is getting at is that there are always those individuals in our groups who have dissenting opinions, or who say things that might rub people the wrong way. Not that the thing they're saying is itself bad, and this is not condoning people simply not being nice.

[00:29:28] Ebenezer Ikonne: This is really about people who have unpopular opinions: making it safe for them to voice them, and giving them airtime as well. We talk a lot about diversity in general, and a big type of diversity is cognitive diversity. You have people who, whenever a new idea comes up, the first thing they see is every reason why you can't implement that idea.

[00:29:58] Ebenezer Ikonne: Right? And then you have people where the idea comes up and they see every reason why the idea will work. If you're in an environment that has a bias towards 'let's get things done,' the people who press on the brakes and say 'slow down' become unpopular. But as a leader, you want to ensure that you allow these people's perspectives to come out.

[00:30:24] Ebenezer Ikonne: Because in my personal experience, having worked with people on both sides, everyone's got something to bring to the table that should be considered, and you just don't want to overlook it. Even if it's unpopular, there's often some wisdom in what they're sharing that you want to take into consideration.

[00:30:43] Ebenezer Ikonne: So that's an important aspect of keeping a healthy environment where people are taking on challenges.

[00:30:52] Matt Paige: Yeah. Something we've adopted at HatchWorks over the past couple of years is that the whole team does what's called a DISC assessment. If you're familiar with Myers-Briggs, it's kind of a personality test.

[00:31:03] Matt Paige: But what's been beneficial there is, I know that Trent on the team is this type of personality versus Kathleen's this type of personality. And it just gives better enablers for that type of conversation, 'cause somebody may be more reserved and need to think about something more deeply before voicing an opinion, versus somebody else may be like, 'All right, I've got perspective right away.'

[00:31:28] Matt Paige: And I think, too, you mentioned in a previous chat that we're both self-proclaimed introverts, right? So it's giving those folks, the voices that may be quieter on the team, the freedom to speak and voice their opinion. And I think that's been an interesting evolution for us at HatchWorks over time.

[00:31:49] Matt Paige: Yeah,

[00:31:49] Ebenezer Ikonne: I think one technique that I’ve really come to appreciate over the years that. That’s, uh, in the same vein of understanding people’s thinking styles, whether it’s DISC or H B D I or, or some of the other things that are out there today. Mm-hmm. Um, is the, Bono’s six thinking hats. And I don’t know if you’ve heard about it before, but I haven’t.

[00:32:12] Ebenezer Ikonne: Yeah, it's de Bono. He was a great creative thinker who wrote a lot on how groups can work better and how innovation can occur. He has this idea of six thinking hats, and I'll just go over them quickly, 'cause they're interesting. What the six thinking hats do, to your point on making it safe for people to raise things, is make it less about the person; they really challenge the group to wear particular hats as they go through a problem.

[00:32:44] Ebenezer Ikonne: That way, you get an opportunity to hear all the various perspectives. So the six hats are white, yellow, black, red, green, and blue, if I'm not mistaken. The white hat, if I remember right, is really just 'state what the facts are.' The yellow hat is all about the positive,

[00:33:08] Ebenezer Ikonne: you know, the go-getters, the people who are all about 'yeah, we can make this happen, and here are the reasons why.' There's another hat, the black one, that's all about the risks. The red hat's all about emotion: how do you feel about this? The green hat focuses largely on creativity. And then the blue hat

[00:33:30] Ebenezer Ikonne: is sort of the person who's making sure that if we're all wearing our white hats right now and stating the facts, that's what we're actually doing; no one's wearing a green hat when we should be wearing a white hat. So that's another way, if you think about protecting voices from below, of asking everybody to move from maybe their preferred or default thinking approach to

[00:33:57] Ebenezer Ikonne: different thinking approaches while talking about a problem or doing anything.

[00:34:04] Matt Paige: Okay, when you brought up the colors, that rang a bell, 'cause I do remember, I forget when it was, sometime at AT&T in a past life, we were in a room and we moved by the colors. It was some kind of exercise

[00:34:16] Matt Paige: that I think was tied to this methodology, so that's triggering some memories now. Yeah, I like that one.

[00:34:21] Ebenezer Ikonne: I think it should be used more. I'm often in meetings where I think we'd benefit from using the thinking hats, because asking people to actually adopt a different stance

[00:34:37] Ebenezer Ikonne: is actually very healthy for us. If I'm always thinking about the risks associated with things, and that's sort of my default, my go-to, that's natural for me. But maybe what I need to grow is the way I think about opportunity, and putting on that hat can help me develop that part of my thinking as well.

[00:35:01] Matt Paige: That's perfect.

[00:35:01] Matt Paige: I think that's a great place to wrap it up, Eb. So where can people find you, find your stuff? Where's the best place for people to go?

[00:35:11] Ebenezer Ikonne: Yeah, so I'm on LinkedIn, Ebenezer Ikonne; you can find me on there. I'm on Twitter as well, even though... is it X these days?

[00:35:21] Matt Paige: X now, yeah.

[00:35:22] Matt Paige: They've still got some mixed branding too.

[00:35:25] Ebenezer Ikonne: Yeah, the artist formerly known as Twitter is probably what it is. So I'm still on X, as you mentioned. I also have a YouTube channel called Joy at Work, where I talk about experiencing and creating joy at work for yourself

[00:35:40] Ebenezer Ikonne: and with others. And then my book, Becoming a Leader in Product Development, is on Amazon and Apress, and you can get it from there as well.

[00:35:49] Matt Paige: The book's great. I encourage everybody to check out the Joy at Work channel too; nice little tidbits tied to your thinking and approach there.

[00:35:58] Matt Paige: Lots of good stuff there. I appreciate that. Thanks for being on the show.

[00:36:03] Ebenezer Ikonne: Thanks for having me, Matt. It's been fun.

 

From 79 Cent Tacos to Technological Milestones https://hatchworks.com/built-right/beta-testing-product-development/ Tue, 19 Sep 2023 06:00:48 +0000


Beta testing and usability testing are crucial elements of the product development process. We wanted to learn more about the beta testing space, so we welcomed Luke Freiler, CEO of Centercode, to the Built Right podcast to talk about his journey and his insights as a CEO and Founder.  

Luke takes us back 22 years to how he first got interested in the space and eventually became the CTO and then CEO of Centercode. We hear his thoughts on agile working, AI, building a framework into the testing stage, and what his company looks for during tests.  

Keep reading for the top takeaways from this episode or tune in below. 

The difference between QA and usability testing 

Luke looks at beta testing as going beyond the general QA that products go through. QA teams dig into the code and make sure everything is performing as expected to start with. On the UX side, it's more about the experience and how people use the product. Centercode's approach is to look at how users interact with a product over a period of time, rather than basic UX testing, which is largely observational and less of a realistic test.

As Luke explains, this real-world use surfaces quality issues that wouldn't immediately become apparent in a lab test.
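To make the contrast concrete, here is a minimal sketch of the kind of scripted QA check Luke describes, written in TypeScript against a hypothetical in-memory harness (none of these names are Centercode tooling). QA asks whether the product behaves as specified; beta testing instead watches real users in real environments over time.

    // Minimal sketch of a scripted QA check (hypothetical harness, not
    // Centercode tooling).
    import assert from "node:assert";

    // Fake in-memory "device" standing in for a real test harness.
    const device = {
      state: { "notification-status": "Notifications off" } as Record<string, string>,
      async open(_screen: string) {},
      async tap(button: string) {
        if (button === "enable-notifications") {
          this.state["notification-status"] = "Notifications on";
        }
      },
      async readText(field: string) {
        return this.state[field];
      },
    };

    // Deterministic steps with a pass/fail assertion: "does it behave as
    // specified?" A beta test instead asks "does it fit real-world use?"
    async function qaSmokeTest(): Promise<void> {
      await device.open("settings");
      await device.tap("enable-notifications");
      assert.strictEqual(await device.readText("notification-status"), "Notifications on");
      console.log("QA check passed");
    }

    qaSmokeTest().catch(console.error);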

Beta testing is about more than hunting for bugs. While that’s an important part of it, Luke is really looking for three things when testing: 

  • Issues – is the product doing what it’s supposed to? 
  • Ideas – what could be improved? 
  • Praise – what do people love about the product? 

Those three things together are paraphrased as “fix, improve, promote,” which is a key philosophy at Centercode. 
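As a rough illustration only (a sketch, not Centercode's actual data model), that triage could be expressed by tagging each piece of beta feedback with one of the three buckets and routing it to the matching follow-up:

    // Hypothetical sketch of "fix, improve, promote" triage.
    type FeedbackKind = "issue" | "idea" | "praise";

    interface FeedbackItem {
      kind: FeedbackKind;
      summary: string;
    }

    // Route each bucket to its follow-up action: fix issues, improve on
    // ideas, promote what testers praise.
    function nextStep(item: FeedbackItem): string {
      switch (item.kind) {
        case "issue":
          return `Fix: file a bug for "${item.summary}"`;
        case "idea":
          return `Improve: add "${item.summary}" to the roadmap backlog`;
        case "praise":
          return `Promote: share "${item.summary}" with marketing`;
      }
    }

    console.log(nextStep({ kind: "praise", summary: "setup took under a minute" }));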

The importance of having a framework 

Centercode got to the point where it had a central team running a more diverse set of beta tests than almost any company in the world. But one issue Luke noticed is that there's no framework for beta testing – everyone does it a little differently and recreates it from the ground up. This means the opportunity cost is huge.

So, Luke started to think beyond the technology itself and more about the process. Centercode already has a lot of information about products and usability testing, and this led them to design a framework and certifications to try to improve the beta testing space.  

Luke’s goal was to “get rid of the first thousand hours that every company was going to use to reinvent their own wheel.” 

Agile development 

Centercode was agile in the early days, and Luke loved that you could finish a feature and get it into people's hands quickly. Rather than doing a major release where half the features sit on a shelf while the other half get done, you could ship sooner and develop as you go.

The main thing was that people are paying for something and want persistent value, and the agile way of working became a no-brainer for Luke.  

However, it didn't go well at first. Enterprise customers didn't necessarily want releases and changes that often. So Centercode eventually walked away from agile releases on the customer front, but still practiced agile development internally.

Over time, though, Centercode built up its team and put better project management structures in place to do rollouts more efficiently. Now they're back to agile releases, and it's been working well for the last two years.

AI in user testing

There’s a lot of hype around AI in the product development world, and the beta testing arm of it is no different. After being impressed by the content that ChatGPT could produce, Luke said to his team, “AI is not going to replace you. Someone who uses AI will. So let’s be that person.” 

Centercode is now experimenting with different ways it can use AI both for products and internally. There’s currently a backlog of great ideas for ways to use AI. However, Luke is quick to mention that AI is still in its early days. We’ve yet to see a mature AI product, so we can’t say what AI tools will be like in the future. But they need to be reliable and predictable, most of all.  

However, it’s clear that AI is here to stay for the foreseeable future, and it’s important to experiment with it or get left behind. 

For more insights into Centercode and beta testing as a whole, check out the full episode of the Built Right podcast.  

[00:00:00] Matt Paige: Welcome, Built Right listeners. Today we're chatting with Luke Freiler, CEO of Centercode. Luke's journey is a bit unique in that he's been running Centercode for the past 22 years, starting as CTO before moving into the position of CEO. We're gonna get into his journey and hear the type of stories you only get from a founder: how he got to product-market fit, insights on pricing, go-to-market, and really everything in between.

[00:00:32] Matt Paige: But welcome to the show, Luke.

[00:00:33] Luke Freiler: Thank you, sir.

[00:00:35] Matt Paige: Alrighty. So instead of you telling us what Centercode does, Luke, I kind of want to take us back 22 years, back to when you first realized this problem existed. It's a really cool story we were chatting about in advance. I want to get into that and take us there as a way into what Centercode does.

[00:00:55] Luke Freiler: Absolutely. So I started my career in tech from a pretty young age. I started out at Samsung, then quickly moved with a big group of people over to Ericsson, which at the time was a hundred-year-old, hundred-thousand-person tech company. At the time, the internet was still pretty fresh and new, and I was young but excited and pretty invested in it.

[00:01:16] Luke Freiler: Specifically, the thing that really lit me up was the idea of usability, and what eventually became user experience. Back then, tech products were pretty much used by tech people, and anything that broke that mold was pretty novel. I, on the other hand, had low patience despite being a techie person, and kind of grew up supporting,

[00:01:37] Luke Freiler: you know, the school network and printers and whatnot, and always thought there should be a better way. So that's kind of where my head was. And what I was doing at Ericsson was running a web development team that handled any sort of web-oriented functionality for the product we were bringing to market.

[00:01:53] Luke Freiler: So firmware of the product itself, internal sites, public marketing sites, everything. I had a group under me who were all, again, sort of young web developers, and a product manager came to me and said, 'Hey, I need you to run a beta test for our product.' And I kind of lit up and said, 'Okay, well, what does that mean?'

[00:02:11] Luke Freiler: And he said, 'Well, a beta test, it's when you get a bunch of customers...' And I was like, 'No, no, I know what a beta test is. I don't know what that means at Ericsson, right? We have a process for everything. I have a four-inch guide on how to use the logo. There's gotta be a path for me here so I don't have to just create it.'

[00:02:29] Luke Freiler: And he was like, 'No, man. Best I can tell, there isn't. You're gonna have to just figure it out.' And I said, 'Okay, can I outsource it? Can I go find a vendor?' And he said, 'Sure.' And I said, 'Okay, I'll do that. Give me a budget.' So I started looking around, I started researching, and I quickly realized that it was a gap in the market.

[00:02:43] Luke Freiler: What I discovered was this rare necessary evil in business: something that everybody sort of universally agrees is important, is positive, but it's hard. It's challenging, this idea of 'I'm going to take an unfinished product, give it to a bunch of strangers, and expect meaningful feedback that I can transform into a successful product.'

[00:03:06] Luke Freiler: It's just a tall order. So at that point I saw the opportunity, and like I said, I was already borderline obsessed with the future of usability and user experience and technology being more useful, and it all just kind of came together. From there, after talking to many, many people, and asking everyone I knew and everyone within their networks how they did this, I realized that the challenge was that it was something that happened

[00:03:31] Luke Freiler: episodically: every once in a while somebody would run a beta test for a product. Back then everything was waterfall development, so it was a good couple of years between products. Product managers were heads down for a very long period of time; they would run a big beta test for eight or twelve weeks at the end of that, but then there'd be another two years before they did it again.

[00:03:49] Luke Freiler: So as a result, they kind of forgot everything they learned. And that led me down a lot of different paths that all turned out to be: yeah, there's an opportunity here. There's an amazing business opportunity that not only fits what I'm passionate about, which is incredible, but is also sort of universally positive.

[00:04:06] Luke Freiler: The idea of taking a company, helping introduce them to their customers, then sort of orchestrating the communication between them and helping them ultimately produce a better product for those customers, is a universal win for everyone involved. So I never expected to spend most of my career in any one place.

[00:04:25] Luke Freiler: That certainly wasn't something I set out to do. I also didn't ever expect to be a CEO. I just kind of fell into something I loved and invested heavily in it, with my own personal time. And I don't see myself going anywhere. I absolutely love it, and we get to work with many of the greatest brands ever

[00:04:41] Luke Freiler: to bring products to billions of people. So it's awesome.

[00:04:45] Matt Paige: Yeah, and it's a lesson in... we talk about the Built Right method at HatchWorks, and it's centered on building something that's valuable for the customer, viable for the business, and feasible, like the Marty Cagan school of thought.

[00:04:58] Matt Paige: And we'll see clients, a lot of the time, who come to us with an idea, but they're kind of missing what the problem behind it is. That's a key piece that often gets overlooked. But I wanna get into this usability testing you mentioned. A lot of people, when they think of testing, think of pure QA testing within a solution, which is different, though, from usability and beta testing.

[00:05:22] Matt Paige: For those that maybe don't know the difference, what's the difference between those, and what's the purpose of doing these usability tests?

[00:05:29] Luke Freiler: So traditional QA testing is pretty much contained to professionals doing it. There are sort of crowdsourced options nowadays, but ultimately you have people walking through test scripts and doing very explicit things to basically make sure the product is performing as it was expected to.

[00:05:48] Luke Freiler: On the other side of that, you have UX testing, where it's much more focused on the experience: do people understand it? Is it producing an experience that actually gives them the outcome they want? What we do is a little bit different, in the sense that it's all about real users, the people who actually have the problem the product is solving and who will buy the product eventually,

[00:06:09] Luke Freiler: in their real environments, and over a period of time. So whereas a UX test is often observational, we're watching somebody do something, taking notes, and taking action from that, and it's very qualitative in that sense, what we're doing is a little broader. It's more about, okay, how do these real people adopt the product over time?

[00:06:28] Luke Freiler: How do they use the product in their real environments? That naturally surfaces a lot of quality issues that you're not necessarily going to find in a lab. Many people do immediately think of beta as a bug hunt, and that's certainly a critical part of it, because the real world uncovers an enormous number of situations and scenarios that a lab environment wouldn't.

[00:06:49] Luke Freiler: But it goes far beyond that. From our perspective, we're always looking for three things. We're looking for issues in the product: what's it not doing that it's supposed to be doing, what's not behaving as the customer expects it to. We're looking for ideas: what could be improved in the product, from small improvements all the way to features that could end up in future versions, or things competitors are doing

[00:07:11] Luke Freiler: that might be an expectation they didn't see coming. And the last thing we're looking for is praise: what do people love about the product? What really stands out to them? What could be promoted to grow the audience? Those three things together, issues, ideas, and praise, we sort of paraphrase as fix, improve, promote. That's what we're going for.

[00:07:29] Luke Freiler: We're trying to look at this product in that real world and say, okay, how can we maximize the success of this product, and how can we use an audience of enthusiastic volunteers to make that happen?

[00:07:41] Matt Paige: Yeah, and you know, I wonder if this is another one, or if it fits into the idea/improve category, but we'll see a lot of times, when a product actually gets into the hands of end users,

[00:07:50] Matt Paige: they start using the tool in unintended ways, and the initial reaction's like, 'Oh man, that's the wrong way. You're doing it wrong.' But they may unearth whole new ideas for how to use the solution and get value from it, right?

[00:08:03] Luke Freiler: Oh yeah. That promote side, the praise, is where you find the things about your product that are just happy accidents.

[00:08:12] Luke Freiler: And that happens every time. Virtually every time, there are gonna be things about your product that are problematic that you didn't see coming; that's the most obvious thing. But to balance out the morale, you're gonna see just as many things that are new opportunities to leverage something you did in that product, something that emerged that you didn't even see coming.

[00:08:31] Luke Freiler: Mm-hmm.

[00:08:32] Matt Paige: Take us back to your first deal. This is such a cool story: your first deal, your first conversation, when you'd just gotten this idea of a product you're building. I think it's such a good lesson for folks in terms of, I wanna say pricing, but it's really all about value.

[00:08:48] Luke Freiler: I've spent decades unpacking this deal and trying to figure out what to do with it.

[00:08:54] Luke Freiler: But yeah, it is a great story. So when I started this company, I was an engineer at heart, and I brought together a few people from Ericsson with me. We all left and went and started this, and I was leading the tech side. As I said, I wasn't the CEO at the time, but I was the product guy.

[00:09:09] Luke Freiler: It was my vision, my idea. My first idea for this company was that this was gonna be a platform we'd sell online for a couple hundred dollars a month, and it was gonna be sort of SaaS before SaaS, self-acquisition before product-led growth was a thing. This is, again, going back 20-plus years.

[00:09:26] Luke Freiler: But that's how I thought. I wasn't a sales guy. I didn't want to be a sales guy on the phone. I just wanted to sell online and be frictionless so I could go focus on building something cool. And one of the first calls we got was from Sun Microsystems, who at the time were an absolutely enormous entity.

[00:09:42] Luke Freiler: They made the Solaris operating system, which was kind of the heart of business. Big deal, eventually acquired by Oracle. At that point in time, they were one of the biggest names in tech, and they called us wanting to inquire about the product. And I was in shock, like, I don't know how to handle this.

[00:09:59] Luke Freiler: I basically was a kid, and I brought someone on the call who I had worked with at Samsung initially, a guy named Scott. Scott had spent 30 years at IBM, growing into very, very senior positions running B2B sales there. And I basically asked him to come be the adult on the call.

[00:10:17] Luke Freiler: I would be there to answer any of the product questions, but as far as orchestrating the call, I really needed his help. And he did that as a favor to me; he was great. Ahead of this, we hadn't talked about what the product would cost. For me, it was gonna be a couple hundred bucks a month.

[00:10:33] Luke Freiler: It was gonna be self-serve, credit card, all that kind of stuff. That wasn't his world, but we didn't really talk about it, because we didn't think the price would come up on the first call. My understanding of those large B2B engagements was that they had numerous fact-finding calls and so on, and it was gonna take a while to even get there.

[00:10:48] Luke Freiler: So we get on the call and we show them the product, which wasn't done. It was far from done, in fact; we were bootstrapping this company and kind of working it out on our own time. And I started trying to throw every objection possible at these guys. I was saying, 'Hey guys, it's not done yet.'

[00:11:04] Luke Freiler: And their response was, 'Well, that's great. We can help shape the roadmap.' I felt like they must have thought playing hard to get was my sales strategy, when in reality I was just scared to death that they were gonna overwhelm us, and we were gonna disappoint them, and it was gonna be crushing.

[00:11:19] Luke Freiler: So anyway, we show them the product, and they love it. We get to the point where they eventually say, flat out, 'What does it cost?' And I'm sitting there, deer in headlights. I don't know what to say. Scott and I hadn't talked about this, and Scott just immediately blurts out, 'Well, it's $300,000, and $60,000 a year in perpetuity for support.'

[00:11:38] Luke Freiler: And they flat out said, 'That sounds reasonable. We can do that.' We get off the phone and I'm blown away. From my perspective, I'd just found my funding. And Scott's like, 'Damn, I should have said 500.' And I'm sitting here going, 'Yeah, man, I was about to say $200.'

[00:11:54] Luke Freiler: So, yeah, thank you, I guess. And we sold it to them, and they stayed a customer for a very, very long time. It was a great experience, a great relationship. And they did in fact, without knowing it, basically fund the company for that year. On top of that, it even went further: when we said to them, 'Hey, we're not on your platform,'

[00:12:14] Luke Freiler: they were like, 'Well, great, we'll just send you a whole bunch of hardware.' And they funded all of our infrastructure as test equipment, tens of thousands of dollars, which at the time was amazing for us. We were living on 79-cent tacos. So it was an incredible first experience, and it did reshape the business in many ways moving forward.

[00:12:34] Luke Freiler: Yeah, it’s such a good story

[00:12:35] Matt Paige: just on the, the concept of perceived value. Right. And, and that’s the key thing is kind of getting to, um, A sense, like how do you even get to a sense of what that is without having this serendipitous moment like you did, where you had the, uh, the guy in

[00:12:52] Matt Paige: the room who did this?

[00:12:54] Luke Freiler: It's fascinating. I like that story because it's pretty wild, and it's totally true; I was there. But it's actually not all good. I spoke at a conference about software pricing once, and the guy who spoke next to me was a guy named Rick Nucci, who was amazing.

[00:13:12] Luke Freiler: He's the founder of Boomi, which sold to Dell for a bunch of money. Very successful guy; he runs a company called Guru now, I believe. And I remember him standing up before I spoke and saying something along the lines of, 'We've spent as much time on our pricing as we have on our product.'

[00:13:26] Luke Freiler: I literally laughed out loud in the audience. And the reason was because what I was talking about to that group that day, this was some sort of SaaS conference, was our journey of pricing, and how every price we ever had... like, the pricing we have today could not look more different from that first price.

[00:13:45] Luke Freiler: Ironically, it's actually in a lot of ways closer to my first vision for the company. But every price we had for about a decade and a half after that had some roots in that original price we made up on the fly. And the reason was because that price was accepted. Scott basically threw down the gauntlet and said, 'Look man, this is what the market will bear.

[00:14:05] Luke Freiler: They accepted it. This is what's gonna happen.' And what we recognized later is that they were actually a unique company in a unique situation. What they had was an operating system to test. And if you think about it, operating systems were basically the first agile-developed products in tech, because you couldn't just sit on your hands for two years

[00:14:26] Luke Freiler: between updates. There are security problems and compatibility issues constantly coming out, so people have been patching operating systems frequently, forever. Where I'm going with this is, they had a unique need that we didn't understand was a unique need, yet we pinned that as the market.

[00:14:43] Luke Freiler: For years after, and I used to have this deck that beautifully outlined this, every price we came up with from there, every new model, somehow traced back to that original price. So if it was first $300,000 with $60,000 in support, I would make the argument of, oh, let's get it down to,

[00:15:00] Luke Freiler: you know, $120,000 a year, and now it's under $20,000 a year, so over a certain course it's the same amount of money. Then we'd say, okay, well, our average customer has six seats, six full-time users who live in this product, so let's sell it for $20,000 a seat. I could actually trace back a decade and a half of pricing strategies that were all hooked to this idea of, we don't wanna give up or cannibalize that very first customer.

[00:15:24] Luke Freiler: And the problem with that thinking is that, for the longest time, we didn't accept that that first customer had a very unique model. Again, the irony is they were just early to agile, and because they had agile development, they had a continuous need. They could justify building a team around this in one place, and therefore they could justify the budget to support that team.

[00:15:44] Luke Freiler: Whereas for the next decade-plus, most companies were putting this on individual product managers, like we had at Ericsson, which meant they were only actually running a beta test every couple of years. And when you're decentralized in that form, you don't have a centralized team or a centralized process.

[00:16:00] Luke Freiler: So you don't have tools to support that centralized process, and you don't have a budget for those tools. And that held us back. Again, we were bootstrapped, so things like awareness weren't something we could invest heavily in, and sticking to those guns kind of held us back too. It was only in recent years, after I took over as CEO and wanted to get a lot more aggressive with how we approached the market and rethink our go-to-market strategy, that we got around that.

[00:16:23] Luke Freiler: But we lived with those. I mean, it funded our first year, so I can never say it was a mistake. It really did help put the business on the map, gave us an incredible customer, a wonderful relationship, and all sorts of social proof. But it also kind of stuck us to our guns way longer than we should have, because we didn't accept that the context had changed. And

[00:16:41] Luke Freiler: now any company that's grown up in the last 10 years or so is agile. They're typically doing consistent development on most products or services. So our newer customers, the Pelotons and Sonoses and Rokus of the world, they all have teams for this. They all centralize efforts for this, because they have a constant need.

[00:16:59] Luke Freiler: Therefore, again, it went back to that idea that we need a tool, or services, to support that. So yeah, it's been an interesting journey that's gone in kind of all directions.

[00:17:10] Matt Paige: No, that's really interesting. It's interesting how that initial frame of reference just kind of gets burned into your memory.

[00:17:16] Matt Paige: Like you talked about, you were always working from that point, and it was kind of taking you down the wrong path in a sense. That's really interesting. Let me get back real quick to... the product wasn't fully ready in the beginning either, and you mentioned y'all were bootstrapping this because you were coming out of, effectively, the dot-com

[00:17:39] Matt Paige: boom and bust, right? So funding... it's not like it was by choice per se, it just wasn't an option.

[00:17:44] Luke Freiler: We didn't even try to raise money. No point.

[00:17:47] Matt Paige: Yeah. And I think this is an underappreciated thing for a lot of companies: services is a lot of times a very viable way to kind of help fund what you're doing, and you also learn as you go.

[00:17:59] Matt Paige: I know with HatchWorks, we've been bootstrapped as well. We are a services company, so it allows for more of that. But talk me through that journey, because I think it's a really interesting story, how you got into the services game and how it's evolved over time.

[00:18:13] Luke Freiler: So yeah, even before that big software deal, we had spent about, I don't know, nine months or a year building up the platform, and at

[00:18:20] Luke Freiler: this point we're taking on separate contract work, 'cause we're all like 20, 21 years old and don't have life savings. At that time, through a friend of a friend, we met this brilliant guy named Mike Fine. Mike had spent a bunch of years running beta teams at a few big tech companies.

[00:18:37] Luke Freiler: At the time, he was the most passionate person I had ever met about this whole concept, this whole domain. When he heard we wanted to start a business around it, he was instantly all in; it was in his blood. And the pitch he gave us, which to this day is still alive, was, 'Hey, while you're building the software, why don't we deliver managed services on top of it? There

[00:19:01] Luke Freiler: is a need out there.' Especially if you look back at that episodic need of once every year or two per product manager. He said there's a need out there, and if we're delivering services on top of the software, the software doesn't need to be done yet. The last 10 to 20% of a product takes 80% of the time, when you're polishing and getting it ready for users and going through the UX and everything else.

[00:19:20] Luke Freiler: So when we're working hand in hand within the company, we don't need that level of polish; we can do the heavy lifting behind the scenes. He made a compelling pitch, we started working with him virtually immediately, and we started delivering managed services. Initially it was challenging, because we're now effectively running two companies in one.

[00:19:39] Luke Freiler: We wanna be a software company, but we also have managed services. At the same time, those services lean on the software heavily. They're a tech-enabled service, and that means they have pretty great margins. As we innovate on the software, we can directly impact the margin of our services, which is a really great thing.

[00:19:55] Luke Freiler: So to this day, we still offer managed services; it's still a part of the business. And most importantly, as you alluded to, it became what we think of as our test kitchen. We knew we now had this central team that ran a more diverse set of beta tests than virtually any company in the world,

[00:20:12] Luke Freiler: and therefore we could learn and implement. From that, we started to recognize that, hey, one of the problems in our space is that there's no framework. Everybody does it differently, everybody's recreating it from the ground up, everybody's wasting a lot of time, and the opportunity cost for them is huge.

[00:20:27] Luke Freiler: So what if we stopped thinking about just the technology and started thinking about the process and everything layered on top of it that companies could reuse? And because we are at the center of this and run so many of these tests, we don't even have to go learn it from everyone. We can, and we do, but we have the information directly.

[00:20:42] Luke Freiler: We have information that companies would never wanna share with each other anyway, right? It's right in our wheelhouse. So that led us to making a framework and certifications and everything else, and just mastering our space, and recognizing that if this category is going to flourish and achieve its goals, it can't just be software solving a problem.

[00:21:00] Luke Freiler: In IT, the typical way is software, process, and people: you put those three things together to achieve something. And we recognized that we had the software, that's what we were focusing on, and the people were our customers. Ironically, the process we'd already been doing, but we'd never really thought about it as a gap.

[00:21:18] Luke Freiler: It's been a while now since we implemented that, and it's been pretty successful for us; we've been very happy with it. Our goal was to just get rid of that first thousand hours that every company was gonna have to spend reinventing their own wheel. And it's been great. So services to this day are a big part of our business, and likely will be indefinitely.

[00:21:36] Luke Freiler: Y you kind of hit on the

[00:21:37] Matt Paige: topic there. It’s the, the dog fooding, uh, uh, um, element of it, right. Where you’re using your own product to enhance the product. H how do you think of that in terms of, you know, thinking about your own RO roadmap for your product, how you think about Sure. You know, prioritizing your backlog, like what’s going through your mind, because there’s all kinds of different levers from taking.

[00:21:57] Matt Paige: taking care of technical debt to new features. What's your approach to that?

[00:22:01] Luke Freiler: So, I'm uniquely positioned to think about that, probably more than most CEOs, because I am very much a product-driven CEO. I still lead our roadmap. I still live in Figma like a product manager would.

[00:22:13] Luke Freiler: I wake up Saturday mornings and work in Figma for fun; I'm just that kind of nerd. So I'm very, very involved, and I beg my accounts team to get me roadmap meetings with literally every customer possible. I show them everything we're working on. I take feedback in real time.

[00:22:29] Luke Freiler: I prioritize it from there. We're on a very agile process where we have a release every week, which is typically customer-facing bug fixes and small improvements, and then we have a significant feature release every month. At any given time, we're working on three to four months of those with different subsets of the dev team.

[00:22:49] Luke Freiler: So for us, it's very interactive with our customers. We're in the process right now of shifting, in fact we'll be announcing it, it might already be on our What's New page, but if not it will be this weekend: we're shifting to a feature-flag-based beta program. In the past we've had what we call our Go Early program, where customers can opt in, and they can either opt in their main implementation or opt in a staging one.

[00:23:11] Luke Freiler: And we have a unique challenge, in that beta testing the beta testing platform is a snake eating its own tail. We don't want to create friction through an unfinished product for their customers, who are already dealing with an unfinished product; it multiplies on itself. So what we're doing now is what I'd consider the more modern approach: being able to flag certain features on and off on demand for different customers, and basically pre-releasing everything at least a month ahead of time.

[00:23:35] Luke Freiler: Giving them some time to adopt it and give us feedback on it, implementing those improvements and changes, then rolling it out in the next month and rolling out the next new thing. So this month, actually, as of this weekend, we're gonna announce what the first feature-flagged feature is.

[00:23:49] Luke Freiler: Everybody who's a customer is able to kind of opt in and try it out, and from this month on, we'll have those with each new release. Our main process is to dogfood our own stuff, to use as many customers as we can. There are a lot of challenges with that, like I said, snake eating its own tail, and our customers are very busy, but thankfully we've got a lot of tactics and tools to get past that.

[00:24:11] Luke Freiler: So it's been pretty successful for us.
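For readers unfamiliar with the mechanism Luke describes, a per-customer feature flag check usually looks something like the sketch below. The names here are hypothetical, and this is not Centercode's implementation; real systems back the lookup with a flag service or database rather than an in-memory map.

    // Minimal sketch of a per-customer feature flag check (hypothetical names).
    const flagOverrides: Record<string, Record<string, boolean>> = {
      "customer-123": { "new-release-notes": true }, // opted in a month early
    };

    function isEnabled(flag: string, customerId: string, defaultOn = false): boolean {
      return flagOverrides[customerId]?.[flag] ?? defaultOn;
    }

    // Ship the code dark, then turn it on per customer ahead of general release.
    if (isEnabled("new-release-notes", "customer-123")) {
      console.log("Showing the pre-release feature to this customer");
    }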

[00:24:14] Matt Paige: And you talked about agile in this process of releasing every week and all of that. How long has that been going on? Were y'all agile from the very beginning, back when it was this new, novel concept?

[00:24:27] Luke Freiler: No. Well, yes and no.

[00:24:28] Luke Freiler: In fact, and I think many companies have gone through some variant of this, and I'm always willing to fall on my sword and talk about it, we were agile from a very early age, and I just fell in love with it. I think I was CTO at the time, and I remember my first write-up to the company about how interesting and amazing it would be to finish a feature and get it into people's hands, right?

[00:24:47] Luke Freiler: The main pitch for agile was, rather than doing a big release infrequently, where half the features are done and sit on a shelf for six months while the other half get done, and therefore aren't producing value for your customers... when people are paying for something monthly or annually, they want persistent value; they're paying for a service.

[00:25:05] Luke Freiler: It just clicked in my head. There are a few things in my career that were immediately just, 'Oh my God, that's amazing, why aren't we doing that?' Content marketing was one; I absolutely love it. This was another. So we tried it, we built our own system, we did it kind of our own way, and we failed miserably.

[00:25:20] Luke Freiler: It just did not go well. And part of it was that, at that point, the enterprise was not super savvy about agile delivery or acceptance. In other words, they didn't want releases to come that quickly. And when you have a company that builds a process and builds a team on top of a platform, they need a lot of time to redo how that team works, and anything you change can disrupt that team.

[00:25:48] Luke Freiler: So we were in this situation where we were capable of releasing monthly, but we had customers basically saying, 'Yeah, I only want it quarterly,' or 'I only want it every year,' even. And we're sitting here going, okay, well, we're gonna roll out every week, these guys are gonna get it every month, these guys are gonna get it once a year, and we're gonna support, oh man, eleven releases that they don't have.

[00:26:07] Luke Freiler: And, and it was just a nightmare. So, Eventually, we basically walked away and said, okay, we’re not gonna do Agile anymore. We’re still gonna operate Agile internally, but we’re not gonna release Agile. Because we’d gotten used to it and started to like it, but our customers weren’t accepting of it. And so then at that point we’re releasing every like three to six months, we’ll do a big meaty release just because at that time our customers were pretty much exclusively enterprise and, and that’s what made sense.

[00:26:31] Luke Freiler: And as we built a bigger team and our engineers got more confident and we put project management structure in place and whatnot, we started to get pretty confident that we could do the rollouts in a better way. And more importantly, this is what happens: everybody kind of matured and it became the norm.

[00:26:49] Luke Freiler: It's the companies like Slack and whatnot that don't give you a choice, and therefore everybody kind of got used to it. So now, and I'd say for about a year now, no, two years, more than that, I don't know how long, for a long time now, we've been pure agile, doing these monthly releases. You'd have to go through the mega backlog

[00:27:07] Luke Freiler: that is our What's New site. And our customers seem fine with it. Every once in a while you'll get a little bit of pushback on something, because we didn't do a good job of prepping them for it. And we've done some interesting things that we question; we constantly experiment.

[00:27:21] Luke Freiler: Like, we always release our release notes early now, and I still question if that's the right idea or not. There are pros and cons to it, but at least if those customers are interested, they can get a sense of what's coming. They can ask questions ahead of time, because they might not want to be part of the test.

[00:27:35] Luke Freiler: They might just want it when it's out, but they still wanna know. So we're constantly experimenting. It's a challenge, and it's directly proportional to the size of the company: if they're a slow-moving, very large company with a big infrastructure, they're not gonna adopt most of your new functionality anyway.

[00:27:50] Luke Freiler: So you have to find a way to roll it out that doesn't impact them. And the smaller guys want every bit of value and efficiency they can gain, so they'll take it as quickly as you can give it to 'em. So we try to make the best of both worlds. Interesting journey there.

[00:28:04] Matt Paige: I'm gonna go back and forth a bit here.

[00:28:06] Matt Paige: I wanna get into the hot topic right now, whether it's hype or not: generative AI. The concept of AI, even large language models, has been around for a while, but the hype around it and the access to it have just accelerated. I'm curious, from your perspective, how does that impact your business, your product, the concept of usability testing?

[00:28:30] Matt Paige: Are there any views you have of the future, of how things completely stepwise change from what they are today related to your world? Or do you see it as hype that'll die down, and not much change?

[00:28:45] Luke Freiler: I definitely don't see it as just hype.

[00:28:48] Luke Freiler: I mean, obviously there's a hype component to it, but I think it's incredible, and I've been fascinated since, like, day two. My marketing VP was fascinated from day one, and he spent a few days trying to get me interested. I'd played with GPT when it was a little earlier on, and it wasn't as impressive.

[00:29:03] Luke Freiler: And then the thing that turned me was, well, two things. One, he had it write a rap about our company. And what was amazing about that was he didn't tell it anything about our company; it just knew. And we're a pretty small player, so for it to know about us at that point in time was surprising.

[00:29:19] Luke Freiler: But it wrote a very accurate rap about our company and culture that just broke my brain. And then the next thing he had it do, which to this day might be one of the most impressive things I've ever seen it do (this was like last November), was he had it write a social post that was Thanksgiving themed.

[00:29:37] Luke Freiler: It was holiday themed, and what was amazing about it is it created its own analogy. All he did was say: I want a social post that's Thanksgiving themed for my company, Centercode. And it knew that we were in the beta test management space, and it made this insane analogy about how beta testing is like the final spicing of your food.

[00:29:57] Luke Freiler: And if you don't put the spicing on at the right time, your meal's gonna be a failure. And I'm sitting here going, I can't process mentally how it came to that. We've been doing this for 20 years and we've never thought of that analogy. And there's a temporal aspect. I mean, it really is incredible.

[00:30:14] Luke Freiler: It was very shortly after that I basically went to my team, and I still subscribe to this line of thought: look guys, AI is not going to replace you, but someone who uses AI will. So let's be that good; let's be that person, right? And I kind of put it out there: let's imagine that AI is saving each of us, you know, 20 minutes a day.

[00:30:34] Luke Freiler: And let's imagine the company over there isn't using it at all. How much of an advantage do we have as a company? Once you roll that up, that's something we really have no choice but to say yes to. So I'm very positive on it in that regard. At the same time, I have all sorts of reservations about just how immature it is right now.

[00:30:53] Luke Freiler: One of the interesting things is there's just no such thing as a mature AI product right now. We just did our entire performance review process with an amazing tool; I'll give them a shout-out, it's called Hypercontext. We use it to manage meetings. But they invited us to try out their new tool that uses AI to write performance reviews. Imagine a scenario where you're already collecting a year's worth of meeting notes for everybody in one system.

[00:31:21] Luke Freiler: What could you do with that in terms of understanding how people are performing and whatnot? It's just fascinating to think about. But as you might imagine, as amazing as it is, it's still immature. There is nothing that's mature yet, so we can't rely on it. They approached it very smartly, though, to where it's basically giving you a starting point, which is a great way to think of AI.

[00:31:38] Luke Freiler: They're not trying to say, here's an AI review, just hand it to them and walk away. They're basically saying, here's everything we learned; now edit it, make it your own, and make sure it's accurate and whatnot, which I think is the right way to approach AI. We have a bot in our tool that

[00:31:52] Luke Freiler: we started developing about three years ago, and, there's some religion to this, but we refused ever to call it AI. We never thought of it as AI; we still don't think of it as AI. We certainly think of it as machine learning, kind of a tier below AI, but it's not what ChatGPT is. And on

[00:32:08] Luke Freiler: one hand you could say, oh, what I'm really saying is it's not as impressive as ChatGPT. And that's actually not what I'm saying. What I'm saying is it's not as powerful or unpredictable as ChatGPT is. And in our world, what our bot does is effectively manage all tester engagement for you.

[00:32:25] Luke Freiler: The biggest problem in our space is getting people to engage and participate and give you feedback. And we built a bot that builds a behavioral profile on every individual and communicates with them through the channels that they prefer, at the times that they prefer, simply by watching how they perform and what they do.

[00:32:41] Luke Freiler: And all of its messaging and everything is unique, and it's a bot. And our go-to-market was: we're not gonna say it's a bot, but we're also not gonna say it's not a bot; we're just gonna kind of see what happens. And most importantly, we can control the code 100%. We know exactly how it works, which is not true of most generative AI and the black box of it.

[00:33:00] Luke Freiler: So we know it's not gonna go rogue and piss off somebody. But we also know that it's not sending data anywhere. It's all within our systems, it's all controlled and whatnot, which is another big AI problem that most of our customers have. So we've run contests internally and prototyped a million different ideas and whatnot.

[00:33:19] Luke Freiler: We've run contests to think of every single way that AI could benefit our company as a tool, every single way we could implement AI into our platform as a tool. And we've got this enormous backlog of incredible ideas, but we have to be careful with it. It has to be reliable; it has to be somewhat predictable.

[00:33:36] Luke Freiler: If people are gonna put their brand behind it, they need to be sure that it's not, you know, Skynet. So that's a lot, but I'm in love with it. I think it's fascinating. I was so excited in the early days showing my mom and my kids. It's been fun to watch people's brains explode and try to figure it out.

[00:33:57] Luke Freiler: To me, it basically went: the PC, the internet, the mobile phone, and this. Those are the major technological breakthroughs we've had in our lives, and this one is just as big as the rest in my mind, and could be bigger.
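As a rough illustration of the behavioral-profile idea Luke outlines (not Centercode's actual code), one approach is to tally where and when each tester has responded before, then send the next nudge through that channel at that hour. All names and numbers below are hypothetical:

```python
from collections import Counter

# Hypothetical response history for one tester: (channel, hour_of_day)
responses = [("email", 9), ("email", 10), ("sms", 20), ("email", 9)]

# Pick the channel and hour where this tester has engaged most often.
best_channel = Counter(ch for ch, _ in responses).most_common(1)[0][0]
best_hour = Counter(hr for _, hr in responses).most_common(1)[0][0]

print(f"Next nudge: via {best_channel} around {best_hour}:00")  # email, 9:00
```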

[00:34:13] Matt Paige: Yeah, I'm with you there. I think it's on that same level. Let me ask you this.

[00:34:16] Matt Paige: Do you see a future where you actually have a generative AI agent, or, you know, bot, whatever we want to call it, doing the usability testing instead of an actual person? Is there some utopian future where that's the case, or do you think you always need that human involved in the process?

[00:34:40] Luke Freiler: I would not be comfortable at this point in time predicting too far out on anything with AI. That's safe. I'm looking to hedge my bets, I guess. But, you know, again, I would've never imagined it could've written a rap that was weirdly sensible about my company without me telling it anything about my company.

[00:35:02] Luke Freiler: Just unwinding that is still fun to think about, and endless. I've never felt both more excited and overwhelmed about a technology in my life, ever. Nothing's happened this quickly, so the rate at which it's happened is impressive. But at the same time, I do think we're about to see a lot of diminishing returns.

[00:35:22] Luke Freiler: I think they kind of had it under wraps for a long time while they did a lot of the back work, and then they exposed it and it moved really quick for a little while. But again, that's 'cause that work was already done; they just hadn't rolled it out yet. They weren't confident in it. Now I think we're starting to see a little bit of a slowdown. But what's cool about AI is we're moving on from the tech being the impressive part.

[00:35:42] Luke Freiler: Now the impressive part is the creativity of all of the developers who have access to that new thing and can think about entirely new ideas that aren't out there. So, back to your question, though: I think there will be a level of user testing and product testing that happens in the AI space.

[00:36:01] Luke Freiler: I mean, I know there are already companies working on it. We've talked about it, we've thought about it. It'll happen. Will it ever fully replace? I doubt it. In part, again, not knowing where it goes: who knows, if it's on a robot in your house that can walk around, then it's a whole different game.

[00:36:16] Luke Freiler: But there are a lot of things that happen in the physical space and have to do with humans and whatnot that are very, very difficult to predict. But could it, like all things, take out the 20% of stuff that wasn't quite obvious but is predictable? Yeah, it could definitely do that, and will. But depending

[00:36:35] Luke Freiler: on the product, just given the nature of what technology is, I think you're always gonna have to be testing what comes next.

[00:36:45] Matt Paige: And anyone looking to go a little deeper on the generative AI side, check out our previous episode with Jason Slacker. You mentioned the idea of everybody being 20 minutes more productive; he kind of framed it as, imagine you have a thousand interns or a small country. It's about trying to get people thinking and framing

[00:37:03] Matt Paige: outside of their normal working model, which is kind of an interesting mental map, I guess I should say. An interesting way of framing things.

[00:37:12] Luke Freiler: Yeah, that's very savvy.

[00:37:16] Matt Paige: But Luke, really appreciate the time today. Where can folks find you, find Centercode? What's the best way to do that?

[00:37:22] Luke Freiler: Yeah, absolutely. So centercode.com is the website, as you might imagine. My email is just luke@centercode.com. I'm not nearly as active on social media as I should be, so while I am on LinkedIn and you can connect with me and whatnot, I'm also just kind of an old-school email guy, so feel free to shoot me a message.

[00:37:40] Luke Freiler: You know, our product is very different than the stories we told. Initially it was aimed at big teams and whatnot, and in recent years what we've done is evolve it to really target product managers, starting all the way at free, where you can run a whole program on your own, up to a very inexpensive self-serve tier.

[00:37:56] Luke Freiler: And within just a few minutes, you're gonna be recruiting your own customers. I think it starts at about 40 bucks a month. It's really, really simple and straightforward. But then as your program matures, if you do choose to centralize and whatnot, the platform can grow with you to more of its traditional base.

[00:38:09] Luke Freiler: So, kind of options for everybody. If you're looking to engage with your audience, or even do internal dogfooding, anything that involves using your product and gathering feedback (issues, ideas, praise, or whatever else is gonna help improve your product), check us out. You can try it for free, you can play with it.

[00:38:24] Luke Freiler: And my team is built of product managers who are there to help you and talk about it with you. So, happy to help.

[00:38:32] Matt Paige: Yeah. And I know we didn't get into that, but that's an interesting piece: the evolution from the 300K deals to now this PLG motion where you can try it for free. And that was the unique thing too.

[00:38:41] Matt Paige: I know we kind of just ended, but keep going. The fact that you have product managers almost acting as your salespeople just kind of blew my mind a little bit. You actually have knowledgeable people talking to the customers, people who actually know the customers, in a

[00:38:55] Luke Freiler: sense, yeah. When I talked about doing that, and talked about why I wanted to do that, a group of peer CEOs thought I was crazy at first.

[00:39:04] Luke Freiler: And as I've gone through it, I've really enjoyed it. There is certainly a place for sales, and for me it's more about relationships with the customers and growth and making sure their needs are met. But what I don't want is somebody on the phone with a customer who doesn't have

[00:39:20] Luke Freiler: the agency or the right motivation. You know, my dream org from a sales perspective, and we're there now, is nobody selling the product has a commission. And that's because I want them to be motivated to sell the right thing to the customer, as opposed to the right thing to hit their quota.

[00:39:35] Luke Freiler: That quota is valuable to the company; it's how we eat. But if you sell the right thing to the customer, they're gonna use it, they're gonna grow, and it's not gonna be an issue. So, yeah, I'm very proud that we have a product-focused team. We have the PLG method in place to get people going, and then if you do want to move up from there, we have experienced product managers, literally the product managers of our platform, who step in to help with those conversations.

[00:39:58] Luke Freiler: And I'm really excited about that, and I'm trying to use as much of my time to have those conversations as I can, 'cause they are one of the fun parts of the job. So it's a good time.

[00:40:08] Matt Paige: That’s awesome, Luke. Well thanks for joining us today. Have a good one.

[00:40:11] Luke Freiler: Thank you, sir.

 

The post From 79 Cent Tacos to Technological Milestones appeared first on HatchWorks.

]]>
Off the Shelf or Custom… That Is the Question https://hatchworks.com/built-right/off-the-shelf-or-custom/ Tue, 05 Sep 2023 09:00:29 +0000 https://hatchworks.com/?p=29825 The decision to build or buy when it comes to software is a thing of nightmares for many. There are so many considerations, from cost to timeframe, expertise, team capacity and scaling complexity, to name a few. But, with the right framework in place it doesn’t have to be that scary, says Built Right host […]

The post Off the Shelf or Custom… That Is the Question appeared first on HatchWorks.

]]>

The decision to build or buy when it comes to software is a thing of nightmares for many. There are so many considerations, from cost to timeframe, expertise, team capacity and scaling complexity, to name a few. But with the right framework in place, it doesn't have to be that scary, says Built Right host Matt Paige.  

To walk you through how to make this decision with confidence, we welcome guest Jerrik Arango, Principal Software Engineer at HatchWorks, onto the show to talk about the main factors that need to be considered. He shares the framework they have in place to help with the process and provides some great examples along the way.  

Keep reading for top takeaways or tune into the episode below.  

What should people be thinking about when faced with the decision to build or buy? Matt and Jerrik aim to demystify this process by providing a framework to help you evaluate the options. Here are the nine factors that will make the crucial 'build vs buy' decision a lot easier.  

1. Competitive differentiation 

Consider whether the software you’re looking to build or buy is your ‘secret sauce’. Some of the questions that need to be answered are:  

  • Is it a core process unique to your business?  
  • Is it something that will set you apart from competitors?  
  • What resources do you have?  

Another consideration is whether you want complete autonomy over your roadmap. If you buy a solution, you may just be allowing someone else to own the roadmap and ideally, you want a partner that you’re buying from to have the same vision. As Matt explains, this is a big part of the decision-making process.  

2. Cost

Evaluate both the short-term and the long-term costs associated with building vs buying. While building can entail upfront development expenses and ongoing maintenance, buying might incur licensing and subscription fees. Factor in implementation, support and potential upsell costs when assessing the total expenditure.  

3. Scaling and complexity 

Determine the complexity of the software and whether it aligns with your team’s expertise. If it’s highly strategic and core to your business, building custom software might be a better fit. However, for less strategic components, an off-the-shelf solution might be perfect for your needs.  

4. Maturity and commoditization  

Consider the maturity level of available solutions. Off-the-shelf products are often more mature, offering immediate functionality. Building custom software allows you to tailor features but, explains Jerrik, it might require more time and resources to reach the desired maturity level.  

5. Timeframe 

How quickly do you need to deploy the software? If speed is of the essence, an off-the-shelf solution can offer faster implementation. Building custom software may take longer, but it can provide greater flexibility when it comes to aligning with your business needs.  

6. Internal expertise 

Do you have the expertise and capacity within your team to build and maintain the software? If you lack the necessary skills in-house, you can augment your team with external experts or opt for a third-party solution.  

7. Team capacity 

Consider your team’s availability and capacity to take on the software development process. Building software requires dedicated resources, which might detract from other crucial tasks. Off-the-shelf solutions can free up your team’s time. 

8. Support structure 

Examine the ongoing support offered by both options. With custom software, you’re responsible for maintaining it, while off-the-shelf solutions can provide operational support. Weigh up the pros and cons of support based on your organization’s capabilities.  

9. Data and security 

This is, of course, an extremely important consideration, explains Jerrik. Does your software deal with sensitive data, and would the third party own that data, or would you? And is there a competitive disadvantage or advantage to that? These are some of the things to look at when making a decision. Carefully assess the security measures provided by off-the-shelf solutions, says Matt; a custom solution could allow you to tailor security measures.  

Ultimately, the decision to build or buy a software solution involves a complex interplay of factors. By evaluating these nine considerations, you can reduce the stress and make an informed decision that aligns with your business goals and constraints.  
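One lightweight way to put these nine factors into practice is a weighted scoring matrix: score 'build' and 'buy' on each criterion, weight the criteria by what matters most to your business, and compare totals. A minimal sketch in Python follows; every weight and score here is an illustrative assumption, not a recommendation:

```python
# Weights reflect priorities (competitive differentiation weighted
# heaviest, per the discussion); scores are 1-5 per option.
# All numbers below are made up for illustration.
CRITERIA = {
    "competitive_differentiation": 3.0,
    "cost": 2.0,
    "scale_and_complexity": 1.5,
    "maturity_and_commoditization": 1.5,
    "timeframe": 1.5,
    "internal_expertise": 1.0,
    "team_capacity": 1.0,
    "support_structure": 1.0,
    "data_and_security": 2.0,
}

def weighted_score(scores: dict) -> float:
    """Sum of weight * score across all nine criteria."""
    return sum(CRITERIA[c] * scores[c] for c in CRITERIA)

build = {"competitive_differentiation": 5, "cost": 2, "scale_and_complexity": 3,
         "maturity_and_commoditization": 2, "timeframe": 2, "internal_expertise": 4,
         "team_capacity": 3, "support_structure": 2, "data_and_security": 5}
buy = {"competitive_differentiation": 2, "cost": 4, "scale_and_complexity": 4,
       "maturity_and_commoditization": 5, "timeframe": 5, "internal_expertise": 3,
       "team_capacity": 4, "support_structure": 4, "data_and_security": 3}

print(f"build: {weighted_score(build):.1f}, buy: {weighted_score(buy):.1f}")
```

The point isn't the arithmetic; it's that writing the weights down forces the team to agree on what actually matters before the debate starts.  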

Plus, explains Matt, advances in low code and no code solutions can offer alternatives that may bridge the gap between custom and off-the-shelf software, expanding your options further.  

For more information on how to approach the build vs buy decision, check out the full episode.  

Facing the Build vs Buy dilemma in tech? Make sure to subscribe to the Built Right podcast for more insights and discussions like these. Share your thoughts and experiences with us. Let’s create, innovate, and elevate the world of design together! 

[00:01:00] Matt Paige: Hey everybody, welcome to the show. Excited you've decided to join me today, considering you've got many other things you could be doing with your time.

[00:01:23] Matt Paige: I appreciate it. And today we're going into a topic that is the thing of nightmares for many. Just thinking about it can cause sweaty palms, shortness of breath, and even panic attacks. I'm talking about the decision to build or buy when it comes to software. It shouldn't be that scary, though.

[00:01:42] Matt Paige: Today we're gonna help you make this decision easy with a little framework we like to use. We're gonna get into the main things to consider when making this decision, a framework to reduce the stress ('cause it shouldn't be that stressful), and we'll give you some examples along the way. And to help me break this down, I'm joined by Jerrik Arango, one of HatchWorks' principal software engineers.

[00:02:03] Matt Paige: He's got over 18 years of experience with a wide range of technology expertise, from e-commerce to sporting events, even presidential elections. I may want to learn some more about that as we go; I'm curious now. But you've worked at brands like the PGA Tour, Cartoon Network and Fanatics. Welcome to the show, J.

 

[00:02:24] Jerrik Arango: Yeah, thanks Matt.

 

[00:02:25] Jerrik Arango: I’m glad to be here. 

 

[00:02:28] Matt Paige: Awesome to have you. So Jerrik's been through this fun build-versus-buy adventure many times, with clients and internally, so he's got some war stories along the way. But Jerrik, set this up: at the core, what should people be thinking about when they're weighing build versus buy?

[00:02:48] Jerrik Arango: Yeah. You're at this point for a reason, right? You've got something that you want to deliver, that you want to bring to the world. And lots of times we hear that's your secret sauce, right? There's something unique about you and that product that you want to bring forward.

[00:03:12] Jerrik Arango: So you really want to consider that. A lot of times you can buy software that will help you, but that software normally is a common denominator; it's something that'll help you standardize. Your secret sauce is probably not the payroll for your employees.

[00:03:34] Jerrik Arango: And so for that you might go with a payroll software, or maybe for internal messaging you use something like Slack.

 

[00:03:47] Matt Paige: Yeah. And we've seen this before too. We've had clients come to us with this great idea for a product, and what they're describing is like a CRM or an ERP.

[00:03:55] Matt Paige: And we're like, people have perfected this; maybe it isn't the best thing to build. But you hit on a core point. At the core of it, if you're deciding to build, it should be because it's your secret sauce, right? It needs to be tailored to your needs; you have a deep expertise in this domain.

[00:04:15] Matt Paige: But for buying: standardize. If something already exists out there and it's not part of your core differentiator, just buy it. At the core, those are some of the big factors. And we've got a little framework that we like to use. It goes through nine different criteria, and we'll pop through each one; we'll go deeper into some than others.

[00:04:39] Matt Paige: I'm sure Jerrik will have some examples we can jump into; I've probably got a few as well. But I'll hit the list, and then, Jerrik, we can start to step through 'em. If you think of the evaluation criteria for this decision, the first one is competitive differentiation.

 

[00:04:55] Matt Paige: Then it's cost; scale and complexity; maturity and commoditization; timeframe; internal expertise; team capacity; support structure; and data and security. It seems like a lot of different factors, but they're not scary things. These are things that are gonna help make this decision a lot easier. But let's jump into the first one, competitive differentiation.

[00:05:20] Matt Paige: What is this factor? What questions should you be asking? And ultimately, what feeds into the build-versus-buy decision with this criterion?

[00:05:31] Jerrik Arango: Yeah. As I mentioned: is it that secret sauce of yours, right? Is it a core process of your business?

[00:05:42] Jerrik Arango: You want to challenge it with that question. Also, as you mentioned, differentiation: will this set you apart? Will this make you different? Or, like you mentioned with CRM, is there something out of the box that's already at the forefront?

[00:06:03] Jerrik Arango: What makes you different from everyone else? Another question you could think of is: with the resources that you have, do you have the skillset to support that? Do you have that deep knowledge and understanding of this piece of your product or your business to be able to deliver on it?

[00:06:30] Jerrik Arango: Also, the roadmap. Sometimes you might be a startup; you might want to test this product with a small group of customers or users, and you want to know that roadmap and how much time you have to invest.

 

[00:06:56] Matt Paige: I think the roadmap's an interesting one too, 'cause we've seen this with clients, and it's a big part of the decision. If you go and buy a solution, you're allowing somebody else to own the roadmap, and ideally the partner that you're buying from has the same vision for it.

[00:07:19] Matt Paige: You maybe can give some recommendations, but you really don't have the autonomy to influence that roadmap as much. That's a big piece of it. And if you're trying to build something that is gonna be differentiated, that you do want to have control over, I think that's one of those big ones that

[00:07:35] Matt Paige: you really gotta think through: is it okay that you're not gonna have a ton of influence on new features, new functionality, and all of that?

[00:07:43] Jerrik Arango: Yeah. And diving deeper into the roadmap, there are all kinds of opportunities and decisions you can make. If you want to get out fast, to more of a proof of concept, it makes even more sense to ask: what can I get off the shelf that I can use to get the idea across? But then, right, you don't wanna build your secret sauce, your product, on top of a POC. And that's where you might take a different approach.

[00:08:16] Jerrik Arango: This is what we're gonna build custom; this is what our product rests on. And how is that foundation? You want to build on something solid. So that might be using a third party off the shelf, or going more custom. And if we're talking about uniqueness, most of the time that will fall a little more on the custom side.

 

[00:08:40] Matt Paige: Yeah, and you bring up an interesting point; this may be worth getting into later. You talk about proof of concept: maybe it is good to use something off the shelf just to prove it out, and if it has legs, then maybe you do truly build it custom. And then you get into the whole world of no-code, low-code and all of that, which allows you to move faster.

[00:08:59] Matt Paige: But it could have some restrictions down the road as well. So I think, let's go into... yeah, go ahead.

[00:09:09] Jerrik Arango: Yeah, no, I remember watching one of the previous episodes of the Built Right podcast, I think episode four, where you've got Joseph, and he touched on great points about the MVP and that process. That's a whole other thing you can look into when you're in that stage.

[00:09:30] Matt Paige: Yeah, that's a great call-out, the discussion with Joseph. We talked about MVP versus MVR when you're modernizing. Awesome content there. All right, so that's the first one: competitive differentiation. I think it's first for a reason; it's probably one of the biggest decision criteria.

[00:09:48] Matt Paige: And also think about weighting too: if you're truly running through this in a structured way, I would give more weight to competitive differentiation. But the next one for us is cost. Cost is always a factor when you're making a business decision, but take us through how it relates to a build-versus-buy decision.

[00:10:07] Matt Paige: What types of cost exist?

 

[00:10:09] Jerrik Arango: Yeah. If you're going custom, there are implementation costs. There's also that focus and opportunity cost: it's gonna take time to build it, and you're gonna have resources focused on that. Now, is this your main product, or is this something supporting it, or something on the side?

[00:10:34] Jerrik Arango: Either way, you're gonna have a good amount of focus pulled toward it, or it might be aligned, right? There's ongoing cost as well. Normally we can get a good, sophisticated application product out in about six months, but that product is gonna be ongoing, right?

[00:10:57] Jerrik Arango: There's maintenance; you might have feature development up to a certain point. But there's an ongoing cost, right? Even from infrastructure. And as I mentioned, there's opportunity cost if I'm focusing on this, maybe to vet the idea, though hopefully we've already done some legwork there

[00:11:20] Jerrik Arango: so that it's worth now taking the next step to invest in it. There's definitely opportunity cost if you're doing this: what else could you have been doing?

 

[00:11:29] Matt Paige: Yeah, and I think opportunity cost is one that often gets overlooked. If we focus on building this thing, we're putting resources towards that, so they're not gonna be able to do something else.

[00:11:40] Matt Paige: I think that's one a lot of people forget, but you really need to think through it. And a lot of times with build versus buy, I don't know if you see this, but I see a lot of people compare the buy side on software licensing and all of that, and when they think of build, they just think about building it to get it to market; they don't think about all the ongoing cost of maintaining and enhancing and all of that.

[00:12:05] Matt Paige: It's not just a one-time cost. You gotta maintain the thing after you've built it as well.

 

[00:12:13] Jerrik Arango: Yeah. And upfront, you might wanna go with the third party, where you'll have a lower cost upfront. But in the long term, custom will give you a better return on investment. It might take a little longer.

[00:12:27] Jerrik Arango: You might have to invest more upfront, but in the end you can save more on your investment, and you have more control for the future. I was in a situation not too long ago where it wasn't necessarily that secret sauce, but the secret sauce depended on this.

[00:12:49] Jerrik Arango: They were looking at different third parties that we could use, and while that was happening, I was working on developing a custom solution. And it has to do with the roadmap. This was more of a startup: how soon do you need to get to market to help support your startup, this product that's coming to fruition?

[00:13:15] Jerrik Arango: And this custom solution got built. Now, a chain is only as strong as its weakest link, right? And sometimes, going with a third-party solution, you also want to take into account how stable it is. If your product is sitting on top of this, you want that third party, that software, to be stable; you want to have control over it. Because you might have a great idea, you might have a great product, but if it's not dependable, you're gonna lose users quick.

[00:13:53] Jerrik Arango: And so, right, this custom solution, in this example, ended up being a fraction of the cost of any of the third-party offerings. You had a lot more control, and the business, the secret sauce, really depended on this software. So the custom solution here ended up winning and proving itself.

[00:14:19] Jerrik Arango: There were doubters in the organization, and there were challenges with this service, and there was a 'let's test it out, let's see it.' And the custom solution always won.

 

[00:14:33] Matt Paige: Yeah, no, that's really interesting. I never thought about that in terms of the dependency.

[00:14:37] Matt Paige: It may not be your secret sauce, but if your secret sauce depends on this thing, then it does become important, right? That you have autonomy over it, you own the roadmap, and all of that. So that's a great example to think through. All right, so let's go to the third one: scale and complexity. Talk us through this one.

[00:14:54] Matt Paige: When and how should you think about the scale and complexity of either a custom or a bought solution?

[00:15:01] Jerrik Arango: Yeah. If your solution is pretty simple, then you might wanna go more on the build side. And I think it comes down to...

 

[00:15:19] Matt Paige: It's almost this matrix view, right?

[00:15:23] Matt Paige: Because if it's not complex but super strategic, it's obviously, let's go build that. But if it's not strategic but very complex, don't touch it with a ten-foot pole. So it's that marrying of those two criteria together that factors in.

[00:15:43] Jerrik Arango: And kind of how I touched on at the beginning: that expertise, that knowledge that you have, right?

[00:15:50] Jerrik Arango: That's gonna be part of the reason, right? Your secret sauce. And so you want to focus on what you're bringing that's unique, right? What's unique about you, about the product. And so you might wanna buy when it's something that's supporting you, that you need, right?

[00:16:11] Jerrik Arango: But that isn't your lane. And so you can leverage a company that's a hundred percent dedicated to building out that side. They've got experts working 24/7 on that, and so you can leverage their expertise in your product, in your business.

 

[00:16:32] Matt Paige: No, I love that. And I gotta pause for a second. For those on video, you'll see this, but those just listening won't be able to: I keep seeing something swimming out to the side. Is that an eel you have in an aquarium behind you there?

[00:16:45] Jerrik Arango: Yeah, so usually I'll blur out my background, but no, it might get distracting. Those are called dojo loaches. They grow about a foot long.

[00:16:54] Matt Paige: Wow. Is it just the eel, or do you have other stuff in there?

[00:16:59] Jerrik Arango: No, I've got all kinds of fish in there.

[00:17:02] Matt Paige: Nice. Okay, cool. I gotta see that after we get done recording.

[00:17:05] Matt Paige: All right, next one up: maturity and commoditization. What's this criteria all about?

 

[00:17:16] Jerrik Arango: Yeah, just as we were touching on with maturity: if you're buying a solution that's gonna help you out, it will most likely be a lot more mature than you starting off building something. And so you can leverage that maturity, again based on your roadmap, on your timelines. But if you're going custom, it'll be less mature, though you'll be able to enhance it, to customize it.

[00:17:56] Jerrik Arango: It'll be unique to you. You really wanna weigh that roadmap and that time.

[00:18:05] Matt Paige: So, more time to evolve it. I think too, it gets back to the commoditization of the industry as well. We've talked about ERP and CRM: highly commoditized, proven and done before.

[00:18:21] Matt Paige: And I think that's where it's so important to start out with the competitive landscape of whatever you're trying to build: what exists out there? You gotta check that first before you go thinking about building. Now, if you're trying to disrupt the CRM game and you've got some new, unique way to do it, and that's what you're selling, that's a different story.

[00:18:41] Matt Paige: If this is just a function within your business, then yeah, that's a different way to think about it.

[00:18:51] Jerrik Arango: Yeah. You wanna focus on that uniqueness. What's the reason, what's the purpose of why you're starting on this journey? Even nowadays, consumers want choices, and sometimes you can have this, but someone does it better, right? There's the quality aspect. You mentioned CRM: there are CRMs out there, but the quality might not be there for the niche that you're in.

[00:19:25] Jerrik Arango: It might be a CRM for a specific industry, and it doesn't quite have the bells and whistles or the customization that you want. So can you build something on top of that? Do you build custom to support that, and make yourself different from the rest?

 

[00:19:43] Matt Paige: Yeah, no, that's a great point.

[00:19:45] Matt Paige: Sometimes off the shelf is built for the most, but not the few. It's kind of that 80/20 rule, and if you're in that 20%, there are cases where custom makes sense. Timeframe, so this is the next one, and we've talked a little bit about this. But when you're factoring in the time it takes to build something, how does that factor into build versus buy?

[00:20:13] Matt Paige: I think build, rightly or wrongly, gets the connotation that it takes very long relative to a buy: I can just buy it, it's ready to go off the shelf. But talk us through that, 'cause that's not always the case, depending on your scenario.

[00:20:31] Jerrik Arango: Yeah. If the timeline's tight and time to market is a higher priority, you definitely want to lean more toward the buying side.

[00:20:43] Jerrik Arango: If you have more runway and you have the available resources, then you'll probably want to go custom, where you have more control and can develop it as time goes on. There was another example I was on: a streaming service that we were building.

[00:21:04] Jerrik Arango: And there was this third party. Sometimes the software might not be the whole solution; it might be components of your solution. And there was this third party that was decided on, and it brought a lot of challenges.

[00:21:21] Jerrik Arango: There were also a lot of man-hours getting that third party integrated, so there was still work there. And in the end, again, it was that weakest link in our chain, and it kept on failing. Ultimately that project unfortunately didn't work out, and the main reason was that choice of third party.

[00:21:47] Jerrik Arango: Now, there might be another third party that would've worked better, and we could have gone to market a lot quicker. Or, right, there's the consideration: should we have gone custom?

 

[00:21:58] Matt Paige: Yeah, no, that's the biggest thing I'm taking away from this episode: this concept of the chain, and where your weakest link is, and how to think about that.

[00:22:07] Matt Paige: That's a great way to think about it.

[00:22:11] Jerrik Arango: Yeah, and we could get into GDPR and all this data privacy that came up, right? That's something else you can consider. This third party might have some of your data. How do you control that?

[00:22:27] Jerrik Arango: Do you own that data? Is the third party owning it? Is there a competitive disadvantage or advantage to that? So there are a lot of things you want to consider when you're gonna buy something: that control, that ownership of the different aspects.

[00:22:47] Matt Paige: Yeah, maybe we diverge a bit there, but that's a great point. You have all the generative AI stuff, the ChatGPT, and you're putting your data out there. Not that it's gonna be publicly available to the world, but you're giving it to somebody else, and we've heard of the Samsung breach and all of these things.

[00:23:10] Matt Paige: And that may be an interesting kind of build-versus-buy decision in the future: okay, do we take an open-source Llama or a large language model like that and try to build our own thing, versus going with an off-the-shelf ChatGPT for the use case we're dealing with?

[00:23:28] Matt Paige: So it's a whole other level when this privacy data comes into play.

[00:23:34] Jerrik Arango: And like we were talking about with expertise, right: you might not have the talent or the skillset for thinking about security. And so, buying something off the shelf, you'll be able to bring in that maturity, where

[00:23:51] Jerrik Arango: security has been taken care of, has been thought of; it's a reputable third party or piece of software that you're gonna use. But at the same time, like you mentioned with Samsung, there's a whole bunch of other examples we could name where the company came out saying they had a breach, but when you dig deeper, it ended up being because of a third party they were using. Which a lot of times is a bummer: it wasn't necessarily you, but the third party you chose had a leak. And I've seen that happen many times. You want to protect your brand, your brand identity, and this third party ends up affecting you.

 

[00:24:38] Matt Paige: That's a great consideration, and you actually hit on the next one right there, which is internal expertise.

[00:24:43] Matt Paige: And I think you hit on a great point: is this expertise, this domain knowledge, core to your people and your business, or do you even want it to be? That's a good criterion for whether you build versus buy as well. And the next one, team capacity. This is an important one. Take us into this, 'cause with team capacity,

[00:25:03] Matt Paige: the obvious thing is, do I have the people to build the thing? But there's more to it. It gets back to that opportunity cost, and what else they could be doing that's maybe more important.

[00:25:13] Jerrik Arango: Yeah. Again, it depends on the different scenarios, the different stage, right? The size of your company, the maturity.

[00:25:24] Jerrik Arango: If you have fewer resources and you've got this product, and you're stronger on the product side, there's also the opportunity to augment your team and bring in the experts to help you build custom.

[00:25:42] Jerrik Arango: So even if you don't have that expertise in-house, you can leverage a third party and get that expertise to help you build custom, where you'll still have that uniqueness, that control, to build your product. Or, if you don't have the assets or the time, then you can think of buying off the shelf, where

[00:26:11] Jerrik Arango: that expertise already comes out of the box. You'll have support. Sometimes you have to pay a fee for that support, but that's something you could consider.

 

[00:26:24] Matt Paige: And you're teeing up the next one right after that: support structure. That is something that can be a benefit of the buy at times,

[00:26:31] Matt Paige: 'cause sometimes they have the operational support behind it. And it is a consideration when you're building, whether it's working with a partner like HatchWorks: do they have plans and things so they can actually support you after you've built the solution, right? But what about support structure?

[00:26:50] Matt Paige: Anything else there to consider?

[00:26:53] Jerrik Arango: Yeah. When you go custom and you build this, you own it, right? And you have to keep the lights on. There's infrastructure, there's ongoing maintenance, and you're gonna have that cost. Now, if you buy something off the shelf, they normally want to make a profit, they want to expand, they want to grow their account. And you can also get stuck in this rabbit hole where the price of these services might be increasing, and there are these additional plugins or widgets that you might need, that you want to add on.

[00:27:30] Jerrik Arango: And without noticing, you'll quickly ramp up the cost of having that off-the-shelf product.

 

[00:27:41] Matt Paige: The almighty upsell, right? So that's a really important thing. If you are talking about a buy solution, what does that truly look like? It's not just the initial purchase of the product; it's implementation cost, it's maintenance, it's all of that.

[00:27:55] Matt Paige: Something to consider as you look at buy versus build, 'cause a lot of times the buy looks more attractive on paper, and it's not always the case.
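One way to keep that comparison honest is a multi-year total cost of ownership estimate that includes the maintenance and upsells both speakers mention. A back-of-the-envelope sketch, with entirely made-up figures:

```python
YEARS = 5

# Build: larger upfront spend, then ongoing maintenance/infrastructure.
build_upfront = 300_000
build_annual_maintenance = 60_000

# Buy: implementation plus licensing, plus add-ons and support tiers.
buy_implementation = 40_000
buy_annual_license = 50_000
buy_annual_addons = 15_000  # plugins, widgets, upsells

build_tco = build_upfront + build_annual_maintenance * YEARS
buy_tco = buy_implementation + (buy_annual_license + buy_annual_addons) * YEARS

print(f"Build {YEARS}-year TCO: ${build_tco:,}")  # $600,000
print(f"Buy   {YEARS}-year TCO: ${buy_tco:,}")    # $365,000
```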

 

[00:28:07] Jerrik Arango: And a lot of times you want to make sure that you don't corner yourself for the future. A lot of times when you're buying off the shelf, as I mentioned, it's usually a common denominator, right?

[00:28:21] Jerrik Arango: It's a CRM; it's not a CRM for, say, real estate, or a CRM for something else. It's made to be a common denominator. And going back to that uniqueness, you don't want to corner yourself where you're developing these features that make you unique,

[00:28:44] Jerrik Arango: but now you're limited by the off-the-shelf product that you've purchased. Versus, if you're going custom, in a way the sky's the limit: you can always develop it and build on it. But, right, that's a little slower to start.

 

[00:29:03] Matt Paige: Yeah, definitely. And the last one, we did a little divergent topic on this one earlier, but data and security. Anything else to talk about on this last factor?

[00:29:18] Matt Paige: I know we touched on it a minute ago, but any other points you want to hit on there with the build or the buy decision?

[00:29:25] Jerrik Arango: Yeah. Something else that comes to mind that's sensitive could be e-commerce: having that user credit card information, user information.

[00:29:38] Jerrik Arango: So how do you handle that? Do you have the expertise, do you have the support, to have that locked down? Or, again, depending on the maturity of where your product is and where you're going, you might buy off the shelf to help you handle some of those areas, right? Keeping that data secure.

[00:29:58] Jerrik Arango: Also, right, data breaches. You want to take care of your investment and your brand name. If your clients and your users lose confidence in you, you get hit a lot harder. So take that into consideration as far as how you protect yourself. You could compare off-the-shelf products and pick on quality; one might be cheaper, but

[00:30:33] Jerrik Arango: are you less secure there? Or is it better for you to take that in-house and really hold onto it, really take care of it? That'll be up to you. It depends on what area of your application or your product you're using this for.

 

[00:30:49] Matt Paige: Yeah, no, that's a great point. And I think the one thing too: the build-versus-buy decision is not

[00:30:57] Matt Paige: always as easy as 'we build everything' or 'we buy everything.' To your point earlier, that's what's so great about having somebody like a Jerrik who can navigate these waters. 'Cause maybe there's an e-comm component related to purchasing that you want to buy and integrate with a custom solution.

[00:31:16] Matt Paige: So there are different components and pieces, and it's many times a lot more nuanced than just 'we buy everything' or 'we build everything.' And I think that's what's interesting with our Built Right method that we use: taking existing frameworks, off-the-shelf libraries, things like that, allows us

[00:31:36] Matt Paige: to make a custom solution a lot faster and a lot cheaper as well. There's all kinds of things out there. Any parting words on build versus buy, or on low-code, no-code, and all the other components that have come into play to make building custom easier, more uniform, more cost-effective?

[00:32:02] Jerrik Arango: Oh, I think I'm on mute.

 

[00:32:06] Matt Paige: We're missing the last dissertation, all the good knowledge!

[00:32:12] Jerrik Arango: Building up the suspense. Yeah, you were mentioning plugins or modules: we can use different components, smaller third-party or off-the-shelf pieces. For example, a calendar.

[00:32:25] Jerrik Arango: I could grab this calendar widget and not have to recreate it, and use it within the bigger picture of your unique product, where the calendar's not really going to affect that. The thing I'll end with is: you really want to watch out that you don't corner yourself. Sometimes, just like that calendar, you can have something off the shelf that really helps speed up your delivery; it could even help improve your quality. But you don't want to corner yourself in the future. Does your business depend on this? In that example I gave earlier on, there are several third parties that do this, but if your business, your secret sauce, depends on this, you want to

[00:33:12] Jerrik Arango: make sure it doesn't go away. What happens if that third party goes out of business, or they close down, or they stop supporting it, right? What do you do now? So you don't want to corner yourself, and you also want that freedom to keep on developing and improving your product, and you want to remain relevant. We've seen so

[00:33:30] Jerrik Arango: many industries change drastically, and you need to be able to pivot, to be agile. So taking that into consideration is, I would definitely say, worth it.

[00:33:42] Matt Paige: Yeah. No, I think that's a great place to end: great parting words on the almighty build-versus-buy decision.

[00:33:49] Matt Paige: But thanks for joining us today on the Built Right podcast, Jerrik. We really enjoyed it. Thank you.

 

[00:33:55] Jerrik Arango: Yeah, thanks for having me.

The post Off the Shelf or Custom… That Is the Question appeared first on HatchWorks.

]]>
How Generative AI Will Impact the Developer Shortage with Trent Cotton https://hatchworks.com/built-right/generative-ai-impact-developer-shortage/ Tue, 22 Aug 2023 10:00:36 +0000 https://hatchworks.com/?p=29722 Could generative AI help recruiters fill the gaps in the talent market?   The developer community is facing a shortage of skilled workers, and the needs of the tech industry are growing. To stay ahead of the curve and remain competitive, companies must hire the best of the best. But with a shortage of talent, recruiters […]

The post How Generative AI Will Impact the Developer Shortage with Trent Cotton appeared first on HatchWorks.

]]>

Could generative AI help recruiters fill the gaps in the talent market?  

The developer community is facing a shortage of skilled workers, and the needs of the tech industry are growing. To stay ahead of the curve and remain competitive, companies must hire the best of the best. But with a shortage of talent, recruiters face a tough challenge.   

To share some perspectives on recruitment difficulties, Trent Cotton, our VP of Talent & Culture, joins this episode of the Built Right podcast. Trent explains why we’re facing such a talent shortage, what that means for businesses, and why broken HR processes are holding many companies back.  

Trent explores the growing usage of generative AI in the HR space and how that could help to patch up some of the gaps in the talent market.  

Tune in to the discussion below or keep reading for the top takeaways.  

What the talent shortage means for businesses

A report from Korn Ferry found that by 2030, there could be more than 85 million jobs left unfilled because there aren’t enough skilled people. That talent shortage could result in approximately $8.5 trillion in unrealized annual revenues.  
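
As a rough back-of-the-envelope check on those figures, $8.5 trillion spread across 85 million unfilled roles works out to about $100,000 in unrealized revenue per vacant job, per year.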

The talent shortage in tech, especially in the developer space, is more than simply frustrating. It directly results in potential lost revenue. If you don’t have top talent to bring projects to life, this can dampen business growth and make you less competitive.  

Why we’re seeing a shortage of talent 

While the skills gap has often been an issue in industries such as tech, it was intensified during COVID. Many from the baby boomer generation were forced to retire and haven’t re-entered the workforce, and younger generations haven’t been trained to fill those gaps.  

Another reason companies are struggling to fill roles is because the average person changes jobs every three to four years. But tech professionals are doing this 20% more than in other fields.  

To add to this, Trent believes that most recruiting processes are “utterly broken.” It’s hard enough to get the talent, but you’ve also got to worry about regular retraining, and then because the recruitment processes are so long-winded, it takes a long time to fill a role.  

That’s why we’re seeing more HR and recruitment professionals turn to AI to help improve some of their recruitment processes.

The four problems with recruiting 

1. Everything’s a priority, which means nothing is

Recruiting groups are limited by time, focus, and energy. When priorities are constantly shifting, it’s hard to make progress in the areas that matter most.

2. No opportunity for recruiters to identify obstacles 

Another dysfunction in recruitment teams is that there’s little space for recruiters to stop and think about what’s working and what’s not. This is essential so that you can find ways to scale what’s working well. 

3. Lack of alignment 

There’s often a lack of alignment between recruiters, hiring managers, leaders and candidates, which can often create a lot of conflict in the process. 

4. The feedback loop is broken 

It can sometimes take weeks for candidates to receive feedback – which just leaves them hanging.  

The four principles of sprint recruiting 

To combat these issues, Trent uses the sprint/agile recruiting method, which follows four principles: 

1. Address issues in two weeks and prioritize the most important roles to fill. 

2. The business defines the priority of which jobs need to be filled first. 

3. Work-in-progress (WIP) limits reduce the number of candidates at each stage of the process, enhancing the candidate experience.

4. 48-hour feedback mandate for the candidate and recruiters. 

Following these principles ensures that everything moves fast, everyone’s informed, and the recruitment workload stays manageable.
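
For readers who think in code, here’s a minimal Python sketch of how those mechanics fit together. Everything in it is illustrative: the role names, point values, and the WIP cap of five are invented for the example, not HatchWorks’ actual tooling.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

WIP_LIMIT = 5                       # illustrative cap on candidates per stage
FEEDBACK_SLA = timedelta(hours=48)  # the 48-hour feedback mandate

@dataclass
class Role:
    title: str
    points: int          # priority weight assigned by the business
    in_process: int = 0  # candidates currently in this role's pipeline

def plan_sprint(roles: list[Role], capacity: int) -> list[Role]:
    """Pick the highest-point roles that fit the team's two-week capacity."""
    return sorted(roles, key=lambda r: r.points, reverse=True)[:capacity]

def can_add_candidate(role: Role) -> bool:
    """Enforce the WIP limit so no stage of the process gets overloaded."""
    return role.in_process < WIP_LIMIT

def feedback_overdue(submitted_at: datetime, now: datetime) -> bool:
    """Flag a candidate whose 48-hour feedback window has lapsed."""
    return now - submitted_at > FEEDBACK_SLA

roles = [
    Role("Full-stack developer", points=80),
    Role("Data engineer", points=60),
    Role("QA analyst", points=25),
]
for role in plan_sprint(roles, capacity=2):
    print(f"This sprint: {role.title} ({role.points} pts)")

print(can_add_candidate(roles[0]))  # True: 0 candidates in process < WIP_LIMIT
submitted = datetime(2023, 8, 1, 9, 0)
print(feedback_overdue(submitted, datetime(2023, 8, 4, 9, 0)))  # True: 72h > 48h
```

The point of the sketch is the shape of the process, not the numbers: priorities come in from the business as points, the sprint only admits what the team can actually finish, and the WIP limit and feedback clock keep candidates from stalling in the funnel.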

Where generative AI could help the developer shortage 

Trent believes that the biggest impact in the job market will be on frontline workers. Anything that doesn’t necessarily need a human to do it is likely to be automated first. However, this is likely to create a surplus of unskilled workers. 

AI is also going to help streamline processes and make things more efficient – leaving companies to focus more on client engagement and retention. The same also goes for developer roles.  

If developers can leverage generative AI to replace some of the more tedious or manual tasks, they have more time to spend on upskilling, problem-solving, and more creative tasks. Any chance for developers to level up and improve their skills is going to be a huge plus for the tech industry when there’s such a shortage of skilled developers. 

What companies and industry leaders can do to protect the talent market  

1. Offer training  

With AI becoming a big focus for many tech companies, it makes sense to train people in AI. By offering training for existing and future talent, companies can remain competitive while also helping people enhance their skills.  

2. Nurture the next generation of talent 

Another way to get ahead of the curve is to start training the next generation of talent. That means starting as early as high school to help younger people get inspired and interested in the opportunities in the tech industry.  

This should encourage more people to choose it as a career path, which goes a long way in easing the talent shortage. 

3. Be open to flexible working arrangements

More and more people are working in the gig economy as freelancers. The pandemic made some people realize that they don’t want traditional employment and prefer flexibility.  

However, if your job opportunities are only focused on those who want the traditional 9-5, this could exclude talent who have a lot to offer.  

It may be time for companies to be more flexible when it comes to working arrangements to tap into this wider talent pool. Having a mix of regular employees and being open to hiring freelancers could help businesses remain competitive in the talent market.  

Hear more about the potential impact of AI on the developer shortage and tech job market in the full discussion with Trent.  

Discover how HatchWorks’ Generative-Driven Development™ can accelerate your software projects – explore our innovative approach today.

[00:00:00] Matt Paige: All right. I’m excited to be joined today by Trent Cotton, HatchWorks’ VP of Talent and Culture, and he’s the reason we’re able to attract and retain the best talent in the industry across the Americas. He’s got deep experience in talent management, organizational development, HR tech, data analytics... I gotta take a breath here.

[00:00:25] Matt Paige: HR process. And he’s even developed his own unique approach to recruiting called sprint recruiting, which frankly has just completely transformed how we recruit at HatchWorks. And oh, by the way, he’s a published author, as you can see from the two book titles behind him: Sprint Recruiting, the first one, and The FutHRist, coming out later this fall.

[00:00:44] Matt Paige: But welcome to the show, Trent. 

[00:00:47] Trent Cotton: Thank you. Thank you. It’s uh, a little humbling. That’s quite the introduction. I appreciate it. 

[00:00:52] Matt Paige: Yeah. Lots of good stuff. And I’m sure we’ll hit on some of the sprint recruiting stuff in the discussion later today. But the main topic, it’s a meaty one. It’s a hot topic right now in the industry, and it’s the tech talent shortage and how AI is gonna impact that.

[00:01:09] Matt Paige: And how do I know this is a hot topic? Because our blogs right now on our website, those are getting the most traffic. It’s the talent shortage and it’s the impact of generative AI; those are the most trafficked right now. And PS, you can check those out on the HatchWorks website. And I’d say make sure you stick around till the end.

[00:01:27] Matt Paige: We’re gonna, we’re gonna go into whether AI is actually gonna help shrink this talent shortage or make it even larger, but Trent, uh, to set the stage. Mm-hmm. Help us just kind of set the stage of what, what is the current talent shortage gap? What does the future projections look like? And kinda what’s, what’s attributing to that help, help break that down for us to kind of set the stage for today.

[00:01:49] Trent Cotton: Korn Ferry, which is a fairly large search firm, they have a fantastic analytics arm. Uh, it’s one of the ones that I go to just to try and get a good feel of what’s going on in the talent market. They estimate that by 2030, so not too far down the road, 85 million jobs will go unfilled because of the shortage. So that’s a huge number. Now, that’s worldwide. Um, you’re looking at about $8.5 billion, or excuse me, trillion with a T, in revenue loss, and 162 billion of that is just here in the United States. So I mean, we’re going against something that is borderline unprecedented…

[00:02:36] Matt Paige: For context... mm-hmm, I was gonna say, just for context, that gap in revenue, right, is because there’s initiatives companies wanna do and things they want to get done, but they just don’t have the talent to do ’em.

[00:02:48] Matt Paige: Right. Wow, that’s insane. 

[00:02:49] Trent Cotton: Yeah, it is insane. The competitive landscape now, everything is driven by tech. So if you’re not staying ahead of the curve with the latest tech, and making sure that something as simple as your website, your apps, your delivery systems... you know, think about Amazon, all of the different technology that’s involved; that’s what made them the behemoth that they are. They can’t keep the roles filled quick enough to be able to stay ahead of the curve, which is a direct translation over into revenue.

[00:03:15] Matt Paige: Hmm. That’s, that’s interesting. It’s really scary. Yeah. Yeah. And so we got this. 

[00:03:22] Trent Cotton: Go ahead. No, I was gonna say... some of the driving things. So, I mean, that’s a scary stat, so let’s kind of peel it back: you know, what’s driving this? Some of this has been hyper-intensified since COVID.

[00:03:37] Trent Cotton: So one of the first ones is the baby boomers. That was a huge generation of the workforce. A lot of them were forced to retire early in COVID and they never reentered the workforce, or they reentered at kind of a lower skill level. So that widened the gap that was already there. And then you have these up-and-coming generations that are not necessarily at that same skill level, and that’s furthering the gap.

[00:04:00] Trent Cotton: But then you throw in technology, the constant evolution of technology, and you can’t keep your workers skilled enough, fast enough, to be able to evolve as quickly as, um, AI is changing the game. I mean, just think about it: this time last year, were we talking about ChatGPT? No. I mean, I know I wasn’t. Um, we weren’t looking at the impact that generative AI is going to have.

[00:04:21] Trent Cotton: There was some talk about it in theory. Now the rubber’s hit the road, and now companies are looking at it. Just in the last six or seven months, all this evolution has gone on. So that’s just a micro vision of what’s going on in the economy, and the direct impact on a gap that’s already huge in the talent market is just going to exacerbate the issue.

[00:04:45] Trent Cotton: And then two, tech professionals: you know, the average person changes jobs about every three to four years, according to LinkedIn. And tech professionals change 20% more often. So, you know, the average is anywhere between a year and a half to two years. So not only do you have the gap, not only do you have all of this changing technology, then you gotta figure out: how do I keep these people once I get them on board?

[00:05:09] Trent Cotton: So it is... hmm. For a lot of talent people, or talent professionals, we’re fighting a battle on six or seven different fronts. So for anyone that is listening that is a CEO: uh, go and love on your talent person; they’re exhausted. You know, we get past the pandemic, and then all this other stuff starts happening.

[00:05:27] Trent Cotton: So, um, I usually know talent people ’cause all of us have circles under our eyes for the last three years. 

[00:05:34] Matt Paige: No. And you’re deep in the industry too. That’s funny. Yeah. Go give your love to some talent people. They need it right now. Yeah, but it’s interesting though. It’s not like one thing; it’s like five, six different things all contributing

[00:05:47] Matt Paige: To this talent shortage we 

[00:05:48] Trent Cotton: have right now, right? It is quite the perfect storm. Um, mm-hmm. Just because you can’t deal with just one issue. So let’s go back, yeah, four or five years. Um, you were able to diagnose one particular area and go in there and fix it, do some triage, and then move on about your business.

[00:06:08] Trent Cotton: There’s no way to do triage whenever you’ve got all of this stuff that is so interconnected and interdependent and constantly changing. So just when you think you can diagnose it... it’s very much like a virus. You know, you kind of treat the virus, and just as soon as you think that you have it nipped, the thing mutates, and now you’ve got something different.

[00:06:27] Trent Cotton: That’s the current state of the talent market. And then you add to that that most recruiting processes are utterly broken. It’s just... so you can’t get the talent, you have to worry about retaining the talent, and then it takes too damn long for the talent to get on board because of the broken recruiting process.

[00:06:45] Trent Cotton: So there’s a lot of things that companies are trying to do, um, to leverage AI to help fix some of that, uh, at least from a process and engagement standpoint. Uh, some of the analytics... you know, we use a lot of, um, HR analytics to really kind of get us some sentiment analysis of what’s going on with our, um, with our team members. Because I think the difference for us versus a lot of companies that I’ve consulted for or that I’ve worked with is, everyone talks about, yeah, retention’s a big thing.

[00:07:15] Trent Cotton: I have never worked for a company like HatchWorks. We’re obsessed. Like, we almost take it personally whenever people leave. We want to dig into: why did they leave? You know, how do we make sure that no one else in the organization feels that way? And I think that speaks a lot to why we have over 98% retention in our organization.

[00:07:33] Matt Paige: Yeah. That 98% number is insane. And I do want to get to this topic around AI, but you hit on something that’s interesting, you know. Everybody kind of sees AI as this... you know, maybe this is the solution to solve all our problems. But you mentioned the process point of it, and I think it’s worth noting, just, you know, ’cause I’ve been amazed at how much it’s helped us... but the sprint recruiting... and we’re gonna go on a full tangent on the sprint recruiting and everything there, but just hit it at a high level.

[00:08:00] Matt Paige: ’cause it’s done so much for our organization. It’s worth noting that, you know, there are basic fundamentals with process that are important to have, mm-hmm, in talent, recruiting, anything in business, but this is especially true here.

[00:08:14] Trent Cotton: Yeah. So, uh, let’s tackle the four dysfunctions of recruiting.

[00:08:18] Trent Cotton: I said that the recruiting process is broken. I’ve been in recruiting and HR for 20 years. I’ve done it through HatchWorks, and I’ve also done some consulting for our clients on their recruiting process. There are four constants. Uh, the first dysfunction is that everything’s a priority, which means nothing is a priority.

[00:08:35] Trent Cotton: Recruiting groups are limited by time, focus, and energy, and whenever you’re constantly moving the needle, or moving their cheese, they’re not able to make the progress that they need. Mm-hmm. The second is that there is no rhythm or opportunity for recruiters or recruiting leaders to stop and go, hey, what’s working and what’s not, and find ways that you can scale what’s working, and identify the obstacles and work together with the partners to overcome them.

[00:09:00] Trent Cotton: And then clients and recruiters are... um, it’s kinda like running a daycare sometimes as a talent leader, ’cause you have the hiring manager saying, this person hit me, and then you got the recruiter saying, well, this person looked at me, and there’s just this huge, uh, lack of alignment. And then the last one is the feedback loop was broken.

[00:09:19] Trent Cotton: Uh, whenever I first started this, I went through agile training, came out of it, and I said, okay, there’s got to be something that I can learn from agile and apply to recruiting. And the first one was looking at the feedback. Um, yeah. The average amount of time that it would take for us to get feedback on candidates was two to three weeks.

[00:09:37] Trent Cotton: So there’s your four dysfunctions. Mm. We balance that in sprint, or agile, recruiting with the four principles. The first one: we look at things in two weeks. So if you’ve got 400 jobs, the first part of that funnel is, okay, in the next two weeks, what’s realistic and what are the most important roles that the team needs to focus on and get?

[00:09:55] Trent Cotton: That can be to mitigate risk. That could be to drive revenue. That could be to hit certain milestones within the technology sprint. The next is the business defines the priority, so we go to them and say, okay, outta those 400, you say these 20 are the most important. You have 200 points. I want you to assign a point value to those 20, and we’re gonna work them in order.

[00:10:15] Trent Cotton: The next is that we have WIP limits, or work-in-progress limits, so we limit the number of candidates in each stage of the process, because that enhances the candidate experience. It makes sure that we do not lose great candidates. It also stops this FOMO that a lot of hiring managers have of, I wanna interview 25 people. Look, dude,

[00:10:32] Trent Cotton: mm-hmm, there’s not 25 people out there. You know, we need to go and move on these top five. And the last one is that we have a 48-hour feedback mandate. Um, we present a candidate, we want feedback within 48 hours: yes or no. And what this does is it provides almost like a drumbeat. It also provides us metrics.

[00:10:50] Trent Cotton: So I know, on average, on a 10-day sprint: day two, day seven, day eight, and day 10, that’s usually whenever our offers go out. I don’t schedule meetings with my teams; I block any kind of meeting or anything that’s going to disrupt them from focusing on getting those offers out the door. We’re also able to track how many candidates... we can almost precisely say, if you need a full-stack developer, we can get it done in 32 days.

[00:11:18] Trent Cotton: Or if you just trust our judgment and you want us to hire for you and place them on the project, we can get it done in probably about one sprint, or at least maybe 15 or 16 days. Yeah. There’s not a lot of companies out there that can do that. And we move with speed because now we’re focused so intently on what is important to the company, not just working on everything.

[00:11:37] Trent Cotton: We’ve developed these pipelines of candidates that are just sitting and waiting for us to go and pluck ’em and put ’em on our projects. So we’ve really been able to shift. I mean, kudos to our talent team. Uh, this time last year we were not in this space. Now we’re on the offense. We’re ready to go.

[00:11:52] Trent Cotton: Yeah. 

[00:11:52] Matt Paige: I mean, you hit it. It changed the way we work. And I love the comment that there is a rhythm to it. It’s like the WIP, and, you know, my wife will tell you rhythm’s important, and I’m six foot eight, left-handed, with two left feet, and I don’t have it. So it is critically important, and the team has it. You can just... and it gets them excited too, so that, mm-hmm.

[00:12:11] Matt Paige: You know, a little bit of a tangent, but it’s, It’s, it’s worth hitting on. Um, ’cause so many people, it’s our sauce. Yeah, yeah, yeah. Alright, so let’s get into this topic of how AI is gonna impact this talent shortage. And I think, you know, one thing to note, like AI’s been around for, for a super long time.

[00:12:31] Matt Paige: It’s nothing new, and, like, I encourage folks to check out our episode with, uh, Jason Schlachter. He’s got a lot of insight on the history of AI and everything there. But what’s been interesting about this latest evolution, like you mentioned with ChatGPT, these large language models, the generative aspect of it: it’s almost made it, uh, you know, it democratized it. It made it accessible to all in a lot of ways, where you don’t have to be

[00:12:59] Matt Paige: you know, in the code doing things to leverage it. Uh, but let’s get into kind of your perspective of how this is gonna impact the talent market. Uh, whether it’s, you know, does it shrink it, does it grow it? Or how does it enable people to perform better at their jobs? There’s a lot of angles

[00:13:18] Matt Paige: we could take this, but we’d love kind of your take on, you know, this AI boom that’s going on right now.

[00:13:24] Trent Cotton: Yeah, absolutely. So I think the greatest impact that it’s gonna have is on some of the frontline workers. I think there’s going to be a lot of, um, a lot of intense looking by organizations to say, okay, what can AI do that we don’t necessarily need a human to do?

[00:13:40] Trent Cotton: So that’s kind of the first major impact there. That’s not gonna create a skill gap; that’s actually gonna create a surplus of unskilled workers, which, again, is just part of that whole big trifecta that we’re dealing with. But then if you move a little bit upstream, there are gonna be jobs that are highly impacted, that are very manual in process, that AI, um, or even, you know, some of the machine learning, all of the different technology impacts, are gonna look at and ask:

[00:14:09] Trent Cotton: How can we do this in a more streamlined fashion, more efficient, with a focus on client engagement and client retention? I think that’s gonna be one of the very interesting things, because, you know, whenever you have these manual processes, you don’t have analytics on the backend. Companies now, especially since COVID, are so

[00:14:29] Trent Cotton: obsessed with, what is the data telling us? I know in HR we are. Um, what is the data telling us, and how do we make sure that we stay ahead of the curve? That’s going to be one of the things where companies go, okay, we’ve got to invest in this. So there’s going to be opportunities for a lot of workers to be able to learn some of these processes.

[00:14:48] Trent Cotton: Maybe not from a technology standpoint, mm-hmm, but how do they actually leverage AI as a partner, versus it being an us-versus-them thing? Yeah. And I think this is the part that’s gonna be really exciting for the right part of the workforce that sees this as an opportunity to level up their skills, and they go into it with an open mindset.

[00:15:08] Trent Cotton: Um, I always use the example of... um, because I get asked in HR a lot, you know, is HR gonna be replaced by AI? And the answer is no, it’s not. Yeah. Well, some of it, yes. Um, I look at AI almost like Iron Man. So Rob Stark, fantastic guy, wonderful business guy. Mm-hmm. Billionaire, sexy, charming... I mean, he’s a badass all by himself.

[00:15:31] Trent Cotton: You put him along with his AI, Jarvis, and you put the suit on. Now he’s a superhero, and if you watch, he’s taking in all this data that the AI is able to process incredibly quickly, but he’s still making the decision on whatever action he’s going to take. So I think the more that we as talent professionals and leaders in the organizations can look at our workforce and go, how do we take our Rob Starks... you know, that’s not Rob Stark; that’s Game of Thrones.

[00:16:00] Trent Cotton: Oh my gosh. Tony Stark. There we go. Tony Stark.

[00:16:02] Matt Paige: Yeah. This is, like... we’re mixing, uh, genres here. Like,

[00:16:06] Trent Cotton: yeah. So now everybody knows I’m one of those kind of nerds, and I like Game of Thrones, man. Um, but you take your... who are your Tony Starks, and how can we make them better by pairing them up with something that’s going to, um, just enhance their delivery or enhance their job skills?

[00:16:22] Trent Cotton: But then you have the whole thing with tech. So it’s really interesting. Um, yeah, I was talking to a professional not too long ago, and they were frustrated because they were trying to get some roles filled for developers, and the managers were getting so ticked because they were doing, like, an assessment.

[00:16:38] Trent Cotton: And a couple of the people were using ChatGPT to kind of help with some of the basic code, and then they were focused on, like, the more sexy stuff, and the manager was disqualifying them. And to me, that’s the antithesis. That’s what tech people should be using AI for: what’s the mundane, non-value-add, but critical and necessary part of this coding, or whatever it is? Let AI do that, so that way they can work on some of the things that are

[00:17:06] Trent Cotton: more impactful.

[00:17:07] Matt Paige: Yeah, that’s... I mean, that’s the whole point around our Built Right method at HatchWorks: how do you enable... and I love that co-pilot mindset, ’cause that’s what it is, right? It’s not gonna take over; it’s gonna make, uh, the folks leveraging it better. Mm-hmm. I think one interesting point, though, that I’ve heard from some: it’s like, you know, you can’t over-rely on it, to the extent that if you have a bad developer and you give them AI, that could actually make a more tangled mess.

[00:17:35] Matt Paige: Mm-hmm. Versus you have skilled developers leveraging it. I love that idea that it gets rid of the mundane; that’s, like, the first go at this. Um, but it’s like the co-pilot example. You know, I think Tesla has this, where if you’re in co-pilot mode, it forces you to touch the steering wheel every so often.

[00:17:52] Matt Paige: So you don’t just go completely, you know... yeah, mindless. And same thing with flying a plane; the same kind of, uh, mm-hmm, you know, analogy could be applied to technology.

[00:18:02] Trent Cotton: But I do think that this is going to force companies to... and we’ve been looking at this since COVID... how do you, mm-hmm, reskill, upskill, and redeploy your workforce?

[00:18:14] Trent Cotton: Yeah. I think now, with some of the intensity that’s coming, driven by artificial intelligence, that’s really gonna kind of come to the forefront. I know in our organization, we’re talking about it. We, um... yeah, we give all of our employees a $2,000-a-year training budget to use to get certifications or, you know, whatever they want to learn to enable them to be even more productive in an area of interest.

[00:18:39] Trent Cotton: And so, you know, we’re looking at what are some AI courses, what are some AI certifications that we can offer to our employees to make sure that they’re staying ahead of the game. Um, so not just to benefit them, but also to benefit our clients. You know, we want to be that trusted partner, not just someone that you come to and say, hey, I need three or four software developers in the DevOps space.

[00:19:02] Trent Cotton: You know, we wanna be able to come in and add a level of knowledge and acumen that is unparalleled by anybody else in the market. Yeah.

[00:19:10] Matt Paige: And I think, too, the other interesting point, and you hit on it: it’s like so many folks are looking externally for this talent when you have this, like, workforce sitting there.

[00:19:20] Matt Paige: Yeah. That if you give some, you know, love and care to, in terms of upskilling them, you can help evolve them. So that’s a great point and a big piece that’s missed a lot, I think, with a lot of organizations.

[00:19:33] Trent Cotton: Yeah. And there’s actually, uh, a lot of government funding that’s going into boot camps.

[00:19:38] Trent Cotton: Mm-hmm. They’re looking at, you know, some of these low to moderate income.

[00:19:49] Trent Cotton: So there’s boot camps out there that will teach you, uh, some of the basics of coding, software development, AI, and some of this. So a lot of companies are actually being forced to shift the educational requirements and look more at non-traditional approaches. So it’s had a very far-reaching and very deep impact on the talent strategy for most companies in the US.

[00:20:13] Matt Paige: Yeah, no, that’s a great point. So the next thing I wanna get into... um, you know, what’s your take here? Is AI, this new evolution with AI, going to close this talent gap, or does it make it wider? Like, maybe, what’s your take there? Or maybe there’s alternate, you know, parallel universes where it could be the case on either side.

[00:20:36] Trent Cotton: Yeah, it’s probably more the parallel. Yeah. It’s definitely, um, you know, to reference another movie, kind of The Matrix: which pill are we gonna take here? Uh, and it’s a lot of what we’ve been talking about. Do we use this as an opportunity to re-skill some of those that may be replaced by AI, or whose jobs change as a result of implementing some type of AI practices?

[00:20:58] Trent Cotton: If we do, then I think that’s gonna shorten the gap, um, and be able to tap into a huge force. And it’s actually gonna help break some of the poverty cycles, because a lot of these frontline workers, you know, they just kind of stay in that mode. If we’re able to go in and take them and give them a skill that’s actually applicable in the new market, I think that’s gonna help us economically.

[00:21:20] Trent Cotton: But it’s also gonna help from a talent-gap standpoint. If we do not, that gap is going to continue to exponentially, um, just grow, and it’s terrifying, uh, looking at, I mean, 85 million jobs by 2030. That’s mind-boggling, staggering. Um, I mean, that’s more jobs than were added in... I can’t even think of how many years that we’re just going to lose in the blip of, mm-hmm,

[00:21:49] Trent Cotton: You know, a decade or two decades. Yeah. That’s, 

[00:21:52] Matt Paige: that’s crazy. I wanna get your take here. So there’s Andreessen Horowitz, you know; he wrote the seminal article on why software’s eating the world, and he has this new one out, Why AI Will Save the World. And just to call out a couple points: he has these different AI risks he goes through.

[00:22:10] Matt Paige: Mm-hmm. And I encourage anybody to check this out; super interesting read here. But his point number three is: will AI take all our jobs? And his perspective is, you know, every new major technology has led to more jobs, higher wages throughout history, uh, with each wave accompanied by a panic of, you know, this time is different, it’s gonna steal all our jobs and everything.

[00:22:34] Matt Paige: Uh, and he gets into this point that, you know, that doesn’t necessarily happen. Uh, and then he goes on to call out: you know, if we’re allowed to develop and proliferate this new kind of evolution with AI throughout the economy, it may actually cause the most dramatic and sustained economic boom of all time.

[00:22:54] Matt Paige: With corresponding, like, record job and wage growth. Um, but it’s an interesting point. And the last thing I’ll hit on, and let’s chat there, I’m curious to get your take: he talks about this lump-of-labor fallacy, which is the notion that we have this fixed amount of labor to be done. It’s like the supply and demand side.

[00:23:12] Matt Paige: Mm-hmm. Um, and either humans are gonna do it or machines are gonna do it, but the fallacy comes into play... He goes on in the article to state that, uh, when you have AI and things like that making it cheaper to do things, it increases the purchasing power of people. People have new demands and wishes and all these things, which creates all new types of, uh, businesses and verticals and industries and all these things.

[00:23:41] Matt Paige: And, like, one point he mentions from Milton Friedman: human wants and needs are endless. Like, it’s just kind of an interesting point, but what’s your take there on Andreessen Horowitz’s, you know, kind of perspective? Uh, I think it’s a unique one. Um, what’s your perspective there?

[00:24:00] Trent Cotton: Um, so I’m gonna get to an answer. Um, so I’m gonna equate it to something like the economic downturn. So let’s go back and look at 2008 through, let’s say, 2011, okay? Mm-hmm. Banks failed. Massive, massive recession, setback. People are outta jobs. Mm-hmm. Darkest of times, you would think. But look at what came out of that. Prior to 2008, 2010,

[00:24:27] Trent Cotton: did we know that we needed an Uber? Did we know that we could have a social network where, you know, people could go and actually create their online business and be an influencer and make money from that? Mm-hmm. So I agree with what he’s saying, that this new technology will spawn an economy, or facets of the economy, that we don’t know that we need yet, because the need has not been created.

[00:24:54] Trent Cotton: So I do agree with that. From a talent perspective, it’s gonna be really interesting to see. Um, that sounds really, really exciting: new economic avenues, new job opportunities, new job growth. But I’m a little concerned that we can’t fill the damn jobs that we have now. How are we gonna fill some of these new ones out there?

[00:25:13] Trent Cotton: So, yeah. Does it make it worse? Right. Right, right. So there’s, like, the personal side of me that gets really excited, going, oh, I wonder what’s next. And then the talent part of me kicks in and goes, oh crap, you know, here comes... mm-hmm... here comes another level of complexity. But I do think that this is an opportunity for a lot of organizations.

[00:25:32] Trent Cotton: Uh, we do this to a degree, of going in and trying to get ahead of the curve. So looking at how do we train up and get high schoolers (we’ll just start with high school), how do we get them involved in some of the tech, um, jobs and the tech opportunities that are out there? Because a lot of... I know I did. I thought, tech is fun.

[00:25:56] Trent Cotton: I like it as a consumer, but I don’t necessarily wanna sit there and code all day. Well, there’s other things in the technology sector besides just sitting down and coding. Uh, but there may be a kid out there that that’s how their brain works and they love that kinda stuff, but they don’t know that that’s actually an avenue.

[00:26:11] Trent Cotton: Mm-hmm. So, um, we have a philanthropic arm called Hatch Futures, where we actually go in and do that. So, anyone in the United States who’s familiar with Junior Achievement: it’s very similar to, uh, Junior Achievement, but we do it through STEM. So we teach them the basics of an algorithm using a

[00:26:27] Trent Cotton: couple of pictures, saying, hey, put this in order. Guess what? You just wrote logic. That’s what an algorithm is, and it’s just an opportunity for us to be able to get them excited. So I think the more companies that go in and start doing that, it’s going to help. Not in the immediate, but it’s gonna help us in the next five to 10 years as those

[00:26:44] Trent Cotton: high schoolers come out and they’re on the cutting edge of some of those technology programs. That’s one avenue. The other avenue gets back to: how are we gonna reskill and redeploy our current workforce? And will we have the interest, will we have the, um, commitment to some of our current employees to make sure that they stay abreast of the new technology?

[00:27:08] Trent Cotton: So when those new opportunities do come up, we’re, we’re ready to meet them and we’re ready to push them into it. 

[00:27:14] Matt Paige: Yeah. You triggered one thought in my head, too, that’s interesting. We kind of hit on it earlier, you know: this, uh, evolution of generative AI, it’s democratizing AI in a lot of ways.

[00:27:24] Matt Paige: Mm-hmm. But a lot of folks, especially younger kids coming up, you know, they think of coding as, like... lemme see if I get the sides of the brain right... it’s more like right-brained, like analytical thinking, all that. Mm-hmm. And it’s like, oh, I’m creative, that’s not for me. But what it does is, like, the actual physical coding

[00:27:41] Matt Paige: becomes maybe less important, and there’s other avenues you can leverage from a creative standpoint, and I think that’s a huge unlock. Whether it’s, you know, with ChatGPT, or you have, like, Midjourney, and people are creating whole movies with AI, right? Generative AI. And it’s like this whole new world, in a sense, for, like, the creative folks out there. I think it can be really interesting to see how that evolves over time.

[00:28:08] Trent Cotton: It is. And with AI, there’s... I think, um, it was probably a couple of months ago that one of the big articles on LinkedIn was, you know, a company was paying over $300,000 for an, um, ChatGPT prompt engineer. Like, how do you structure the logic to get AI to do what you want it to do? It’s crazy.

[00:28:28] Trent Cotton: So that’s not necessarily coding, but I mean, that is an avenue and you do have to understand the logic behind it. So, I think that there are going to be opportunities that open up that are not the more traditional, as we think of today, um, technical routes. Mm-hmm. And how are we going to educate the youth currently and how are we going to reeducate our workforce to be able to meet those, those demands.

[00:28:51] Trent Cotton: That, to me, is probably public enemy number one.

[00:28:56] Matt Paige: Yeah. Do you think this whole evolution has an impact on the proportion of folks that prefer, like, gig-type work versus, like, you know, being gainfully employed by a single employer? Do you think that impacts that in any way, that kind of trend?

[00:29:10] Trent Cotton: It does. If you look at, um... it’s called the workforce participation rate, so it measures, mm-hmm, I think it’s from 24 to... don’t hold me to it, I think it’s 62 or 48. Sorry. It’s looking at what percentage of the population is actually working. Yeah, it has flatlined. In the eighties, it was at 80%; you know, it dropped down to 70%.

[00:29:37] Trent Cotton: We have been hovering in and around 60 to 62% over the last three or four years since Covid, because what happened with Covid is that it wasn’t just a recession where just the financial industry or the car industry was impacted. Covid was unilateral in its, um, decimation of jobs. And so a lot of people move to the gig workforce because they don’t, they don’t want to have to depend on their livelihood coming from someone else.

[00:30:02] Trent Cotton: Yeah. This is hyper-intensive in the technology space. There are people that just enjoy working on a project, and they wanna do it for themselves. Uh, they’re freelancers. They don’t wanna necessarily clock in or have to go to meetings or anything else like that. They enjoy that freedom of just doing the work that they’re passionate about and then clocking out and enjoying life.

[00:30:23] Trent Cotton: We’re starting to see a lot more of that, behavior-wise, mindset-wise. You know, it’s something that we look at internally: you know, we’ve got people that are highly engaged and really wanna be on that employee side and all the training, and then we’ve got others that just... all they wanna do is do their work and call it a day.

[00:30:39] Trent Cotton: Yeah. So, you know, we’ve had to learn to be very flexible and agile, to be able to accommodate both types of mindsets, so that way we can retain the top talent in the market. If a company goes in and says it’s this way or no way, you are probably going to have more of a talent gap, or a talent shortage, yeah,

[00:30:59] Trent Cotton: than your competitor who’s willing to say, you know what, if you just wanna be a contractor, that’s fantastic. Just get the work done, you know, and go and live your life the way that you want to. So it’s another aspect that’s a result, uh, that was intensified with COVID. Um, 24% of the male market left

[00:31:20] Trent Cotton: from 2020 to current, and economists cannot figure out where they went. Now, interestingly enough, if you look at the timeline and you look at, uh, the average number of hours of gameplay, yeah, it’s almost proportional from when they left. Hours, that’s playing video games. I think that’s the link. Yeah, they’re playing video games, but I think it’s more because they’re doing gig work, and they can go and, you know, enjoy games and work whenever they want to.

[00:31:45] Trent Cotton: So there’s some benefits on both sides. But companies have got to learn to be a little less dictatorial and a lot more flexible and agile if they want to survive.

[00:31:53] Matt Paige: Yeah. The old “don’t cut off your nose to spite your face.” Oh, yeah. Uh, yeah. So, a couple rapid-fire questions for you. Okay. What AI technology is most exciting to you right now, whether it’s a tool or anything

[00:32:07] Matt Paige: In 

[00:32:07] Trent Cotton: particular... uh, um, for me, it is the impact on HR analytics. So sentiment analysis, forecasting, um, looking at... ’cause for the longest time, you could look at what was going on internally, but then you would have to pull in data from external sources, and it was a very manual process. Yeah. Um, now you’ve got these tools that can just go and scrape the information.

[00:32:28] Trent Cotton: Say: here’s your retention, male, female, age groups, against what’s going on in the industry, quicker than it would take for me to actually go and find the information two or three years ago. So the impact on HR analytics, on talent analytics, is probably one of the things that I’m just... I’m over there like a kid at Christmas waiting to open up a gift.

[00:32:49] Trent Cotton: Yeah. I’m, I’m ready for it. 

[00:32:51] Matt Paige: Oh, there’s so many tools coming out. It’s so cool to watch. Oh, yeah. Who are you watching in the HR, you know, talent or tech-talent industry? Who are you following that you really find insightful or interesting?

[00:33:03] Trent Cotton: You know, it’s, um... I have a love-hate relationship with applicant tracking systems.

[00:33:08] Trent Cotton: Most of them are built for HR processes, not actually built for finding and engaging and nurturing talent. Um, there has been one that I’ve looked at not too long ago, Luxo, which has got all the AI and machine learning power for sourcing across multiple platforms. It’s got the nurturing and everything... again, that’s driven by AI and nudges and all that, from a

[00:33:33] Trent Cotton: candidate relationship management standpoint, and then it’s got all the cool little backend stuff with all the analytics. So to me, it’s just interesting to kinda watch some of these thought leaders take these thoughts and actually become advisors, uh, for some of the HR tech companies, and have an immediate and direct influence on it.

[00:33:50] Trent Cotton: So I think some of the big boys that have enjoyed all the limelight in the talent market, like LinkedIn, mm-hmm, uh, that has gotten so much money invested in it, and I don’t really recognize a change over the last 10 years. You know, there’s a lot of people that are coming for them, so I’m here for the show.

[00:34:07] Trent Cotton: I’m kinda like that Michael Jackson, just popping popcorn, going, okay, which one’s gonna win?

[00:34:12] Matt Paige: That’s right. Uh, all right, last one for you. Uh, what’s one thing you wish you could go back and tell your former self before you started your career? Any one piece of advice you would

[00:34:23] Trent Cotton: give yourself?

[00:34:24] Trent Cotton: Uh, trust the journey. There are so many... So I started out as a banker. I put myself through college as a banker. Uh, got into HR because I hated working with HR professionals. And there are so many curves that I took, um, yeah, that did not make sense at the time, but now, whenever I look at it, it makes complete sense.

[00:34:45] Trent Cotton: Makes sense. Yeah. The banking, the analytics, the minoring in statistics: that comes in handy in HR now, because I can look at data and see what is the story that’s being told. So it’s just, kind of, trust the journey.

[00:34:58] Matt Paige: Yeah. Trust the journey. I love that. Alright, Trent, thanks for joining us. Uh, before we go, where can people find you?

[00:35:05] Matt Paige: What’s the best spot to go find Trent and all the great insight you have? 

[00:35:09] Trent Cotton: Um, linkedin.com, Trent Cotton. You can follow me on Twitter, uh, at TrentCotton, all one word. Uh, you can follow me on sprintrecruiting.com, or futhrist, that’s F-U-T-H-R-I-S-T, dot com. And of course you can follow, uh, all the blogs and posts that we do on HatchWorks.

[00:35:28] Matt Paige: Yeah. And find Sprint Recruiting out on Amazon, I’m assuming, or I guess on your website directly, so you don’t have to pay the fees to

[00:35:34] Trent Cotton: Amazon. Yeah, it is on both, but, uh, yeah, you can go and get it on Amazon, and it’s on Kindle, and then The FutHRist comes out in the fall.

[00:35:42] Matt Paige: Yep. Great. Trent, I appreciate you joining us.

[00:35:44] Trent Cotton: Thank you. I enjoyed it.

The post How Generative AI Will Impact the Developer Shortage with Trent Cotton appeared first on HatchWorks.

]]>
Talent Unleashed: Trent Cotton’s Epic Sprint Recruiting & Agile Recruitment Magic! https://hatchworks.com/podcast/sprint-recruiting/ Tue, 15 Aug 2023 19:25:08 +0000 https://hatchworks.com/?p=29709 Don’t miss the latest episode of the Recruit CRM Podcast, “Talent Unleashed: Trent Cotton’s Epic Sprint Recruiting & Agile Recruitment Magic!” In this riveting discussion, Trent Cotton, the VP of Talent and Culture at Hatchworks, delves into the groundbreaking influence of Sprint Recruiting in modern talent acquisition. Discover how this Agile approach is making waves […]

The post Talent Unleashed: Trent Cotton’s Epic Sprint Recruiting & Agile Recruitment Magic! appeared first on HatchWorks.

]]>
Don’t miss the latest episode of the Recruit CRM Podcast, “Talent Unleashed: Trent Cotton’s Epic Sprint Recruiting & Agile Recruitment Magic!” In this riveting discussion, Trent Cotton, the VP of Talent and Culture at Hatchworks, delves into the groundbreaking influence of Sprint Recruiting in modern talent acquisition. Discover how this Agile approach is making waves in the recruitment sector and the hidden techniques driving its success.

Key Takeaways:

  • Prioritize the Candidate Experience: Ensuring a positive journey from initial outreach to onboarding is crucial. This not only shapes the candidates’ perception but also affects retention and boosts employee referrals.
  • Harness Data for Process Enhancement: Using data analytics is essential to spot and rectify issues in the recruitment process, such as delays in feedback. This proactive approach can greatly improve recruitment strategies.
  • Embrace Structured Recruiting: Adopting methodologies like sprint recruiting introduces robust metrics into the process, making hiring more efficient and enhancing the candidate experience.
  • Engage Deeply with Hiring Managers: Open-ended questions, such as “describe your ideal candidate,” can provide invaluable insights into the specific needs and expectations of hiring managers.
  • Champion Empathy and Diversity: Leading with empathy and transparency is key. Recognizing the importance of inclusivity and actively seeking diverse perspectives ensures a rich and varied talent pool.

The post Talent Unleashed: Trent Cotton’s Epic Sprint Recruiting & Agile Recruitment Magic! appeared first on HatchWorks.

]]>
5 Ways Generative AI Will Change the Way You Think About UX and UI Design https://hatchworks.com/built-right/generative-ai-ux-and-ui-design/ Tue, 08 Aug 2023 10:00:03 +0000 https://hatchworks.com/?p=29698 Generative AI has taken the world by storm. In a short space of time, it has changed the way many people do their work. For UX and UI designers, it has the potential to change the entire design process – but how?  In this episode of Built Right, host Matt Paige sits down with HatchWorks’ […]

The post 5 Ways Generative AI Will Change the Way You Think About UX and UI Design appeared first on HatchWorks.

]]>

Generative AI has taken the world by storm. In a short space of time, it has changed the way many people do their work. For UX and UI designers, it has the potential to change the entire design process – but how? 

In this episode of Built Right, host Matt Paige sits down with HatchWorks’ Andy Silvestri, Director of Product Design, to break down the five main ways generative AI will change the way we think about UX and UI design. 

Keep reading for the top takeaways or tune in to the episode below.  

We’ve identified five main ways generative AI could impact the design process – all the way from those early stages of design to the final product.  

 

1. A shift from an imperative to a declarative design

One way that UX and UI design have changed is that there has been a shift from an imperative, or point-and-click style of design, to a more declarative approach.  

Now, you can declare what you want from a tool, and it will work toward a solution. This allows you to work with what Andy calls a “design copilot,” which could result in machines reading a design brief or refining a narrative. A declarative approach is essentially a dialogue you have with a machine, which has the potential to be a game-changer in the design space, especially for smaller teams with limited resources.
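
If it helps to see the contrast in code, here’s a small Python sketch of the two mindsets. Both classes are hypothetical stand-ins invented for this illustration; no real design tool exposes this exact API.

```python
from dataclasses import dataclass, field

@dataclass
class Canvas:
    """Imperative model: the designer issues explicit, step-by-step commands."""
    ops: list = field(default_factory=list)

    def add_rectangle(self, x: int, y: int, w: int, h: int) -> None:
        self.ops.append(("rect", x, y, w, h))

    def set_fill(self, color: str) -> None:
        self.ops.append(("fill", color))

@dataclass
class DesignCopilot:
    """Declarative model: the designer states intent and refines it in dialogue."""
    history: list = field(default_factory=list)

    def generate(self, brief: str) -> str:
        self.history.append(brief)
        return f"<draft from brief: {brief!r}>"

    def refine(self, draft: str, note: str) -> str:
        self.history.append(note)
        return f"<{draft} refined with: {note!r}>"

# Imperative: say how, one command at a time.
canvas = Canvas()
canvas.add_rectangle(0, 0, 1200, 80)
canvas.set_fill("#1A1A2E")

# Declarative: say what, then iterate through a running dialogue.
copilot = DesignCopilot()
draft = copilot.generate("Landing page hero for a budgeting app: calm palette, one clear CTA")
draft = copilot.refine(draft, "Increase contrast on the CTA and left-align the headline")
print(draft)
```

The imperative path spells out every shape and color; the declarative path hands the tool a brief and then negotiates the result, which is exactly the dialogue-style workflow described above.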

 

2. Getting to proof of concept quicker 

Another way generative AI could change the way we design UX is by allowing design teams to reach a proof of concept much faster. In the ideation phase, design teams will typically grind out multiple iterations of flows, wireframes, and other elements to support a proof of concept.  

But AI tools could shave time off for designers. They could also allow them to explore different ideas in those early phases without needing to spend a large amount of time drawing them up.  

 

3. Exercise caution while using generative AI 

While generative AI can save time and refine the design process, it’s still important to be cautious while using it. Using AI tools will still require designers to check the quality and assess for bias, inaccuracies, or even copyright infringement. 

That’s why human designers aren’t going to be fully replaced by AI anytime soon. We still need human eyes to check for these things. Common issues with generative AI in the design process include shadows, angles, depth of field, and proportions looking “off.” But it will take a human to spot them. 

 

4. Impact on creation and utilization of design systems 

Generative AI can also help in the later stages of the design process by speeding up workflows and helping to finalize designs. For example, you could take the design system and ask it to decrease all color gradations by 10% – which then takes some of the more manual work out of tweaking designs.  

By making the design process more efficient from those early ideation stages to finalizing the designs, designers are freed up to spend more of their time on other elements of the process.
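
As a rough illustration of that kind of bulk adjustment, here’s a short Python sketch that darkens every color in a set of design tokens by 10%. The token names and values are invented for the example; in practice, a design system lives in a tool like Figma rather than a plain dictionary.

```python
def darken_hex(hex_color: str, pct: float = 0.10) -> str:
    """Scale each RGB channel of a #RRGGBB color down by pct (10% by default)."""
    h = hex_color.lstrip("#")
    r, g, b = (int(h[i:i + 2], 16) for i in (0, 2, 4))

    def scale(channel: int) -> int:
        return max(0, round(channel * (1 - pct)))

    return "#{:02X}{:02X}{:02X}".format(scale(r), scale(g), scale(b))

# Hypothetical design tokens, standing in for a real design system's palette.
tokens = {
    "color.primary": "#1A73E8",
    "color.surface": "#F5F5F5",
    "color.accent":  "#FF6D00",
}

darkened = {name: darken_hex(value) for name, value in tokens.items()}
print(darkened)  # every color nudged 10% darker in one pass
```

The manual version of this change means opening every style in the design file; the generative version is one instruction applied uniformly across the whole system.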

 

5. Keeping users engaged 

Andy believes generative AI will give practitioners a chance to be more diligent in the process. Designers can spend more time thinking about the thing that’s most important in UX and UI design – creating an all-round good experience for users.  

Getting feedback from users and acting on that feedback is still essential, which is why human designers are still so crucial to the process. As of yet, AI can’t really assess a design for value, efficiency and accessibility in the same way a real person could.  

Overall, generative AI has great potential to streamline elements of the design process to save time. However, designers will still need to oversee the work and adjust accordingly to ensure that the design is user-friendly and provides real value for people. 

For more information on how generative AI is changing the design process, check out the full episode.  

Excited about the possibilities of Generative AI in UX/UI design? Make sure to subscribe to the Built Right podcast for more insights and discussions like these. Share your thoughts and experiences with us. Let’s create, innovate, and elevate the world of design together! 

Experience the power of AI in driving software development with HatchWorks’ Generative-Driven Development™.

We know what AI can and can’t do. Our seasoned team members guide you through every aspect of the software development lifecycle and strategically introduce AI when and where it adds real value to your solution.

See how it can benefit your projects.

[00:00:00] Matt Paige: We got a spicy one for you, Built Right listeners. Today we’re breaking down the five ways generative AI will change the way you think about UX and UI design. And by the way, number two, I believe, will actually create a complete shift in the way we think about designing and building software. So make sure to stick around for that one.

[00:00:28] Matt Paige: But to help me do it today, I’m joined by HatchWorks’ own Andy Silvestri, who brings 20 years of experience in the digital design space, including graphic design, creative design, user experience, customer experience, product strategy, all the things. And he even ran his own experience strategy and design firm for 10 years prior to leading our product design practice here at HatchWorks.

[00:00:53] Matt Paige: Andy’s a returning guest, so check out episode two if you like this one with Andy, but welcome back to the show, Andy.

[00:01:00] Andy Silvestri: Hey, thanks Matt. It’s good to be back.

[00:01:03] Matt Paige: Yeah, good to have you. And I’m really excited about this topic and this format that we’re gonna get into today. So we at HatchWorks have been digging into everything generative AI as of late, as it relates to the world of UX and UI design specifically for this conversation.

[00:01:21] Matt Paige: From testing new tools, talking to folks in the industry. And what we’ve done is we’ve distilled all of these learnings down to the five key points, so you don’t have to. And this includes how it’s making things easier, better, faster, but also what we need to watch out for. Plus, things that may sound completely foreign

[00:01:39] Matt Paige: now, we believe, are gonna be standard practice in the future, so we’re trying to help you get ahead of that. Without further ado, Andy, let’s get into number one. So I’ll key it up for us and let you take it from there. But number one is taking this shift from an imperative to a declarative design approach.

[00:01:59] Matt Paige: Take us through that. What does this mean, and how is this going to evolve?

[00:02:04] Andy Silvestri: Yeah. So I think, as practitioners, right, we’re coming from this time of point-and-click, the imperative fashion of doing everything, right? And now we’re getting into this model, of course, where you can declare what you want from the tool, right?

[00:02:18] Andy Silvestri: I can say, gimme this, gimme that, and the tool will use its AI powers to give you a result, right? I think that focus on moving more into this declarative kind of approach to design is really an interesting one, because that idea of working alongside perhaps a design copilot of sorts, where you have this running narrative with the generative AI tool, really has the potential to be a pretty big game changer. And not only from the standpoint of, like, just a singular prompt and response, where I’m like, hey, give me a design that has this and this, and the thing gives it to you.

[00:02:52] Andy Silvestri: But maybe more so the ability to work through, like, a design brief, refining a narrative, tweaking things, adjusting things, that kind of stuff. So really, it’s this kind of dialogue you have with the machine, and, yeah, I see that as having a really big upside to streamlining the process.

[00:03:12] Andy Silvestri: Especially for smaller teams, where it’s maybe a design team of one. And, like, maybe you’re a startup, you have a smaller budget, right? So you’re really leveraging the tool as, again, almost a copilot or another designer on your team.

[00:03:29] Andy Silvestri: There are some tools out there that are embracing this kind of dialogue approach. A lot of this stuff’s in beta right now, so it’ll be interesting to see where this goes. We’ve seen it from, like, the imagery perspective of gimme a still image or gimme an illustration. But in the sense of, like, screen design, there are some big players who haven’t really weighed in, specifically Figma and, by extension, Adobe. So it’s gonna be really interesting to see what they bring to the table.

[00:04:01] Matt Paige: Yeah, it's really interesting. This one, as you start to think about it, blows my mind a bit, because there have been so many best practices established with the way we've done things to date.

[00:04:13] Matt Paige: There are heuristics, and some of that will still stay around. But it's based on the human interacting with a machine in a particular way. And these generative AI tools, this way of interacting with a machine via natural language, still have room for improvement. But I always look back to cell phones.

[00:04:35] Matt Paige: Back in the day, we were playing Snake, or with the internet back in the day, we were dialing up on AOL. Things have progressed so much, and you already see what's happening with generative AI, how fast it's progressing. This has the potential to really shift how we interact with technology, which potentially flips on its head, in a big way, how we've been doing things to date.

[00:04:58] Andy Silvestri: Yeah, a hundred percent. There's always that room for improvement, room for advancement, and I think we're seeing that right now, in this moment, with all these tools that are coming out. And really the hard part right now, I think, is staying on top of it.

[00:05:10] Andy Silvestri: All right. Yeah. It’s been an interesting ride for sure.

[00:05:15] Matt Paige: I think it'll be interesting too. You think of this concept of the innovator's dilemma. Take a Salesforce or a HubSpot: that's how their whole solution's built, and most solutions today are the same. How are they going to adapt to this new way of interacting with technology? Do they adapt and make the shift, potentially upsetting some existing customers, to try to stay ahead? Or is it gonna be what we've seen so many times, newer competitors coming into the space without the bloat? And that gets back to an interesting point you just made.

[00:05:48] Matt Paige: You may not need an army of people to do some of this stuff, so that changes the game a bit.

[00:05:55] Andy Silvestri: Yeah, an interesting subset of that example is what Adobe is currently doing with their AI tools, right? They're slow-rolling things into their existing product base, right?

[00:06:05] Andy Silvestri: So you're seeing a feature here and a feature there come into Photoshop or come into Illustrator, right? I think just today they released a color modification tool within Illustrator. So that's one approach: just tease it out, get a little bit of proof of concept, get a little bit of traction around something before throwing a huge new application right in front of people.

[00:06:28] Andy Silvestri: Kind of meet them where they are in their current workflows. Yeah, it's an interesting aspect of how all this stuff is coming together.

[00:06:35] Matt Paige: So that's number one: the shift from imperative, point-and-click to more of a declarative, chat-focused design, how that's gonna impact user experience, and something to stay on top of. On to number two.

[00:06:48] Matt Paige: And this is the one I called out earlier; I think this is one of the most interesting ones. It's the ability to get to proof of concept quicker. Some of this is obvious, but we've been playing around with some tools, and whether it's that exact tool, or that concept or idea that somebody else adopts, there's big potential here to really accelerate this process.

[00:07:08] Matt Paige: So take us through this one.

[00:07:10] Andy Silvestri: Yeah, this is great. And this is probably, at least in my opinion, one of the biggest upsides to using generative AI in design right now, from the standpoint of low fidelity, medium fidelity, high fidelity, all those things. I think the idea is: why not use these tools when the stakes are low, right?

[00:07:29] Andy Silvestri: If you're in an ideation phase or a concepting phase, grinding out multiple iterations of flows, wireframes, interface concepts, components, anything to support a proof of concept, it's not really gonna be a slog anymore. You want to use these tools to get directionally correct, and getting directionally correct will take a lot less time. And, in an interesting way, people are afraid of this idea that, oh, it's gonna replace designers completely, but I think it's actually going to open up the door for designers to become even more exploratory in those early ideation and concepting processes.

[00:08:06] Andy Silvestri: Because you're gonna have more time, 'cause you're not necessarily doing all the heavy lifting on the back end. So if, for example, I can say to a tool, hey, I need three concepts of this, this, and this, give it some requirements, work through that prompting, and then take what it gives me and refine it further as needed, that's a big step up from, okay, I gotta make three concepts and I gotta think through every single piece. I think this is gonna be a really big lift in terms of workflow.

[00:08:42] Andy Silvestri: And again, that earlier phase, I think, is when it's gonna be most profound in terms of just saving time.

[00:08:49] Matt Paige: Yeah. You think of how we interact with our clients today; that upfront piece gets shortened so much. You could almost think of this world in the future where you're in a workshop and you're actually getting an idea as you're working through the workshop. And then there's the debate of, okay, is the whiteboard still superior? Because if you start to get real-looking designs, the discussion immediately shifts to color and placement and not the core functional piece of things.

[00:09:23] Matt Paige: And I think that's

[00:09:24] Andy Silvestri: where the tactful use of those tools in those scenarios, I think, is what's gonna be paramount, right? Because you can think about it: yeah, we might be going through and doing a workshop, getting some stuff on the whiteboard with a client.

[00:09:37] Andy Silvestri: Then we come back and say, okay, give us a week to turn around some rough proofs of concept. That might become: let's do it right now with what we just ideated through and see what we can get. But of course, to your point, leveraging it at the fidelity that makes the most sense, so that you don't jump too far ahead, or get down a rabbit hole and get kind of distracted from the discovery task at hand, right? But yeah, that's a really good way to think about it: let's put the concept out there and create the artifact for it with a much lighter lift, and use that to further the conversation.

[00:10:11] Andy Silvestri: So that's what I mean by getting to the concept quicker and getting it out the door in an efficient manner.

[00:10:20] Matt Paige: Yeah. And you mentioned it too; it's two things, in my view, and I've heard some others express this too. The minimum bar is gonna be raised, or lowered, whichever way you look at it, in terms of it's gonna be so much easier to do stuff, right?

[00:10:39] Matt Paige: But I think, to your point, it elevates the true practitioners, the true designers, that skillset. It's like back to the Renaissance: it's the artists, it's those folks that start to become more empowered. And I think that's the future: those types of folks become even more important, critically important, as this new technology starts to take shape.

[00:11:05] Andy Silvestri: Yeah, for sure. A hundred percent.

[00:11:09] Matt Paige: So that's number two: getting to proof of concept quicker, multiple iterations of proof-of-concept testing. That whole idea of the agile mindset, it just accelerates things. But getting to number three, this is more of the cautionary tale.

[00:11:25] Matt Paige: So, exercise caution when leveraging generative AI tools and solutions. Take us through the more doomsday one on the list here.

[00:11:36] Andy Silvestri: Yeah. And I think this is a good one to think about, in that it highlights the need for curation, now more than ever.

[00:11:46] Andy Silvestri: And I mean that in the sense that it's not just about proofing the quality of what comes back from these tools: proofing for bias, proofing for inaccuracies. Copyright infringement has been a lot of the conversation around using some of these tools, right?

[00:12:00] Andy Silvestri: There's still a very real need, I think, for a designer's eye, going back to what you were just saying about the Renaissance, right? There's still a need for this skillset. And we will still need to employ, I think, a very good bit of common sense when we're using these tools, right?

[00:12:17] Andy Silvestri: Right now, whenever we're looking through things in the space that are generated via AI, there's a bit of a tell, a look and feel that I think this imagery is taking on, right? Maybe there are odd angles, the shadows are off, the depth of field is not quite right.

[00:12:32] Andy Silvestri: Proportions are weird. The thing I think about is when Photoshop became more mainstream and photoshopped turned into a verb. You could tell that something had been photoshopped, right? And I think people that have been doing this for a while are seeing that kind of tell of, oh, AI is doing this and something's not quite right.

[00:12:54] Andy Silvestri: That's one way to think about it. I guess the scary thing, just in what we've seen in the last couple months of this stuff becoming mainstream, is how good will it get, right? You see the increase in quality in just the few months since ChatGPT dropped on the market, right?

[00:13:11] Andy Silvestri: And there's this increased focus on more sophisticated prompting; prompt engineering is becoming a real thing, and people are learning more about how to use these tools and interact with them. So very soon, that tell may be indiscernible from reality. So that's the one thing: okay, let's use curation.

[00:13:30] Andy Silvestri: Let's curate this. Let's make sure that we're using a bit of common sense. And maybe that goes into how we actually disseminate the work in an honest fashion, right? Somehow indicating that this was made using AI tools to help with the design, that kind of thing.

[00:13:48] Andy Silvestri: So it's very interesting times. We're still in that gray area, right? But I think that with good practice, we'll get there.

[00:13:57] Matt Paige: Yeah. And there are gonna be instances; we saw one recently. This is more ChatGPT-focused, but there was the lawyer that was creating a legal brief off of ChatGPT, and it completely hallucinated, just completely false information.

[00:14:12] Matt Paige: And what does that begin to look like on more of the design side? I think that'll be interesting to see. To your point, does it look like it's been produced through generative AI? Is there a negative connotation there? But you mentioned it earlier: it's this concept of co-pilot that's the important piece.

[00:14:32] Matt Paige: It's your co-pilot. It's not set it and forget it. What was that, the George Foreman Grill back in the day, or one of those?

[00:14:40] Andy Silvestri: Yeah, something like that. But you said it, it's interesting, right? Just as Photoshop coming to the masses allowed a lot more people to manipulate their imagery and, quite frankly, do better work with the types of photos they were taking, this is a similar kind of effect. It's not all bad, right? Just because something has been AIed doesn't necessarily mean that it's going to be a bad thing. Again, it's about access.

[00:15:08] Andy Silvestri: Raising that baseline, getting more people into design, getting familiar with the tools, I think that's a silver lining to this kind of cautionary, doom-and-gloom outlook of AI replacing jobs and things like that. Again, very interesting, taking the good with the bad, but we'll see where it goes.

[00:15:25] Matt Paige: So this generative AI kind of democratizing AI, and potentially design-related things: good or bad, in your opinion?

[00:15:36] Andy Silvestri: I think it's a good thing, right? I think the more we as a society can get better with the tools, the better. Of course, there's always a negative side to new technology, and there are people, bad actors, that are gonna use it for negative things. All that stuff's gonna happen. But generally speaking, the more empowered people can be with these tools, the better, and at the end of the day, if it's delivering value to folks and they're seeing a real benefit to using this technology, then I'm all for it. But again, there's two ways to look at it, of course.

[00:16:12] Matt Paige: Yeah, you hit on the core thing. It's all about value at the end of the day, and we talk a lot about that at HatchWorks: delivering and owning the outcome. But it goes back to the dot-com era, when you could slap .com on anything.

[00:16:27] Matt Paige: The concept of value just completely got ignored. And there's some of that going on today, where it's the hype train: there's a new generative AI tool out every day, multiple ones. But do they provide value? Are they defensible? Do they have some kind of differentiated moat? When you can spin something up over a weekend, probably not.

[00:16:47] Matt Paige: So I think that getting back to first principles is still gonna hold true, even with this.

[00:16:55] Andy Silvestri: Right on.

[00:16:57] Matt Paige: Alright, so that is number three, the cautionary tale with generative AI tools. Number four, I like this one: the impact to the creation and utilization of design systems.

[00:17:11] Matt Paige: For me, this gets to easier standardization, the mundane tasks: get 'em outta here. But take us through this one.

[00:17:19] Andy Silvestri: Yeah, sure. And this is a click deeper from what we were talking about earlier around developing whole concepts via prompting. This is more, I think, when you get into that stage of finalizing a design: there's a lot of work that now has to be done to prepare that design to be taken further into the delivery lifecycle, right? So leveraging the automation aspects of what these tools have is, I think, potentially a big speed-up in terms of a designer's workflow, right?

[00:17:51] Andy Silvestri: If you could imagine simply making a design and then prompting a tool to say, generate the components for color, typography, and the focus states from what I just designed, and boom, there it is. Or, take the design system, decrease all the color gradations we have by 10%, and boom, it's all done.

[00:18:10] Andy Silvestri: You don't have to go in and do all that individual legwork. So while it sounds like little stuff, all of that, from the standpoint of a design team's effort, can really add up and shave a lot of time. So I think this is a really powerful aspect of that latter stage of finalizing a design.
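As a rough sketch of the kind of bulk edit Andy describes, here's what "decrease all the color gradations by 10%" could look like as a script over design tokens. The token shape is invented for the example; real systems (Figma variables, Style Dictionary, and so on) each have their own formats.

```typescript
// A color token: a named hex value in the design system's palette.
type ColorToken = { name: string; hex: string };

// Darken a hex color by scaling each RGB channel down (0.1 = 10%).
function darken(hex: string, amount: number): string {
  const n = parseInt(hex.slice(1), 16);
  const scale = (c: number) => Math.max(0, Math.round(c * (1 - amount)));
  const r = scale((n >> 16) & 0xff);
  const g = scale((n >> 8) & 0xff);
  const b = scale(n & 0xff);
  return "#" + ((r << 16) | (g << 8) | b).toString(16).padStart(6, "0");
}

// Apply the same adjustment to every token in one pass.
function darkenAllTokens(tokens: ColorToken[], amount = 0.1): ColorToken[] {
  return tokens.map((t) => ({ ...t, hex: darken(t.hex, amount) }));
}

const palette: ColorToken[] = [
  { name: "primary-500", hex: "#3366ff" },
  { name: "neutral-100", hex: "#f4f4f5" },
];
console.log(darkenAllTokens(palette)); // whole palette shifts 10% darker at once
```

Whether a designer writes this script or a generative tool produces the equivalent from the prompt, the win is the same: one pass over the token set instead of dozens of manual edits.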

[00:18:29] Andy Silvestri: Not just in pulling components apart for the sake of delivery, but also in refinement. It's very similar, I think, to the time when the concept of reusable symbols and components was introduced in interface design, in programs like Sketch, right? It really changed the way we as designers thought about our workflow.

[00:18:49] Andy Silvestri: It's, oh, I can create this one component or this one symbol that I can use an infinite number of times within my design system. Great. It was a huge efficiency gain for a lot of people's day-to-day. And I think we're in that moment again with these tools coming to light: how does it come into a workflow?

[00:19:08] Andy Silvestri: How does it shave off work or make things that much easier? I think that's where the real value and the real adoption are gonna take place. So, whoever, in whatever fashion, comes out with the killer app when it comes to design; again, this is why I was talking about Figma earlier.

[00:19:23] Andy Silvestri: 'Cause a lot of the industry is waiting to see what their move is when it comes to generative AI and baking it into their whole software suite. These are interesting times, and it's the stuff that will impact a hundred percent of how we approach our workflows as designers here at HatchWorks too.

[00:19:45] Andy Silvestri: Really interesting stuff. Really looking forward to learning more and kicking the tires on even more tools.

[00:19:52] Matt Paige: Yeah. And everybody that's listening, the whole summation there is speed to value, right? It's helping increase speed to value, efficiency, all of that.

[00:20:01] Matt Paige: But for the listeners that may not be acquainted with what a design system even is: what is a design system? Give us some context for what that is, why you have design systems, the value of them. I think it's a piece that's overlooked sometimes if you're building product and you're not as acquainted with user experience and UI design, things like that.

[00:20:27] Andy Silvestri: Yeah. So at its core, a design system is really the foundation of what makes up the visualization, or the interactive elements, of your product, your solution, whatever it is that you're designing, right? You can think about the atomic method of breaking things down: these are our very small elements, our atoms, that build up into molecules, that build up into organisms, et cetera. Having a system of all those components within your design is really, again, the foundational part. And as a designer, or someone who owns that design system for a specific product or a brand, I can shepherd that and work through: okay, do we need to make updates to colors?

[00:21:13] Andy Silvestri: Do we make updates to our typography suite? All of that kind of thing. Those are the nitty-gritty pieces that all funnel back up to the final interaction, the final interface. So again, that's why there are a lot of components to design systems that need to be managed and overseen.

[00:21:30] Andy Silvestri: And again, with these AI tools coming to light, there's a lot of opportunity for a lot more efficiency to be interjected into that whole workflow.
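For readers who want the atomic idea in concrete terms, here's one possible TypeScript model of a small token set and a component spec built from it. The names and shapes are invented for illustration, not drawn from any particular design tool.

```typescript
// Atoms: the smallest design decisions, captured as tokens.
const tokens = {
  color: { primary: "#3366ff", surface: "#ffffff", text: "#1a1a1a" },
  type: { body: { family: "Inter", size: 16 }, heading: { family: "Inter", size: 28 } },
  space: { sm: 8, md: 16, lg: 24 },
};

// Molecules: atoms combined into a reusable component spec.
interface ButtonSpec {
  label: string;
  background: string;
  textColor: string;
  paddingX: number;
  focusOutline: string;
}

function makePrimaryButton(label: string): ButtonSpec {
  return {
    label,
    background: tokens.color.primary,
    textColor: tokens.color.surface,
    paddingX: tokens.space.md,
    // Focus states draw on the same token set, so one upstream change
    // ripples through every component built from it.
    focusOutline: tokens.color.primary,
  };
}

// Organisms and full screens compose these specs further up the chain.
const cta = makePrimaryButton("Get started");
```

The practical payoff Andy describes falls out of this structure: change tokens.color.primary once, and every component spec derived from it picks up the update.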

[00:21:40] Matt Paige: Yeah, and I think to your point, what's gonna be interesting, at least what I'm interested in, is: is it gonna be your Figmas, your Adobes, obviously the large players?

[00:21:53] Matt Paige: Are they going to adapt to these things and win in the market, or are you gonna have smaller players? Take a Figma: they have a moat established, people are bought in, and they use that solution. Are new players gonna be able to come in by advancing and kinda leapfrogging here, or by plugging into those types of solutions?

[00:22:15] Matt Paige: Do you think it's gonna be the big players out there that ultimately win, even if it takes a little more time to actually build this into their solutions, or do you think it's more of a newcomer in the space?

[00:22:27] Andy Silvestri: Yeah. The established players definitely have a leg up, right? Especially Figma, who now has Adobe in their corner, right?

[00:22:35] Andy Silvestri: So they have the resources to push this stuff through more quickly; they've got the money. But that's not to say that a new player couldn't come out of left field, really nail all of the core value prop that a Figma does, add something related to generative AI on top of it, and beat them to the punch.

[00:22:52] Andy Silvestri: Yeah, it's completely feasible. And there are groups out there; I've seen buzzy ones, I've seen Uizard, all these kinds of things that are their own ecosystems. But maybe this is the time that a new player comes out and takes the crown.

[00:23:09] Andy Silvestri: I don’t know.

[00:23:11] Matt Paige: Yeah. I guess, looking back, that's how far Figma came; they're a little bit older, but they definitely came in and just stole the show. Alright, so that is number four, the impact on creating and utilizing design systems. It's gonna be big there: getting rid of all the mundane tasks so designers can focus on more important stuff.

[00:23:34] Matt Paige: The last one. I mentioned the POC one is the most impactful, but this one's critical; I love this one. It's the importance of keeping your users actively engaged when they're in co-pilot mode, so they don't crash the plane. What does this mean? And this is thinking about the users of generative AI, or of your products and services that leverage generative AI.

[00:23:59] Matt Paige: Take us through this one.

[00:24:01] Andy Silvestri: Yeah. And I love this notion. There's this idea that, oh, generative AI is just gonna allow us to prompt, do whatever, and then go about our day and do the really important things. But I think it's really more an opportunity for us as practitioners to be diligent.

[00:24:20] Andy Silvestri: And quite frankly, I think this is why we'll always have jobs as designers: because even with the assistance of generative AI, and all of these potential big wins we've talked about in terms of automation, a human is still responsible for creating a good experience for other humans, right?

[00:24:40] Andy Silvestri: You have to factor in what it is that other people want in a service or product, right? And make sure that you're delivering on that. So no matter how quickly or efficiently you get there, you still have to test concepts. You still have to gather feedback from real people.

[00:24:56] Andy Silvestri: You have to synthesize that feedback. You have to act upon the feedback, put it into a roadmap, work on it, right? I think one way to think about it is that just 'cause we have all these tools that are taking the place of other humans, you're not going to take the human out of human-centered design, right?

[00:25:14] Andy Silvestri: Yeah. So we're still gonna need to be diligent and oversee and shepherd this work, as I was mentioning earlier.

[00:25:22] Matt Paige: Yeah. As long as the solutions we're building are for humans, I think that's 100% true. I do wonder, though, if at some point you'll have these AI-type agents, and you're building experiences and solutions for them; what does that look like?

[00:25:37] Matt Paige: And maybe they're just completely headless; it's just the API layer making everything easy and accessible. That'll be an interesting evolution if things change in that way.

[00:25:48] Andy Silvestri: Yeah, with the speed we're going; again, going back to what we were talking about earlier with how quickly it's evolving, how much things are adapting to people prompting better, doing better things.

[00:26:02] Andy Silvestri: What is it gonna be in three months, six months, another year from now? We're looking back over the course of only a handful of months right now, and so much stuff has come to the market; so many ideas are out there. The opportunity to really embrace this area and lean into it as practitioners is really fascinating.

[00:26:23] Andy Silvestri: And so again, may the best tool win, right? That's the way I see it. If it's gonna be improving our workflows as designers, if it's going to be increasing value to end users, if it's going to be making really any aspect of the design process just a little bit easier, even if it's that little bit of an efficiency gain, I think it's worth it, and it's something that we can continue to work through and learn from.

[00:26:48] Andy Silvestri: So yeah, really exciting times, and a lot more to look at. We have a list, as you mentioned, Matt, a list of tools that we're looking through, and it seems like it gets longer and longer by the day.

[00:27:00] Matt Paige: Yeah, we've got a Notion board of about a hundred tools, I feel like, that we're going in and testing.

[00:27:05] Matt Paige: And one thing we're doing too, slightly off topic: we're creating our guidelines for using these tools, which is important for organizations to think through. If we're just testing a tool internally for an internal project or a solution we're building, great, that's one level.

[00:27:22] Matt Paige: If we're doing something on a client project that is potentially exposing client data or something like that, that's another level of consideration. So that's one thing we're defining at HatchWorks, and I encourage every other organization to do the same. It's about creating a standard set of practices and agreed-upon rules for how you engage with these tools.

[00:27:49] Andy Silvestri: Right,

[00:27:49] Andy Silvestri: Yeah, a hundred percent. I think it’s super important getting, back to that kind of curation beyond just the quality of what you’re getting back from these tools. It’s about. Is it ethical? Is it accurate? Is is it not infringing on copyrights? All of that kind of stuff.

[00:28:05] Andy Silvestri: And then, yeah, is it not exposing you to risk? There's a very real factor there. There's just all this gray area that everybody's gonna have to take their own path through and decide what's right for their organization. But with a lot of those kinds of considerations, the more you can define them, talk through them, and make sure you have a plan, or at least an approach formulated, the better off you'll be, because we're not gonna see the end of new tools coming out. And, yeah, new, weird deepfakes and stuff like that, and people, again, being bad actors and leveraging the tools in not-so-nice ways.

[00:28:40] Matt Paige: Yeah. Another offshoot too: we're playing around with Firefly, Adobe's version of Midjourney, and you can actually now prompt in there the type of camera lens you're using and things like that.

[00:28:52] Matt Paige: It's just taking it to a whole nother level, for better or for worse. But to your point earlier, what jobs are gonna exist that don't even exist today? You think back to pre-internet: all kinds of jobs exist now that never existed previously. Do you think we're gonna have that kind of stepwise change from generative AI?

[00:29:14] Matt Paige: Is it that level of impact, or do you think it's somewhere less, somewhere more? Where do you think it lands? Play fortune teller for us.

[00:29:21] Andy Silvestri: Yeah. I think within a year or two you're gonna see prompt engineering on everybody's resume, right? I think that's the next step. Maybe it's not wholly different job titles or roles, but more skillset building, right?

[00:29:38] Andy Silvestri: So I think in the near-term years, the more that becomes mainstream in terms of understanding how to interact with these tools, as we were talking through, the more valuable that's gonna be: to an individual, if I have that skill and I've invested in it, and also to an organization that's leaning into these tools.

[00:29:56] Andy Silvestri: Understanding how to talk to the robots is gonna be a really big deal. So yeah, that's all I can say right now. Though we might see, five years from now, that there is a prompt designer, I don't know, something like that, or an AI designer: somebody who's got their little sidekick that's the AI. I don't know.

[00:30:15] Matt Paige: And I guarantee you'll see required skills: 10 years of experience with a tool that's been out there one year. That's guaranteed to be out there. All right, so just to recap the five. Number one, the shift from imperative, point-and-click to a declarative, chat-focused design.

[00:30:31] Matt Paige: It's changing how we think about user experience. Number two, the ability to get to proof of concept quicker; again, speed to value, speed to testing, agile taken to the next level. Number three, exercising caution when leveraging generative AI tools and solutions. Number four, the impact to creation and utilization of design systems.

[00:30:52] Matt Paige: Get the mundane stuff outta there. And then number five, don't crash the plane: be considerate of users actively engaging with the solution. You see this with Tesla; they require you to, I think, touch the steering wheel every so often, just to remind you that you are engaged with the thing.

[00:31:10] Matt Paige: But I really enjoyed the conversation today, Andy. And for those that are looking to go a little bit deeper on generative AI, we've got the blog out there on generative AI, we're doing one specific to software development, and we'll have another one coming out specific to UX and design. We'll link to some of those in the show notes.

[00:31:27] Matt Paige: And then we also have, I believe it's episode eight, our Built Right live podcast we did with Jason Slacker. Go check that one out; this guy is insanely smart. He leads an AI empowerment group right now and used to be leading AI products at Ance Health. Tons of experience with a lot of crazy stuff.

[00:31:49] Matt Paige: And there we get into the idea of how you validate and identify winning generative AI use cases for your business. So don't miss that. But thanks for joining us today, Andy.

[00:32:02] Andy Silvestri: My pleasure, Matt. Thanks as always. And yeah, let's talk more about it, because I think in another six months we're gonna have a whole nother little layer of topics to talk about, so it'll

[00:32:11] Matt Paige: keep evolving.

[00:32:12] Matt Paige: We'll look back at this and we can test the hypotheses we've had. Right on. All right. Thanks, Andy. Thank you.

The post 5 Ways Generative AI Will Change the Way You Think About UX and UI Design appeared first on HatchWorks.

]]>