Generative AI Archives | HatchWorks

Generative AI Statistics: The 2024 Landscape – Emerging Trends, and Developer Insights
https://hatchworks.com/blog/software-development/generative-ai-statistics/ (published 19 Jan 2024)


With 2023 dubbed the year of Generative AI, what advancements, trends, and use cases might we expect in 2024 and beyond?

To find out we need to look at recent research and AI stats.

In this article, we’re analyzing the statistics and trends informing the adoption and use of AI.

Infographic on "Generative AI Statistics: 2024 Trends and Developer Insights" with icons representing code, user login, and analytics on devices.

Throughout, we’ll comment on what those AI statistics mean and add insights from developers on the HatchWorks team who are part of our Generative-Driven Development™, a method that has led to a 30-50% productivity increase for our clients.

What you’ll be left with is a clear overview of the state of Generative AI and its future.

The Current State of the Global AI Market

🗝Key takeaway: AI is a growing industry, with projections showing an annual growth rate of 37.3% between now and 2030. It’s largely fueled by advancements and the adoption of Generative AI.

AI tech and its potential has mesmerized the world for decades.

Film and TV have long projected a world where artificial intelligence is a facet of everyday life, to both sinister ends (I, Robot) and peaceful coexistence (The Jetsons).

We’re closer than ever to finding out just how well humans and artificial intelligence can live side by side. And as a whole, we’re investing big in its development.

In 2022, Precedence Research found the global artificial intelligence market was valued at USD 454.12 billion. It showed North America having the biggest piece of the pie, with its AI market valuation hitting USD 167.30 billion.

In the image below you can see how much money is being invested into AI by geographic area.

Bar chart of AI private investment in 2022 by country, with the US and China leading.

And the AI market is only set to grow. In fact, McKinsey projects an economic impact of $6.1-7.9T annually.

Behind much of this growth, high valuation, and investment is the development and increased use of Generative AI.

Gartner reports that in the last 10 months, half of the 1,400+ organizations they surveyed have increased investment in Generative AI.

They’ve also found that 44% of organizations are piloting generative AI and 10% have put it into production, compared to 15% and 4%, respectively, in March and April of 2023.

The rapid adoption of generative AI demonstrates its potential to revolutionize how we work, the skills we need, and the type of work we will do.

What’s driving our need for AI? It’s a mix of:

  • Increased Demand in Various Sectors: AI solutions are increasingly sought after in healthcare, finance, and retail. Check out our guide on use cases across various industries.
  • Advancements in Generative AI: Innovations in neural networks are propelling AI capabilities forward.
  • Big Data Availability: The rise in big data availability aids in training more sophisticated AI systems.
  • Complex Data Analysis: AI’s ability to analyze complex datasets is invaluable in numerous applications.
  • Digital Transformation and Remote Work: The shift towards remote work and digital operations has accelerated the adoption of AI technologies in business.

What Tools Are We Using? Core AI Technologies and Generative AI Systems

🗝 Key takeaway: With systems like ChatGPT, AlphaCode, and DALL-E 2 leveraging vast datasets, industries are witnessing a shift towards more intuitive, creative, and efficient processes.

Generative AI relies on core technologies like deep learning and neural networks.

These technologies empower AI systems to learn from vast datasets and generate new, original content. This capability extends across domains, from language processing to visual art creation, and code development. It’s changing how tasks are approached and executed on a daily basis.

Generative AI: A Brief Definition 📖

Generative AI refers to artificial intelligence systems that can create new content or data, which they were not explicitly programmed to produce.

These systems use advanced machine learning techniques, such as deep learning, neural networks, and transformer technology to analyze and learn from large datasets, and then generate original outputs.

This can include tasks like writing text, composing music, creating images or videos, and even generating new ideas or solutions.

Among the most notable tools leveraging generative AI is OpenAI’s ChatGPT, known for its ability to engage in human-like conversations and provide informative responses. It exemplifies the advanced natural language processing capabilities of these systems.

Here’s a list and description of other core Generative AI tools people across industries are adopting:

  • AlphaCode: An advanced tool designed for programming challenges, utilizing AI to write and optimize code.
  • Midjourney: Specializes in generating detailed and imaginative visual narratives based on text prompts.
  • Copilot: Developed by GitHub, this AI system transforms natural language prompts into coding suggestions in various programming languages. It’s complemented by similar systems like OpenAI’s Codex and Salesforce’s CodeGen.
  • Katalon: A comprehensive tool for automated testing, integrating AI to enhance the efficiency and accuracy of software testing processes.
  • Amazon Bedrock: A robust AI platform by Amazon, designed to provide deep insights and analytics, supporting various AI applications and data processing tasks.
  • CodeGPT: A specialized AI tool for coding assistance, offering features like code completion and debugging suggestions based on Generative AI models.
  • Hugging Face: Known for its vast repository of pre-trained models, Hugging Face is a platform that facilitates AI development, especially in natural language processing.
  • Llama by Meta: An AI system developed by Meta, aimed at pushing the boundaries in various aspects of AI, including language understanding and generative tasks.
  • Make-A-Video: A revolutionary system that enables the creation of videos from concise text descriptions, opening new possibilities in media production.
  • AI Query: A tool designed for streamlining data analysis and simplifying complex data interactions using AI.
  • Bard: Focuses on content generation, leveraging AI to assist in writing and creative tasks.
  • DALL-E 2: OpenAI’s image generation AI, known for creating detailed and artistic images from textual descriptions.
  • Copy.ai: Aims at automating content creation, particularly in marketing and advertising, using AI to generate high-quality written content.
  • Murf.ai: Specializes in voice synthesis, enabling the creation of realistic and customizable AI-generated voices for various applications.

This list is truly the tip of the iceberg. Every day new tools are launched into the AI ecosystem.

Time will tell which of them become indispensable to the modern work landscape and which may fall into the deep abyss of almost forgotten memory—anyone remember Ask Jeeves? Or AIM? We do…just barely.

Developer Insights on Generative AI: How Is It Impacting Software Development?

🗝 Key takeaway: Generative AI is already a fixture in the work processes of forward-thinking software developers, with data on productivity proving it’s a worthwhile addition to the industry.

A recent McKinsey report claims Software Engineering will be one of the functions most impacted by AI.

The data and lived experiences of developers back that claim up.

ThoughtWorks reports software developers can experience 10-30% productivity gains when using Generative AI.

GitHub has also run its own studies on developers’ use of Copilot and seen positive results on productivity and speed of task completion.

Across two studies (1 and 2) they’ve found developers who use Copilot are:

  • 55% faster in general
  • 88% more productive
  • 96% faster with repetitive tasks
  • 85% more confident in code quality
  • 15% faster at code reviews

At HatchWorks, our integration of AI has revolutionized our Generative-Driven Development™ process, resulting in a 30-50% productivity boost for our clients.

By utilizing these tools, our engineers have streamlined coding and minimized errors, fundamentally transforming our project delivery methods.

These advancements highlight the significant role of AI in enhancing efficiency and spurring innovation in our field.

To delve deeper into this transformative journey, HatchWorks’ engineers have shared with us their perception of Generative AI tools and how they’re using them to enhance their work.

Key Statistics and Trends in Generative AI

🗝 Key takeaway: The world is divided in its trust of AI, but businesses are using it to fill shortages and increase productivity in the workplace.

We’ve covered the state of AI, highlighted some core tools and technologies, and talked specifically about how Generative AI is impacting Software Development.

Now we’re covering other key artificial intelligence statistics and trends that are defining the opinions, use, and impact of Generative AI.

Trend: Programming/Software Development is Seeing the Most Impact on Productivity

Stat: AI improves employee productivity by up to 66%.

Across 3 case studies by the Nielsen Norman Group:

  • Support agents who used AI could handle 13.8% more customer inquiries per hour.
  • Business professionals who used AI could write 59% more business documents per hour.
  • Programmers who used AI could code 126% more projects per week.

What it means: It’s not just one industry or function that stands to benefit from AI. It’s all of them.

AI tools likely assist in faster query resolution, provide automated responses for common questions, and offer real-time assistance to agents, thus reducing response times and increasing the number of inquiries handled.

They also can assist in tasks like data analysis, content generation, and automated formatting, enabling professionals to produce higher volumes of quality documents in less time.

In the case of programming, this leap in productivity could be attributed to AI’s ability to automate routine coding tasks, suggest code improvements, and provide debugging assistance, allowing programmers to focus on more complex and creative aspects of coding.

Trend: Adoption of Generative AI is Explosive

Stat: ChatGPT reached 100 million monthly active users within 2 months of launch, making it the fastest-growing consumer application in history.

What it means: Word of mouth marketing and an impressive display of the capabilities of Generative AI likely fueled such fast and widespread adoption.

It suggests we’re hungry for tools that optimize our work while reducing time and money spent elsewhere. It wasn’t a case of if we’d be adopting AI but rather a case of when and for what.

Even Bill Gates has been impressed by the capabilities of Generative AI. He recently wrote a piece titled The Age of AI Has Begun. In it he claims to have seen only two truly revolutionary advancements in his lifetime: one being the graphical user interface, the other being ChatGPT.

He even wrote upon witnessing the capabilities of ChatGPT, ‘I knew I had just seen the most important advance in technology since the graphical user interface.’

So not only is the adoption of generative AI explosive in its numbers, it’s explosive in what it can do.

Trend: The East is Generally More Accepting of AI as a Benefit

Stat: In a 2022 IPSOS survey, 78% of Chinese respondents agreed with the statement that products and services using AI have more benefits than drawbacks.

Those from Saudi Arabia (76%) and India (71%) also felt the most positive about AI products. Only 35% of surveyed Americans agreed that products and services using AI had more benefits than drawbacks.

What it means: Notably, the US exhibits more skepticism towards Generative AI than other leading nations.

An earlier stat showed the US leads in private AI investment, followed by China. It’s interesting that the countries that trust AI the most and the least are also the ones investing most heavily in it.

What comes of this could be reminiscent of the US’s space race with the former USSR. The biggest difference is that Generative AI is accessible to the world’s population in a way space technology never was (or likely will be).

And it prompts questions about whether AI technology is more or less dangerous in the hands of everyday people compared to governments. And whether American skepticism of the AI space is rooted in the potential for government overreach, foreign interference, job security, or how autonomous AI thought can become.

Trend: Trust in AI is Divided Among Those with Geographic and Demographic Differences

Stat: Another survey shows that 3 out of 5 people (61%) are wary about trusting AI systems, reporting either ambivalence or an unwillingness to trust. The researchers looked at geographic location as well as generational divides; this time India came out on top and Finland at the bottom.

When we break it down by generation and education, we see that younger people are more trusting of AI, as are the more highly educated. Those in manager roles are also more trusting.

Bar chart comparing trust and acceptance of AI by age group and education level.

What it means: Younger people are typically more accepting of advancement in technology than their older counterparts. This stat is thus unsurprising. It’s also unsurprising that managers see the value of AI given their job is to make their teams and departments more efficient and productive. AI is a proven way of doing so.

Trend: Generative AI is Being Used to Fix Labor Shortages

Stat: 25% of surveyed companies are turning to AI adoption to address labor shortage issues, according to a 2022 IBM report.

What it means: The fact that companies are turning to AI in response to labor shortages suggests that AI is increasingly seen as capable of filling gaps left by human workers. This could be in areas like data processing, customer service (through chatbots), and even more complex tasks that require learning and adaptation.

To learn more, watch or listen to Episode 11 of the Built Right Podcast, How Generative AI Will Impact the Developer Shortage.

Trend: Businesses Believe in Generative AI’s Ability to Boost Productivity

Stat: A significant 64% of businesses believe that artificial intelligence will help increase their overall productivity, as revealed in a Forbes Advisor survey.

What it means: The belief in AI’s role in increasing productivity suggests businesses see AI as a tool for driving growth. This may involve automating routine tasks, optimizing workflows, or providing insights that lead to more informed decision-making.

This statistic also reflects a response to the rapidly changing market demands and the need for businesses to remain competitive. AI adoption can be a key differentiator in responding quickly to market changes and customer needs.

That said, we should watch that human contribution and value aren’t overlooked to the detriment of the company. Sometimes our humanity is our best differentiator, and businesses should be wary of handing over too much, too quickly, to our AI sidekicks.

The Impact of Generative AI on Employment and Skill Development

🗝 Key takeaway: AI isn’t replacing our need for human intelligence; it’s freeing human intelligence to do other work, which in turn demands that we upskill in the use of AI.

The emergence and growth of generative AI are shaping job markets and skill requirements and will have significant implications for employment and workforce development over the coming years.

Job Market Dynamics:

Increase in Generative AI-Related Job Postings: A notable trend is the increase in Generative AI-related job postings. LinkedIn reports that job postings on the platform mentioning GPT or ChatGPT have increased 21x since November 2022, when OpenAI first released its AI chatbot into the world.

Job Creation vs. Displacement: A McKinsey report forecasts that AI advancements could affect around 15% of the global workforce between 2016 and 2030. This statistic encompasses both job displacement due to automation and the creation of new jobs requiring AI expertise.

Skill Development and Educational Trends:

Evolving Skill Requirements: With AI’s growing integration across industries, the skill requirements for many jobs are evolving. There’s an increasing need for AI literacy and the ability to work alongside AI systems, as evidenced by the earlier stat showing a rise in AI-related job postings.

Educational Response: In response, educational institutions are adjusting curricula and offering specialized training in AI and related fields. They’re also finding ways to introduce AI as a tool that teachers and students use. This shift aims to prepare the upcoming workforce for a future where AI plays a central role in many professions.

Ethical Considerations and Regulatory Landscape

🗝 Key takeaway: The recent advancements in AI have made us all question its use and regulation. Governments are finding ways to control it while encouraging its use to advance the world.

The use of AI raises a range of ethical considerations, including concerns about its accuracy, the extent of its capabilities, potential misuse for nefarious purposes, and environmental impacts. And with ethical considerations come questions over how we’ll regulate its use.

Let’s look at how public opinion and emerging research highlight the complexities and challenges in this rapidly evolving field.

Incidents and Controversies:

The number of AI-related incidents and controversies has surged, increasing 26x since 2012.

Additionally, the number of accepted submissions to FAccT, a leading AI ethics conference, has more than doubled since 2021 and increased by a factor of 10 since 2018.

Notable incidents in 2022 included a deep fake video of Ukrainian President Volodymyr Zelenskyy and the use of call-monitoring technology in U.S. prisons. This trend highlights both the expanding use of AI and the awareness of its potential for misuse.

Interestingly, it’s frequent users of tools like ChatGPT who lose their skepticism about its accuracy. Ben Evans gave a talk on Generative AI and showed the following slide:

Chart on misconceptions about AI accuracy based on user awareness and experience.

The data from Deloitte shows that use correlates to trust.

Challenges in Reliability and Bias:

Generative AI systems are prone to errors, such as producing incoherent or untrue responses, which raises concerns about their reliability in critical applications.

Issues like gender bias in text-to-image generators and the manipulation of chatbots like ChatGPT for harmful purposes underscore the need for cautious and responsible AI development.

Environmental Impact:

AI’s environmental impact is a growing concern. For instance, the training run of the BLOOM AI model emitted 25 times more carbon than a single air traveler on a one-way trip from New York to San Francisco.

However, AI also offers environmental solutions, such as new reinforcement learning models like BCOOLER, which optimize energy usage.

Public Expectation for Regulation:

A substantial 71% of people expect AI to be regulated.

This sentiment is widespread, with the majority in almost every country, except India, viewing regulation as necessary. This reflects growing concerns about the impact and potential misuse of AI technologies.

In fact, President Biden has already “signed an ambitious executive order on artificial intelligence that seeks to balance the needs of cutting-edge technology companies with national security and consumer rights, creating an early set of guardrails that could be fortified by legislation and global agreements.”

Source: AP News

Looking Forward: Where Is Generative AI Going Next?

Despite the ethical and regulatory considerations outlined earlier, the future of Generative AI appears promising from a growth perspective:

  • Goldman Sachs predicts Generative AI will raise global GDP by 7% ($7T).
  • McKinsey projects an economic impact of $6.1 – $7.9T annually.
  • Precedence Research believes the AI market size will hit around USD 2,575.16 billion by 2032.

Bar graph of AI market growth projection from 2022 to 2032 in billions USD.

At HatchWorks we’re most focused on the future of AI as it relates to software development.

We expect the use of AI will only advance over time with further improvements to developer productivity, new use cases for how developers use AI to assist software development, and an evolution in the skills and capabilities businesses hire for (internally and through outsourcing).

And we expect that because we’ve already witnessed it firsthand among our own developers.

Further reading: Generative AI Use Case Trends Across Industries: A Strategic Report

We’ll continue to optimize our approach and inclusion of these AI tools in our processes and equip our Nearshore developers with the education and resources they need to be efficient with them.

If you want to learn more about how our Generative-Driven Development™ services have led to a 30-50% productivity increase for our clients, get in touch here.

Built Right, Delivered Fast

Start your project in as little as two weeks and cut your software development costs in half.

2024’s Comprehensive Guide to Generative AI: Techniques, Tools & Trends
https://hatchworks.com/blog/software-development/generative-ai/ (published 19 Dec 2023)


Major tech companies like Microsoft, Google, Coca-Cola, and Spotify are championing AI, integrating it into various aspects of their businesses, from content generation to product innovation.

This groundbreaking technology is reshaping traditional workflows, enabling unprecedented levels of innovation and efficiency across a diverse range of sectors.

In this guide, we’ll introduce you to the burgeoning world of generative AI. We’ll explore its capabilities, dive into its many applications and use cases, and share tips on making it a seamless part of your projects. Plus, we’ll tackle the ethical and security challenges that come with this groundbreaking technology and provide insights on responsible AI deployment.

A cover for Hatchworks' guide on "2024 Generative AI Techniques, Tools, and Trends".

Generative AI is transforming industries and redefining how we create and build products, as evidenced by the projected growth of the AI market to an astounding $110.8 billion by 2030. 

At HatchWorks, we embrace new technologies to deliver top-notch custom software development services. That’s why we’re harnessing generative AI to build digital products that surpass customer expectations and redefine the future of digital product development.

Are you ready to unlock the potential of generative AI? Let’s dive in!

Exploring generative AI algorithms

Artificial intelligence has come a long way in recent years, with advances in deep learning propelling generative AI adoption at unprecedented rates. For example, ChatGPT, an OpenAI language marvel, impressively hit 1 million users in just 5 days, while its sibling, DALL-E, which generates images, reached the same milestone in a mere 2.5 months.

In comparison, other innovative products outside the AI category took significantly longer to gain traction. Facebook, for instance, reached 1 million users in 10 months, and it took Netflix 3.5 years to achieve the same milestone.

A chart showing the adoption rate of three AI tools, ChatGPT, DALL-E, and GitHub CoPilot, over time. The chart displays the percentage of users adopting each tool, with ChatGPT having the fastest adoption rate.

At its core, generative AI is powered by deep learning algorithms that analyze vast amounts of data to make predictions, generate content, and even create new data.

Let’s dive into some of the most influential algorithms and see how they’re shaping the future of digital innovation.

Deep learning

One of the most striking examples of deep learning’s influence on generative AI is natural language text generation. By processing and understanding the structure, syntax, and semantics of human language, these advanced algorithms generate coherent, contextually appropriate, and sometimes creative text that seems to have been written by a human.

This ChatGPT meme, featuring Will Smith from the movie I, Robot, humorously pokes fun at the challenge of creating truly original content.

Take ChatGPT, for instance. This large language model is a prime illustration of deep learning’s potential in crafting human-like text. Its rapid adoption showcases the incredible demand for AI tools that can seamlessly interact, communicate, and generate content with an increasingly human-like touch, revolutionizing the way we work, learn, and connect with one another.

Moreover, ChatGPT is transforming our relationship with search engines, as it fosters more declarative and conversational interactions, making the process of seeking information more intuitive, efficient, and engaging.

OpenAI’s GPT-4 has made remarkable improvements over its predecessor, GPT-3.5, boasting higher scores on nearly every academic and professional exam it was tested on, including scoring around the 90th percentile on the bar exam. Additionally, GPT-4 can now accept images as inputs, expanding its potential applications.

Another example is the recent formation of Google DeepMind, which brings the Google Brain and DeepMind teams together to responsibly accelerate AI development. This dynamic partnership is set to conquer the toughest scientific and engineering obstacles while paving the way for AI to revolutionize industries and propel science forward.

Reinforcement learning

Taking a step further, reinforcement learning brings another dimension to generative AI. This approach involves training algorithms through trial and error, allowing them to learn from their mistakes and improve their performance over time.

Reinforcement learning has found numerous applications in generative AI across various industries, unlocking innovative possibilities and transforming how we approach problems.

“These models have seen so much data… that by the time that they’re applied to small tasks, they can drastically outperform a model that was only trained on just a few data points.”

The AI toolbox

When it comes to selecting the right algorithm for a specific use case, it’s essential to consider the strengths and weaknesses of various AI tools.

Some popular generative AI algorithms include Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and Transformer models like GPT-4.

  • GANs excel at generating realistic images and can be used for tasks like image-to-image translation and generating artwork.
  • VAEs, on the other hand, are particularly well-suited for data compression and can be applied in areas like anomaly detection and image denoising.
  • Transformer models have been a game-changer for natural language processing, powering state-of-the-art text generation, translation, and summarization systems.

Armed with the knowledge of these algorithms, you’re ready to explore their creative applications and unleash their potential.
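To make the VAE idea above more concrete, here is a minimal sketch in PyTorch. It is an illustrative toy, not a production model or anything HatchWorks ships: the layer sizes, latent dimension, and the random stand-in batch are all assumptions chosen for brevity. The essential moves are an encoder that maps data to a latent distribution, a sampled latent code that is decoded back, and a loss that balances reconstruction error against a KL penalty.

```python
# Toy VAE sketch in PyTorch (illustrative assumptions only: tiny layers, no real dataset).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    def __init__(self, data_dim=784, latent_dim=8):
        super().__init__()
        self.enc = nn.Linear(data_dim, 64)        # encoder body
        self.mu = nn.Linear(64, latent_dim)       # mean of the latent distribution
        self.logvar = nn.Linear(64, latent_dim)   # log-variance of the latent distribution
        self.dec = nn.Sequential(                 # decoder: latent code back to data space
            nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim)
        )

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction error plus KL divergence to a standard normal prior.
    recon_loss = F.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + kl

# Usage sketch: one gradient step on a random batch standing in for real data.
model = TinyVAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(32, 784)                           # stand-in batch (e.g., flattened 28x28 images)
recon, mu, logvar = model(x)
loss = vae_loss(recon, x, mu, logvar)
opt.zero_grad(); loss.backward(); opt.step()
```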

Unleashing creativity with generative AI

Across various domains, generative AI is sparking a creative revolution.

Music generation

While it’s unlikely to replace human creativity entirely, generative AI is making waves in the music composition world. It serves as a powerful tool for enhancing the creative process. By generating unique melodies, harmonies, and rhythms that adhere to given text descriptions, AI models like MusicLM inspire musicians to explore new ideas and push the boundaries of their art.

Take, for example, the recent news of a trending song called “Heart on My Sleeve,” written and produced by TikTok user ghostwriter977. The vocals for the song were generated by artificial intelligence and made to sound like Canadian musicians Drake and The Weeknd.

Despite its growing popularity, Universal Music Group (UMG) requested the removal of the song from various music platforms and called for a block on AI using copyrighted songs for training purposes. This incident highlights the ongoing debate surrounding the ethical and legal implications of AI-generated content in creative industries.

Text generation

Language models like GPT and BERT are revolutionizing content creation and automation. With the power of Natural Language Processing (NLP) techniques, AI models can generate coherent and contextually relevant text for a wide range of applications.

Text prompts can be used as inputs to guide AI-generated text, ensuring the output aligns with desired context and themes. This technology is not only automating content creation but also helping writers overcome writer’s block and enrich their writing.

These models can even be prompted to generate code. AI-generated code snippets and templates are streamlining the development process for companies, allowing them to more rapidly prototype and build high-quality software solutions for their clients.

A screenshot of GitHub Copilot, showing a code editor with a suggestion for a code snippet.
Introducing Copilot, GitHub’s AI-powered code assistant! Copilot helps developers write better code faster by suggesting relevant code snippets based on the context of their code.

One notable example is GitHub Copilot, an AI-powered code assistant developed by GitHub and OpenAI. It integrates with popular integrated development environments (IDEs) like Visual Studio Code, Neovim, and JetBrains, offering auto-completion of code in languages such as Python, JavaScript, TypeScript, Ruby, and Go.

By leveraging the capabilities of OpenAI Codex, GitHub Copilot makes it easier for developers to navigate unfamiliar coding frameworks and languages while reducing the time spent reading documentation. Furthermore, a research study conducted by the GitHub Next team revealed that GitHub Copilot significantly impacts developers’ productivity and happiness. Surveying over 2,000 developers, the study found that between 60% and 75% of users feel more fulfilled, less frustrated, and able to focus on more satisfying work.
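To ground the prompt-in, text-out pattern in something runnable, here is a small, hedged sketch using the open-source Hugging Face transformers library. The model name ("gpt2") and the prompt are stand-in assumptions for illustration; this is not the model behind Copilot or ChatGPT, just a small public checkpoint that shows the same mechanics.

```python
# Minimal text-generation sketch with the Hugging Face transformers pipeline.
# Assumptions: the small public "gpt2" checkpoint and an arbitrary prompt, chosen only for illustration.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Generative AI is changing software development because"
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)

print(outputs[0]["generated_text"])
```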

Image generation

A campaign image for our podcast featuring two muscular men in a gym joking about the podcast's name. The image was generated using Midjourney AI.
Thanks to Midjourney AI, we were able to create this hilarious campaign image featuring two muscle-bound guys promoting our podcast, Built Right.

AI-generated art is transforming the creative and design industry by enabling artists and designers to create unique visuals using image generators. From photorealistic images generated using GANs to medical images for research and diagnostic purposes, generative AI is revolutionizing the world of visual content.

According to Everypixel, “More than 15 billion images were created using text-to-image algorithms since last year. To put this in perspective, it took photographers 150 years, from the first photograph taken in 1826 until 1975, to reach the 15 billion mark.” This staggering statistic underscores the transformative power and rapid evolution of AI in the realm of image generation.

At HatchWorks, we’re all about diving into the exciting world of Generative AI, and we wanted our blog to really capture that energy. So our fantastic marketing designer, Luis Leiva, opted for generative design to whip up a unique banner image for our blog post.

We fed the Midjourney AI model this prompt: “A Brave New World of Deep Learning, Reinforcement Learning, and Algorithmic Innovation, vector, illustration, happy, vibrant, teal, orange.”


Generative AI isn’t just about number-crunching and problem-solving; it’s also about unleashing creative flair. We hope to inspire you to ponder the broader applications of generative AI and explore the endless possibilities it offers in both practical and artistic realms.

Some more groundbreaking applications of image generation include:

Personalized marketing

Generative AI can create tailored visuals for marketing campaigns. Platforms such as Jasper enable teams to generate personalized and brand-specific content at a much faster pace, leading to a tenfold increase in productivity. By leveraging AI-powered tools, businesses can craft captivating social media posts, advertisements, and marketing copy, considerably boosting the efficacy of their marketing strategies while maintaining a more targeted approach.

Icon and Logo Design

Having unique and tailored branding elements, such as icons and logos, is essential for products to stand out. AI-generated icons and logos offer an innovative solution to this challenge.

Transforming the world of icon and logo design, numerous new tools utilize AI-driven innovation to elevate the creative process. Magician for Figma uses AI to generate unique icons from text inputs, streamlining the icon creation process. Adobe Firefly focuses on providing creators with an infinite range of generative AI models for content creation.

By utilizing these cutting-edge tools, designers can effortlessly generate custom vectors, brushes, textures, and branding elements, leading to more distinctive and memorable designs.

Data Visualization and Analysis

AI-generated charts, graphs, and other visual representations of complex data sets enable companies to present information in a clear, engaging, and insightful manner, enhancing their product’s user experience.

Tools like Ask Viable could play a crucial role in this process, offering AI-powered analysis that turns unstructured qualitative data and feedback into actionable insights, allowing businesses to make data-driven decisions and optimize their performance.

User Interface Design

AI-generated interface mockups and dynamic design elements are revolutionizing the way companies create intuitive and visually appealing user experiences for their applications.

Tools like Genius are at the cutting edge of this transformation, offering an AI design companion in Figma that understands what you’re designing and makes suggestions using components from your design system. These AI-driven solutions allow designers to explore a multitude of ideas, iterate more efficiently, and ultimately deliver more engaging user interfaces.

Tips for integrating generative AI into your projects

To make the most of generative AI in your projects, it’s crucial to understand the best practices for selecting, training, and implementing AI algorithms. Here are some valuable tips to help you navigate the integration process and maximize the benefits of generative AI.

Selecting the Right Algorithm

  • Identify your project goals: Clearly outline the objectives of your project and the desired outcomes before choosing a generative AI algorithm. This will help you determine which algorithm best aligns with your goals.
  • Consider your data: Assess the type and amount of data you have available. Certain algorithms may require large datasets, while others can work effectively with smaller amounts of data.
  • Evaluate algorithm performance: Research the performance of various generative AI algorithms and compare their success in generating high-quality, relevant content. Select the one that best meets your quality and creativity requirements.

Incorporating generative AI into your workflows

  • Prepare your data: Ensure that your data is clean, well-structured, and diverse to provide a solid foundation for training your generative AI model.
  • Seamless integration: Design your workflows to accommodate generative AI output, making it easy to incorporate generated content into your projects.
  • Human-AI collaboration: Emphasize the importance of human-AI collaboration, using AI as a tool to enhance creativity and productivity rather than replace human input.
  • Iterate and refine: Continuously test and refine your generative AI implementations, gathering feedback from users and stakeholders to improve the overall quality and effectiveness of AI-generated content.

Assessing AI output quality and effectiveness

  • Establish quality metrics: Define clear metrics to measure the quality and effectiveness of your generative AI output. This can include factors such as coherence, relevance, and creativity.
  • Regular evaluation: Periodically evaluate the performance of your generative AI models against your established quality metrics and make improvements as needed.
  • Seek user feedback: Gather feedback from end-users and other stakeholders to understand how well your generative AI output meets their needs and expectations. Use this feedback to refine your AI models and workflows further.

By following these tips, you can successfully integrate generative AI into your projects and make the most of this powerful technology.

📌 For an in-depth exploration of how generative AI is revolutionizing various sectors, read our comprehensive report on Generative AI Use Cases Across Industries.

To see how HatchWorks is leading the way in AI-powered software development – visit our Generative-Driven Development™ page now.

Navigating the ethical and security challenges of generative AI

Generative AI, like any powerful technology, brings a set of ethical and security challenges that must be addressed proactively to ensure responsible deployment. Here, we’ll provide guidance on how to navigate these challenges effectively and maximize the positive impact of generative AI.

First, address the potential misuse of generative AI by developing and enforcing strict guidelines for its ethical use within your organization. Encourage a culture of accountability and monitor generative AI usage in your projects to prevent misuse.

Secondly, mitigate the risks of biased or uncontrolled AI-generated content by training AI models on diverse and representative datasets. Be aware that earlier models like GPT-3 have demonstrated biases related to gender, race, and religion, which can influence the output. Implement mechanisms to detect and mitigate harmful or offensive content and educate your team and end-users about potential biases and limitations, promoting responsible usage and critical evaluation.

Protection against the malicious use of generative AI is essential. Implement robust security measures, monitor AI-generated content for signs of malicious activity, and collaborate with industry partners and stakeholders to develop and promote best practices for mitigating malicious use.

In addition to security measures, prioritize transparency in your generative AI deployments. Openly communicate the use of AI-generated content and the methodologies behind it. Stay informed about the latest ethical and security developments in the generative AI field and adapt your strategies and practices accordingly. Foster a strong culture of responsibility and ethical awareness within your organization.

Lastly, invest in education and training. Provide your team members with education on generative AI technology, its potential risks, and ethical considerations, fostering a culture of informed responsibility. Encourage continuous learning to stay updated on the latest advances in generative AI and its ethical and security implications. Contribute to public awareness and understanding of generative AI, promoting informed decision-making and responsible use.

It’s predicted that AI could impact 300 million full-time jobs worldwide, so it is crucial to emphasize responsible and ethical use. By proactively addressing these challenges, you can ensure the responsible and beneficial use of generative AI in your projects, leading to a more innovative, efficient, and ethical digital product development process.

Frequently Asked Questions about generative AI

What is generative AI?

Generative AI is a form of artificial intelligence that uses algorithms to create new data, content, or predictions based on existing data. Unlike discriminative AI, which focuses on classifying and predicting outcomes, generative AI generates new instances, such as images, text, or music, based on learned patterns and structures.

How does generative AI relate to machine learning?

Generative AI is a subfield of machine learning, which is an overarching discipline that deals with teaching computers to learn and make decisions based on data. Generative AI specifically focuses on the creation of new content by learning from existing data.

What is a Generative Adversarial Network (GAN)?

A Generative Adversarial Network (GAN) is a type of generative AI model that consists of two neural networks, a generator and a discriminator, that work together in a competitive manner. The generator creates new content, while the discriminator evaluates the content’s quality and authenticity.
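As a rough illustration of that generator-versus-discriminator interplay, here is a toy GAN training loop in PyTorch. The "real" data is just a shifted Gaussian and the network sizes are arbitrary assumptions; it sketches the adversarial pattern rather than a model capable of producing images.

```python
# Toy GAN loop in PyTorch: a generator learns to mimic 2-D "real" points (a shifted Gaussian),
# while a discriminator learns to tell real from generated. All sizes are illustrative assumptions.
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 2, 64

generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
discriminator = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(batch, data_dim) * 0.5 + 2.0      # stand-in "real" data
    fake = generator(torch.randn(batch, latent_dim))      # generated samples

    # Discriminator step: label real as 1, fake as 0.
    d_loss = (bce(discriminator(real), torch.ones(batch, 1))
              + bce(discriminator(fake.detach()), torch.zeros(batch, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator label fakes as real.
    g_loss = bce(discriminator(fake), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```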

How can generative AI be used in product design?

Generative AI can explore a vast range of design possibilities, optimize solutions, and help designers create innovative, functional, and aesthetically appealing products.

Discover how our Generative-Driven Development services can transform your business by visiting https://hatchworks.com/generative-driven-development/.

How can businesses use generative AI?

Businesses can use generative AI to automate content generation, optimize decision-making, and create personalized experiences for customers, ultimately improving efficiency and reducing costs.

What are the limitations of generative AI?

Some limitations of generative AI include the need for large amounts of training data, high computational resources, potential bias in generated content, and difficulty in controlling the generated output. Additionally, generative AI models may struggle to understand and generate content that falls outside the scope of their training data.

Will generative AI replace human creativity?

No. While generative AI can produce impressive results, it is not a replacement for human creativity. AI-generated content is based on patterns learned from existing data, meaning it cannot replicate the full range of human emotions, experiences, or intuition that drive creativity.

Summary

Generative AI has immense potential to revolutionize how we create, design, and innovate in the digital realm. By harnessing the power of AI tools and technologies, we can unlock new creative possibilities and enhance the quality and efficiency of our projects.

Balancing ethical concerns with responsible use, we can ensure that generative AI contributes to a more vibrant and creative digital landscape while mitigating its potential negative impact on the job market.

Hatchworks: Your US-Based Nearshore Software Development Partner

At HatchWorks, we understand the importance of leveraging generative AI responsibly and ethically.

As a software development partner, we utilize the power of generative AI to build innovative digital products, tailored to your industry, that meet the unique needs and expectations of our clients.

Reach out to us to learn more about how we can help you harness the potential of generative AI for your projects.

The Best of Built Right: A Season 1 Lookback
https://hatchworks.com/built-right/the-best-of-built-right-season-1/ (published 4 Dec 2023)


As the year draws to a close, so does season one of the Built Right podcast. In this podcast, we’ve covered a lot of ground – from the rise of generative AI to the importance of good user experience design.

We wanted to round off season one with a special episode that celebrates all the brilliant insights, breakthrough ideas, and shared wisdom from our guests. In this episode, we look back at our top ten moments from the podcast. While it certainly wasn’t easy to pick just ten, these are some of our standout insights from our guests.

We’ll be back next year with a brand new season, so stay tuned for updates. In the meantime, keep reading to see which moments were our favorite or listen to the episode in full below.

10. The creative element of generative AI

We had a great conversation with Jason Schlachter, Founder of AI Empowerment Group and Host of the We Wonder podcast, in episode 8 about the creative element of AI. Creativity has always been hard to define, and with the abilities of generative AI, it leads to questions like, “what is art?”

Jason explores how generative AI can be used in different ways in the product development world and how to vet winning use cases.

Check out episode 8 with Jason: Generative AI Playbook: How to Identify and Vet Winning Use Cases

9. Why you need a new approach to modernization

For our fourth episode of the podcast, we sat down with HatchWorks’ own Joseph Misemer, Director of Solutions Consulting, to discuss why the MVP approach doesn’t always work. When modernizing a solution rather than creating a new one, there’s no need to start with the MVP approach.

In this episode clip, Joseph gets into why starting from scratch to modernize a solution might upset users who already love your product.

Watch episode 4: The MVP Trap: Why You Need a New Approach to Modernization with Joseph Misemer

8. Evaluating the value of generative AI

With so many new AI tools on the market, it’s important to be picky when choosing what to use. So, remember to ask yourself, does this provide true value?

In episode 10, HatchWorks’ Andy Silvestri and host Matt Paige discuss different ways generative AI could change how we think about UX and UI design. For this clip, Matt likens the AI wave to the dot-com boom, where the concept of value was sometimes ignored in favor of following the trend.

Listen to episode 10 in full: 5 Ways Generative AI Will Change the Way You Think About UX and UI Design

7. Carrying the weight of product development (and sparing your customers)

For our seventh pick, we revisited episode 9 with Arda Bulut, CTO and Co-Founder of HockeyStack. Arda shares his thoughts on how to build and scale a product while keeping the customer experience front of mind.

In this clip, he explains why it’s often either the developers or the users who shoulder the difficulties when building and using a product. But in Arda’s case, he always prioritizes the user’s experience, even if it makes his work harder. And we think that’s a great way to think about product development!

Listen to episode 9: Listen, Prioritize, and Scale to Build a Winning Product

6. Testing quality products

If you’ve ever heard the phrase “shift left” when it comes to testing digital products, you may find episode 7 an interesting listen. Erika Chestnut, Head of Quality at Realtor.com, explored what it takes to build a high-quality product, and why testing is such a crucial point in development.

In the clip we picked, she explains that when a product is deemed low quality, it often reflects poor-quality testing. But she believes phrases like “shift left” are often buzzwords, when instead, we really need to dig into the impact of what that means when testing.

Learn more about testing for quality in episode 7: Quality-Driven Product Development with Realtor.com’s Erika Chestnut.

5. Generative AI and its impact on UX design

Andy Silvestri, Director of Product Design at HatchWorks, explored how generative AI is impacting the world of UX design in episode 10.

We picked a clip with Andy explaining how generative AI is influencing design practices and why it could open more doors for developers to explore new concepts in the early ideation stages.

Check out episode 10: 5 Ways Generative AI Will Change the Way You Think About UX and UI Design

4. Software is a must, not a nice-to-have

In our very first Built Right episode, we welcomed HatchWorks’ own CEO to explore what a “built right mindset” is and how it should influence every stage of development. Brandon Powell breaks down the top three questions everyone should ask before building a product: is it valuable? Is it viable? And is it feasible?

In the clip we selected, Brandon explains how software has already shaped every industry and why digital tools aren’t just a nice-to-have. They’re a must in today’s world.

Look back at episode 1: The Built Right Mindset

3. Developers want to take ownership of the product they’re working on

A good leader needs to be able to manage change effectively and solve the adaptive challenges that come with it. To talk more about that, Ebenezer Ikonne, AVP Product & Engineering at Cox Automotive, joined the podcast for episode 14 to break down six adaptive leader behaviors to adopt and why.

For the clip we picked, Ebenezer says that we need to “give the work back to the people.” Leaders need to let those who are working on the product have greater ownership, and sometimes that means stepping back.

Catch episode 14: The 6 Adaptive Leader Behaviors with Ebenezer Ikonne

2. The human brain vs. AI: Which is more efficient?

With everyone sharing their thoughts on generative AI, we wanted to dive more into the science behind it in episode 17. We invited Nikolaos Vasiloglou, Vice President of Research ML at RelationalAI, to give us the PhD data scientist perspective.

Nikolaos explained why he disagrees with comparisons between the human brain and AI systems – and why the human brain is ultimately more efficient and effective in many ways.

Learn more from Nikolaos in episode 17: How Generative AI Works, as Told by a PhD Data Scientist

1. Could AI help us become “more human”?

For our top pick, we look back at episode 15 with Brennan McEachran, CEO and Co-Founder of Hypercontext. In this episode, Brennan spoke about the AI-EQ connection and how emotionally intelligent AI could help teams boost performance and create faster, more streamlined processes.

In our top clip, he explains why, despite the fears of AI, it could help us refocus on more human-centered tasks.

You can listen to episode 15 here: The AI-EQ Connection: How Emotionally Intelligent AI is Reshaping Management

After so many fantastic episodes, it was tough to pick just ten clips! If any of the above piqued your interest, you can revisit any of the episodes from season one on our website.

For our listeners, we want to share a big thanks from the HatchWorks team. We’ll be back after a short winter break for season two, with more great episodes and guests to talk about building products the right way.

Explore the future of software creation with HatchWorks’ Generative-Driven Development™.

Leveraging advanced AI technologies, we’re setting new standards in the industry.

See how our approach can revolutionize your development process.

How Generative AI Works, as Told by a PhD Data Scientist
https://hatchworks.com/built-right/how-generative-ai-works-as-told-by-a-phd-data-scientist/ (published 14 Nov 2023)


On the Built Right podcast, generative AI is always on the agenda. But we thought it was time to hear the thoughts, opinions and predictions of a PhD data scientist and take a deep dive into the science behind it.  

We invited Nikolaos Vasiloglou, Vice President of Research ML at RelationalAI, to share his thoughts on how far generative AI will advance, give us an in-depth look at how knowledge graphs work and explain how AI will affect: 

  • The job market 
  • The future of reading 
  • The social media landscape 

 

Plus, he explores the main differences between generative AI and regular AI. 

Continue reading for the top takeaways or listen to the podcast episode for more. 

The difference between generative AI and regular AI 

The term ‘generative AI’ is everywhere. But what does it really mean, and how is it different from regular AI?

For many years, they were separated by the depths of their language models. As things continued to advance, people found themselves with powerful models they weren’t sure how to scale. 

Out of this recent revolution emerged OpenAI. They began feeding data into a transformer (a deep learning architecture first proposed in 2017) and created a system that can usefully predict the next word you will type.
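That "predict the next word" idea is easy to see in code. Below is a small, hedged sketch using the open-source Hugging Face transformers library with the public "gpt2" checkpoint, an assumption made purely for illustration (it is not the model OpenAI ships): it runs a prompt through the model and prints the five tokens the model considers most likely to come next.

```python
# Illustrative next-token prediction; "gpt2" is an assumed stand-in model, not a production system.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The quick brown fox jumps over the", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits          # shape: (batch, sequence_length, vocab_size)

next_token_logits = logits[0, -1]            # scores for whatever token would come next
top5 = torch.topk(next_token_logits, k=5)
print([tokenizer.decode(int(i)) for i in top5.indices])
```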

Nikolaos explains that the main difference between generative and traditional AI is the focus on language. Language is the primary determining factor in human intelligence, which explains why language-based AI products are among the most used right now. 

Will AI beat the human brain? 

As generative AI progresses, people continue to ask questions around its limits. So, will AI ever match or exceed the capabilities of the human brain? 

Nikolaos believes AI is a “great assistant” and can do plenty of things more quickly and more efficiently than humans.  

He explains how, with every technological advancement, there will be fewer jobs for engineers. With each passing year, major companies in every industry rely on fewer humans to take on work as the capabilities of technologies progress. 

However, he does think there’s a long way to go until the robots revolt! Nikolaos says there are plenty of things the human brain can do that won’t be challenged by AI any time soon. 

For example, a human being can eat a pizza while performing complicated mathematical computations. If using GPT, you would need plenty of power to perform equivalent tasks. Humans are very energy-efficient and can use signals that take milliseconds to transmit, a much faster process than AI.

What is a knowledge graph and how does it work? 

A knowledge graph is a collection of interlinked descriptions of entities, used to enable data integration, sharing and analytics. 

Nikolaos describes it as “the language both humans and machines understand” and says its bi-directional relationship with language models provides many benefits.

Once you have a knowledge graph, you can see considerable ROI and excellent business results but, historically, there was always one caveat – they were challenging to build. An engineer would have to go through databases, finding the correct entities, relations and flows. 

But with the dawn of AI language models, things became much easier. With human supervision, the language model can speed up this menial process. 

All-in-all, Nikolaos says knowledge graphs always provide: 

  • The correct knowledge 
  • The ability to add/remove knowledge based on relevance 

 

In other words, it’s ideal for keeping an organization’s knowledge accurate and up to date.
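To picture what that looks like in practice, here is a toy sketch of a knowledge graph as a set of (subject, relation, object) facts. The entities and relations are invented for this example; a production system like RelationalAI’s is far richer, but the idea of knowledge that both people and programs can read is the same.

```python
# Toy illustration of a knowledge graph as (subject, relation, object) triples.
# The entities and relations here are made up for the example.
triples = [
    ("Nik",     "lives_in",   "Atlanta"),
    ("Nik",     "works_for",  "RelationalAI"),
    ("Atlanta", "located_in", "Georgia"),
]

def query(subject=None, relation=None, obj=None):
    """Return every stored fact matching the given pattern (None acts as a wildcard)."""
    return [
        (s, r, o) for (s, r, o) in triples
        if (subject is None or s == subject)
        and (relation is None or r == relation)
        and (obj is None or o == obj)
    ]

print(query(subject="Nik"))          # every fact we have about Nik
print(query(relation="located_in"))  # every containment fact

# Adding or retracting knowledge is just adding or removing a triple,
# which is what makes the graph easy to keep correct and current.
triples.append(("RelationalAI", "partners_with", "Snowflake"))
```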

The future of reading and learning 

AI is changing the way many people read and learn. According to Nikolaos, many people avoid reading books because the information they actually need often spans only a few pages.

But what could this mean for the future of publishing?  

He says publishers could take advantage of this shift and make small sections of books publicly available, so that users can consume what’s relevant to them. 

This shift can be compared to streaming, where users select specific songs, rather than buying the whole album. 

Social media and its reliance on AI 

From Facebook and Twitter (X) to Instagram and TikTok, the content is always changing. 

Now, Nikolaos believes generative AI will form the basis of the social networks and video platforms of the future.

Platforms such as TikTok already deliver content to us, based on what we watch, but Nikolaos says AI could actually create the content too. 

For more insights and predictions on generative AI, find episode 17 of Built Right on your favorite podcast platform or the HatchWorks website.

Join the AI revolution in software development with HatchWorks. Our Generative-Driven Development™ leverages cutting-edge AI to optimize your software projects.

Matt (00:04.818)

Welcome, Built Right listeners. We have a special one for you today. Our guest is a PhD data scientist, and he’s gonna help us make sense of generative AI and how it all works. Our guest today is Nikolaos Vasiloglou, and Nik, I probably butchered the name. Nik the Greek, I think, is what you’re also referred to as. And he is the VP of Research ML at RelationalAI.

 

Like I said, master’s and PhD in electrical and computer engineering here from Georgia Tech and has founded several companies, has worked at companies like LogiBox, Google, Symantec, and even helped design some of Georgia Tech’s executive education programs on leveraging the power of data. But welcome to the show, Nik.

 

Nik (00:50.847)

Nice to meet you, Matt. Thanks for hosting me.

 

Matt (00:54.358)

Yeah, excited to have you on. And RelationalAI, for those that don’t know, is the world’s fastest, most scalable, expressive relational knowledge graph management system combining learning and reasoning. And for those of you thinking, what the heck is a knowledge graph, we will get into that. Plus we’ll get into how generative AI actually works, as told by a real PhD data scientist who’s been doing this stuff way before ChatGPT was even a thought in somebody’s mind. Plus stick around.

 

we got Nik’s take on what are gonna be the most interesting use cases with generative AI in the future. And he’s saving these, I haven’t heard these either, so I’ll hear them for the first time, so really excited to get into these. But Nik, let’s start here. What is the difference between, we hear generative AI, it’s the hot topic right now, but generative AI and just regular AI, like what is the difference? What makes generative AI special and different?

 

Nik (01:51.867)

It’s a very interesting question. You know, for many years, the emphasis, the way that we were separating, you know, what do you call it, machine learning or AI was on the depth of the models. Like when I started my PhD, we were working on something that we would call like shallow models. Basically, you can think about it, looking at some statistics, you know, the decision tree was the

 

the state of the art, which meant, okay, I have like this feature. If it is it greater than this value, then I have to take the other feature and the other feature and come up with a decision. That’s something that everyone can understand. Then deep learning was the next revolution somewhere in the 2010s. It started and it started doing, you know, more, let’s say complicated stuff that I mean, people are still trying to find out why it’s working. They cannot.

 

understand exactly the math around it. And then the next revolution was, so we had these models that were pretty powerful, but, uh, we didn’t know how to scale them. We didn’t know how far they can go. And, uh, and that was the revolution that basically OpenAI brought, that they realized that, um, you can take this new cool thing called the transformer where you can feed it with a lot of data and do this cool thing

 

where you are trying to predict the next word and basically come up with what we have right now. It took several years and several iterations. But I think the difference between what we used to call AI and what we call AI right now is the focus on the language. I mean, if you had read about Chomsky and others, a lot of people considered that

 

the human intelligence has to do with our ability to form languages and communicate. I mean, you might have heard that, you might remember as a student, what makes humans different than other animals, the human brain is the ability to form languages. And I think the focus on that made the big difference in what we have right now.

 

Nik (04:15.059)

The previous was more like a decision system. Now we’re focusing more on the reasoning side. So I would say this is the shift that we see.

 

Matt (04:18.442)

Mm.

 

Matt (04:22.218)

And that’s, I think part of the interesting aspect of it is, you know, in the past, it’s like the models, they were trained for very specific tasks in a lot of ways. And now you have this concept of these foundation models, which that’s, you know, what a lot of the large language models are built on. But now to your point, it’s, it’s almost kind of like getting to where how the human brain works and it can tackle these disparate types of ideas. And.

 

solutions and things like that. This concept of like a foundation model, what is that, how does that start to play into like these concepts of like large language models, LLMs that we hear so much about?

 

Nik (05:01.247)

So let me clear that up first. The foundation models and the language models are basically the same thing. The term ‘foundational models’ was introduced by some Stanford professors. They were trying to kind of, like it happens a lot in science. You build something for something specific and then you realize that it applies to a much broader

 

class of problems, and I think that was the rationale behind renaming language models as foundational models, because they can do the same thing with other types of data, not just text. So you can use that for proteins; you can use it basically for whatever represents a sequence. Okay, so

 

As I said, in the past, a lot of effort was put on collecting labels and do what we call supervised learning. The paradigm shift here was in what we call self-supervised learning. That was a big, big plus, something that brought us here. This idea that, you know, just take a corpus of text.

 

and try to predict the next word. And if you’re trying to predict the next word, you’re basically going to find out the underlying knowledge and ingest it in a way that you can make it useful. Of course, that’s what brought us up to 2020. That was the GPT-3, where we scaled. But there was another leap, ChatGPT,

 

that in the end it did require some labeling because you had like the human in the loop. Okay, let’s, I don’t know, it’s not exactly labeling but you can think about it as labeling because we have a human giving feedback. And then, you know, that brought us to ChatGPT. Now the heart of language models or foundational models is something called the transformer.

 

Nik (07:24.367)

It was invented in 2017 by Google, actually. It was an interesting race. OpenAI had, there was like a small feud between OpenAI and Google. So OpenAI came with a model. All of them were language models. Everybody was trying to solve the same problem. They came up with something called ELMo. And Google came back with BERT.

 

Matt (07:54.057)

Mm-hmm.

 

Nik (07:54.131)

from the cartoon, from the Muppet Show, I think. And then, so BERT was based on the Transformer. Then OpenAI realized that actually BERT is better. That’s an interesting lesson. They didn’t really stick, oh, this is our technology, we’ll invest in that. They saw that the Transformer was a better architecture, but then they took BERT and they actually cut it in half.

 

OK, and they picked, by accident, let me put it that way. Google invented the transformer, which had an encoder and a decoder, and BERT was based on the encoder architecture. They took that half. But then OpenAI came and said, no, we’re going to work on the other half, which is the decoder, predictive text. And they spent three years. They did see that the more data you pour in, the better it becomes. OK.

 

That was their bet. And they ended up with GPT-2 and 3, GPT-1 to 3, the sequence, then 3.5 and 4, and later on ChatGPT. And it was kind of like an interesting race where things basically started from Google, but OpenAI ended up being the leader over there. The transformer is nothing else.

 

Matt (09:17.554)

And they built everything they built was open source, right? Everything Google built. So they were able to. Yeah.

 

Nik (09:22.847)

You know, everything is actually open. So I think up to GPT, even GPT-3, there was a published paper. It’s very hard if you believe that you’re going to get a secret source that nobody else knows. I’ve never seen that playing in machine learning. Okay, because scientists want to publish, they want to share knowledge. I think as the models started to become bigger and bigger, they didn’t, you know, with GPT-3, I don’t think they ever opened the whole model, the actual model.

 

Matt (09:39.382)

Yeah.

 

Nik (09:52.467)

But they gave enough information about how to train it. There’s always some tricks that over time, even if somebody doesn’t tell you as you’re experimenting, they’re going to become public. So yeah, that was never the issue. I don’t think. Yeah, they are a little bit cryptic about after 3.5 and such details. But in my opinion, they’re.

 

Matt (10:17.91)

Yeah.

 

Nik (10:22.675)

The secret sauce over there is not exactly on the model, but on how you scale the serving of the model, we’re gonna talk about that later. This is the secret weapon of OpenAI, not necessarily the architecture, but the engineering behind that.

 

Matt (10:40.162)

Nice. Yeah. Let’s, let’s keep going on the transformer side. Cause like getting under how these, you know, GPTs work. Basically you mentioned that it’s serving up the next word, the next word. It’s not looking at it like a whole entire sentence, right? It’s these, this concept of tokens, but how is it like actually thinking through that and structure of language and something you think a computer wouldn’t be able to do it’s now doing very well.

 

Nik (11:07.163)

Yes, first of all, it’s always this, as I said, the transformer has this encoder decoder architecture, which means that there’s one part that looks into two directions back and forth. Like as it’s processing, it looks both ways, like this token, you know, is affected by the other tokens. But this token, the middle is also affected by the ones before and after them, what it’s going to be.
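The two halves differ in what each position is allowed to look at, and the pattern is easy to picture with a small mask matrix. This is my illustration in PyTorch, not code discussed in the episode, and the example tokens are made up.

```python
# The decoder half masks out future positions: each token may attend only to itself
# and the tokens before it. An encoder like BERT uses no such mask, so every token
# can look both backward and forward. Illustrative sketch using PyTorch.
import torch

tokens = ["Nik", "lives", "in", "Atlanta", "and"]
T = len(tokens)

causal_mask = torch.tril(torch.ones(T, T, dtype=torch.bool))  # decoder-style
full_mask = torch.ones(T, T, dtype=torch.bool)                # encoder-style

print(causal_mask.int())
# Row 2 ("in") reads [1, 1, 1, 0, 0]: it sees "Nik", "lives", and itself,
# but not "Atlanta" or "and", which is why a decoder can only predict forward.
```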

 

Matt (11:12.972)

Mm-hmm.

 

Matt (11:17.879)

Yeah.

 

Nik (11:31.991)

So that’s the encoder. In the decoder architecture, you’re only looking back, because you’re not looking at the future. Okay. We can talk more into it. There’s a lot of papers and a lot of tutorials that actually explain that. It’s not always easy to explain it without graphics here, but the key thing over here is that, um, you know,

 

Let me go a little bit back. The first revolution actually came from Google: Word2Vec, where they realized that if you give me a word and I try to predict that word by looking five words behind me and five words after me, that was a simple thing, like a small window. And I tried to create a vector representation. They realized that I can take words

 

Matt (12:02.412)

Yeah.

 

Nik (12:26.647)

make them as something like continuous vectors, put them in space, draw them in space, and I would realize that, you know, that representation would bring words that are semantically similar together. Okay? And there was this other thing that if I, you know, if Paris is here and France is here and London is here, then I can take the same vector, put it here, I can find, you know, England. So they realized, for example, that…

 

Matt (12:39.071)

Hmm

 

Nik (12:53.519)

If I place all the capitals and all the countries, I can just take the vector that connects the first and the other, and it’s translated to the next one. Or if I take the distance of the vector between man and the woman, take that, then take the word king, add that to the king, it’s going to take me to the queen.

 

Matt (13:19.459)

Hmm

 

Nik (13:20.487)

So basically people started realizing with a simple Word2Vec that, um, you can take words, represent them as vectors. Let’s think about two-dimensional vectors, like on the plane, but it’s not two, it’s like 512. The depth is just, it doesn’t really matter. The concept is the same, that the distances in space, the way that they’re placed in space has, sorry, semantic meaning.

 

Matt (13:47.736)

Bless you.
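The analogy trick Nik just described is plain vector arithmetic. The three-dimensional vectors below are invented for illustration; real Word2Vec embeddings have hundreds of dimensions, but the computation is the same.

```python
# Toy demo of the word-vector analogy "king - man + woman ≈ queen".
# These 3-dimensional vectors are invented for illustration only.
import numpy as np

vectors = {
    "man":   np.array([1.0, 0.0, 0.2]),
    "woman": np.array([1.0, 1.0, 0.2]),
    "king":  np.array([1.0, 0.1, 0.9]),
    "queen": np.array([1.0, 1.1, 0.9]),
    "pizza": np.array([0.1, 0.2, 0.1]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

target = vectors["king"] - vectors["man"] + vectors["woman"]
best = max(
    (w for w in vectors if w not in {"king", "man", "woman"}),
    key=lambda w: cosine(target, vectors[w]),
)
print(best)  # "queen": the offset between man and woman carries over to king
```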

 

Nik (13:49.211)

Now, the next problem was this is great, but we do know that words actually change meaning based on their context. Okay. So, uh, yeah. So for example, an example will be, uh, um, you know, when you say, uh, you know, when you say flower, well, let’s, let’s pick a now I’m a little bit stuck, but, um,

 

Matt (13:59.246)

Hmm. What’s an example there?

 

Matt (14:19.318)

You had one about boiling the, a person’s boiling a what? And then if it was like an engineer, it had different context of, that one was kind of interesting.

 

Nik (14:19.435)

because I have like…

 

Nik (14:28.767)

Yeah, you can boil an egg or an engineer is boiling, I don’t know, a substance. But it could be like, yeah. So when you say, for example, the bed, it can be something different when you talk about a house, a bedroom. But if you talk about geology, it means something completely different. So what they realized was that,

 

Matt (14:35.062)

or boiling the ocean, they’re trying to boil the ocean, right? Ha ha ha.

 

Matt (14:49.624)

Hmm

 

Matt (14:53.23)

Flowerbed, yeah.

 

Nik (14:58.635)

That vector that represents the word shouldn’t be universal. It should really depend on the surrounding words. So this vector representation, when the surrounding words are this one, it has to be this, and it will also have different relationships. And it should be different when it’s around different words.
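That idea, the same word getting a different vector depending on its neighbors, can be checked with an off-the-shelf model. This is a minimal sketch of my own, assuming the Hugging Face transformers library, PyTorch, and the public bert-base-uncased checkpoint; the two sentences are just example contexts.

```python
# Compare the contextual vectors a model assigns to the same word in two contexts.
# Illustrative sketch; assumes `transformers`, `torch`, and the public
# "bert-base-uncased" checkpoint are available.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embedding_of(word: str, sentence: str) -> torch.Tensor:
    """Return the contextual vector assigned to `word` inside `sentence`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (tokens, 768)
    word_id = tokenizer.convert_tokens_to_ids(word)
    position = (inputs["input_ids"][0] == word_id).nonzero()[0].item()
    return hidden[position]

a = embedding_of("bed", "She went to sleep in her bed upstairs.")
b = embedding_of("bed", "The river bed was dry after the long summer.")
print(torch.cosine_similarity(a, b, dim=0))  # well below 1.0: same word, different meaning
```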

 

And that was basically ELMo, that was this idea, it’s called contextual embedding. So this vector representation, like they say these two-dimensional representations called them embedding. So this was actually one of the biggest revolution of deep learning that we’re taking discrete entities and we could place them in space as a continuous vector. So we’ll take something that was discrete and putting on a medium.

 

Matt (15:51.502)

Wow.

 

Nik (15:53.631)

that it’s continuous, okay, continuous and multidimensional. So the first idea of the, before the transformer, ELMO, which is an area version of a transformer, the first idea was that, ah, okay, if I see a text, I will be placing these words, you know, on different places in space based on what is around them, okay? And…

 

And the next thing that, so basically what happens is you are taking the words on the first level, you, you look left and right and you create, you know, embeddings, you know, you create, you put them in space, then you take that. And you apply that again and again. So the transformer actually starts in levels, one level after level, the second level, it has multiple, I don’t know exactly the numbers, but it has different levels, so you can think about that as basically a rewriting.

 

Matt (16:30.935)

Mm-hmm.

 

Nik (16:49.191)

Okay, so that’s why it’s called transformers. So you have a sequence of words and you start, you know, rewriting to something else, something else that else, you know, so when people, actually people have done this experiment, they’re taking that the transformer and they decompose it and they see what are these things that you, you know, the transformer does in different levels. And they’ve actually realized that it starts inventing grammatical rules. It starts like identifying.

 

Matt (16:56.983)

Wow.

 

Nik (17:18.127)

what is the subject, what is the object, what is the verb. Okay. He starts identifying that something is an adjective or not an adjective. Um, it starts, you know, taking, you know, words and converting them to something which is a synonym, maybe, you know, something else. And that’s how the reasoning starts. Like I can give you, if I give you a sequence of words, you know,

 

Matt (17:24.503)

Mm-hmm.

 

Nik (17:49.32)

Nik, I don’t know, lives in Atlanta. You know, he knows that Nik, I don’t know, is Greek. Okay, so he can say, the Greek lives in Atlanta and that can affect the fact that, you know, and then you can say, he goes to the store to buy and because now, you know, that he’s Greek, he lives in Atlanta, you say, Fetatsis, for example. Okay, because now he starts.

 

Matt (18:13.912)

I… Yeah.

 

Nik (18:18.127)

the transformer starts taking different paths. Like it starts exploring, you know, what are synonyms? And, you know, if he leaves, it means he goes to the store, he goes to the supermarket, if he lives there. So it starts, all this information is ingested in the transformer after seeing, you know, endless pages of text and, you know, where basically there’s the reasoning paths. Like it does this on your own.

 

Of course, because there’s so many reasoning paths that can happen, sometimes it can hallucinate. Okay, so we can say Nik buys, I don’t know, souvlaki because he is Greek, which is possible. But there might be somewhere else some other information that says Nik hates souvlaki and, you know, the language model doesn’t know that, but it’s a probable event since, you know, Nik is Greek. Anyway, I’m just giving a simple example over there. But that’s kind of like the power of the transformer that at every stage

 

Matt (19:01.673)

Yeah.

 

Nik (19:14.183)

it starts rewriting things again and again and again, and it explores possible, very possible, very likely paths, highly likely paths.

 

Matt (19:23.73)

And correct me if I’m wrong, what you’re talking about here is this kind of the difference in evolution from structured data to unstructured data. Cause in the past we had very like defined tables, columns, associations to things. Is this kind of getting to that concept of unstructured data where it’s like the vectors and

 

Nik (19:39.652)

It does.

 

Well, the problem with the structured systems before was that everything was very discrete. And unless you had seen before the word Nik, OK, followed by that exact word, it was, you know, if you think about all the variances, like Nik spelled with a K, Nikolaos, Nik, Vasiloglou, I don’t know, think about it, all these things. Because now they’re in a continuous space, OK, that’s what makes the difference.

 

It’s possible for the system to create an internal rule if you want, or internal path, about things that are kind of similar. So it doesn’t have to be Nik, it could be Vasiloglu instead of Nik. Or it could be the guy who lives at, I don’t know, say my address, you know. It’s the same thing. Because all these things, I think it’s public, you can find it. Because…

 

Matt (20:22.318)

Hmm.

 

Matt (20:33.258)

Don’t say that.

 

Nik (20:38.187)

All these things that they are semantically equivalent, and before you had to express them in, I don’t know, 100 different discrete things, and you had to see them exactly in that order in order to find a common path. It says, OK, this class of entities that they can be represented with this vector, they are very close, can be followed by this class of entities that can all compress them in a constellation of vectors, can lead me to something else.

 

Matt (21:07.17)

Mm-hmm.

 

Nik (21:07.743)

That’s why you see the language model when, when you go to open AI and you say regenerate what it does, it can generate the same thing, the same reasoning path by using a little bit different words or, you know, words that they’re semantically equivalent. Okay. And now the thing is that it can do that in this incredible memory of like, I don’t know, up to 32,000 tokens. So even if, you know, you’re saying that, you know,

 

Nik is going to buy something from the store and it will predict that it’s FETA. It’s because it has seen, you know, 10,000 tokens before that, you know, Nik is Greek, okay, he’s hungry, I don’t know, he’s having a, I don’t know, a dinner party and get you over there. Okay, so because when it was trained, it has seen sequences that in the span of 10, 20,000 tokens.

 

Matt (21:49.299)

Mm-hmm.

 

Nik (22:05.415)

You know, Nik associated with party food restaurant, you know, leads you to Feta. Okay. So.

 

Matt (22:13.29)

Yeah, and when you say a token, that’s basically either a word or a couple characters, some like small variation that it’s breaking it down into. Is that correct?

 

Nik (22:21.883)

A token is basically a trick. You know, we could have used, it’s like, you know, the thing all these models have about 30,000 tokens. So they realize that we can break all possible, like, you know, with 30,000 tokens, you can, I mean, you can use character level, okay? Every word can be decomposed to characters, but that would have made, that would have made, you know, the,

 

Matt (22:31.521)

Yeah.

 

Matt (22:41.313)

Mm-hmm.

 

Nik (22:50.111)

The language model is extremely big and inefficient. So it’s like a trick because we kind of like trying to find out, it’s a compression that we’re doing. We could have gone with syllables because syllables are also finite and make all the words. Now we said, you know, look, because there are some combinations of letters that they are so frequent, we don’t really need to decompose them all the time. We know exactly what they mean. So it was a clever engineering trick.

 

Matt (22:53.311)

Yeah.

 

Matt (22:59.734)

Mmm.

 

Nik (23:16.411)

It has to do with the language. It’s related to the language. It was like a statistical, a better statistical analysis of the, um, uh, of the language, I mean, to put it that way, if we were inventing a language from scratch, um, we would start with tokens, you know, and maybe not necessarily letters, you know, it’s a.
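The subword trick Nik describes can be sketched with a toy greedy tokenizer. The tiny vocabulary below is hand-picked for the example; real tokenizers such as BPE or WordPiece learn tens of thousands of pieces from a corpus, which is the statistical analysis of the language he mentions.

```python
# Toy greedy subword tokenizer: repeatedly take the longest vocabulary piece that
# matches the start of the remaining text. The vocabulary is hand-picked for the
# example; real tokenizers learn their pieces from data.
def tokenize(text: str, vocab: set[str]) -> list[str]:
    pieces, i = [], 0
    while i < len(text):
        match = max(
            (p for p in vocab if text.startswith(p, i)),
            key=len,
            default=text[i],  # unknown character: fall back to a single char
        )
        pieces.append(match)
        i += len(match)
    return pieces

vocab = {"trans", "form", "ers", "er", "token", "ize", "iz", "ation", "s", " "}
print(tokenize("transformers tokenize tokenization", vocab))
# ['trans', 'form', 'ers', ' ', 'token', 'ize', ' ', 'token', 'iz', 'ation']
```

Character-level tokenization would also work, as Nik notes, but frequent combinations of letters are kept as single pieces so sequences stay short and the model stays efficient.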

 

Matt (23:39.078)

That’s interesting. And so we’ve talked a lot so far about language as the thing at play here, but like you can use this generative AI and all this new technology and advancements with different modalities like images, whether you’re generating images or whether you’re understanding what an image looks like and voices, all kinds of different things at play here. How does that work different when now language isn’t necessarily the output? Is it?

 

looking at the pixels in a way and then association there.

 

Nik (24:11.087)

There is a visual language over there. You know, there’s the visual transformer which tries to predict blocks of the image. There’s also the diffusion models which is something completely different. But so for example, diffusion models, we see them only in images. We don’t see them in text that much. Although there’s been some efforts, but the transformer, it turns out that it behaves equally well for images. But…

 

Matt (24:25.193)

Mmm.

 

Nik (24:40.015)

You know, when you talk about a token in vision, that’s kind of like a block of pixels, I don’t know, 16 by 16, 32 by 32. You know, this is something we knew from before, like even in the days of image compression, they could take parts of the image and compress block by block. But I wanna…

 

Matt (24:59.913)

Mm-hmm.

 

Nik (25:07.711)

make something clear for your audience that language is a way of expressing knowledge, but it’s not knowledge. Okay. The fact that I can come and tell you something, you know, I can go and read quantum mechanics, I can take a passage, I can recite it for you. It doesn’t mean that I know what I’m saying. Okay. And

 

And that’s where the hallucinations are coming into play. So we don’t really have direct access to knowledge. Okay. It’s a language model. It’s not a knowledge model. Okay. So, and there’s been some efforts right now to do the same thing. Like, you know, if we could start, if there was a universal knowledge graph, okay, that I could take and say that from this token of knowledge, I can go to that token of knowledge through that relation.

 

Matt (25:39.507)

Mm-hmm.

 

Nik (26:05.499)

and do reasoning, maybe we could train a knowledge model, let’s call it, or a foundational model that we know that whatever it says, it’s accurate and correct. But language is a possible path over knowledge. It doesn’t mean that it’s correct. Okay? So…

 

So it doesn’t have to do that. So language models are always going to hallucinate and make mistakes, not because there are errors in what they were trained on. The data sets are pretty well curated. Obviously, they will contain misinformation and errors, but the reason for hallucination is not really the errors in the raw text, but the fact

 

that this is a possible expression, you know. The same way that, you know, like you are a fiction writer, author, and you can write, like you see things in life and you write a different version. Like take one of my favorites, like Da Vinci Code, okay? Like when you read, that’s what I like about Dan Brown. Or take about Game of Thrones, for example. If you think about Game of Thrones, it has elements of truth from the…

 

Matt (27:01.62)

Yeah.

 

Matt (27:11.58)

Mm-hmm.

 

Nik (27:25.895)

human history. You can see the, let’s talk about it because that’s probably what most of the people know, you know, there’s like the Persian Empire or you can see, you know, the English history or the Greek or there’s some of them, like you can see elements of that in a completely fictional way. So that’s, in my opinion, Game of Thrones was the first generative model, you know, George Martin. Great. Okay. So it could generate something like that, which is completely, it looks…

 

Matt (27:28.459)

Mm-hmm.

 

Matt (27:38.317)

Mm-hmm.

 

Matt (27:48.838)

Ah, there you go.

 

Nik (27:55.771)

you know, modulo the dragons. So it could look real, okay, realistic, but it’s wrong. The same thing with Dan Brown, you know, Da Vinci Code. It looks like a real, it could have been a real story about what happened after, you know, Christ was crucified in the story. It could have been, but we don’t have evidence that it is. Some people follow conspiracy theories, they think that Dan Brown is the real story, but that’s what I’m saying. So yes, it’s a possible truth.

 

Matt (28:07.339)

Mm-hmm.

 

Matt (28:26.646)

Do you think we ever get to that ability where it is true knowledge? You get into this concept of like, you know, your AGI and all that type of stuff. Do you ever think we get to that level of advancement? Or, you know, I always go back to like how the human brain works and like, are we, do we have true knowledge to an extent or are we just doing this same kind of computational thing in our head with probability of what’s, you know.

 

Nik (28:47.975)

Oh.

 

Nik (28:53.419)

Yeah, one of the things that we know is that the transformer architecture and the language model is not how the brain works. This is an engineering, it’s not how the brain works. No, no, no. There are some commonalities, and there are some kind of analogies. But I think it’s wrong to think about or to try to, you know, like when you’re working with language models and you’re trying to tune them or you’re trying to explain or debug them.

 

Matt (28:59.243)

It’s not, okay, yeah.

 

Nik (29:22.623)

to have in your mind how the brain works. Don’t do that. If you are a prompt engineer, if you’re trying to build a model, try to understand how the system is built and use that knowledge. Don’t use the cognitive neurology here. No, unfortunately we are very, the human brain is still much more powerful given the fact that you can eat a slice of pizza.

 

Matt (29:26.359)

Hmm.

 

Nik (29:49.591)

and do very complicated mathematical computations. While if you were trying to do the same thing with GPT-4 you need the power of a village or something even for inference. Okay so we are very energy efficient. We use signals that take milliseconds to transmit, not nanoseconds, whatever it takes for a GPU, and we still do things faster.

 

Matt (29:53.099)

The energy consumption, yeah.

 

Matt (29:59.969)

Yeah.

 

Matt (30:12.558)

Mm-hmm.

 

Nik (30:19.359)

There’s a completely different world. Even if we could make an electronic brain, like simulated, I think it would be very different. Biology comes into place. It’s still a mystery, but whether we’re gonna reach AGI, you probably hear that. I leave that to people who have enough money and time to think about it. Okay.

 

Matt (30:29.73)

Hmm.

 

Matt (30:44.4)

There you go.

 

Nik (30:46.943)

So, I mean, yeah, in theory it is possible. I hear like Hinton and Bengio and, what’s his name. I think LeCun is on the other side. And Elon Musk, that they say it’s possible, if you leave the language model free, to start writing code and unplugging other systems. I don’t know why. I think not to worry that much about it. I worry more about the effect that it’s having right now on the job market.

 

Matt (31:13.055)

Mm-hmm.

 

Nik (31:17.868)

That’s more imminent and more real than the economy, than whether the robots will revolt against us.

 

Matt (31:23.17)

Hmm.

 

Matt (31:28.454)

And what do you mean by that in terms of it taking away jobs and tasks? Or do you think this unlocks new opportunities? Yeah.

 

Nik (31:33.627)

I think it does, yes. You know, as with everything, it happens all the time with the high tech. As technology progresses, the next generation requires less engineers. You can see about this example, I don’t know how many million employees Ford has when the car came.

 

Matt (31:45.46)

Mm-hmm.

 

Nik (32:03.867)

And when you compare that with Microsoft, which came later, compare that with Google, compare that with Twitter, compare that with OpenAI now. It’s a big chunk of the market they’re getting, like their capitalization, and the small number of engineers and scientists that they need. OK.

 

And yeah, it’s pretty clear to me that a lot of jobs now can be done with less people. And even for us, the data scientists, for the moment, if you want the work is becoming a little bit boring, you know, in the sense that you have to do what people call like prompt engineering. I don’t know, I find ways to find more to make it more interesting. But yeah, it’s becoming, it’s becoming an issue.

 

Matt (32:36.84)

Mm-hmm.

 

Nik (33:00.455)

I feel like we saw this tech layoff wave for the past two, three years. I think a lot of these jobs will not come back again. Okay. They will need less people for that. And of course, for things like customer service or administrative work, all of them will be done with, I mean, it’s already pretty obvious you can do things with GPT much faster than before. It’s a great assistant.

 

Matt (33:04.971)

Mm-hmm.

 

Matt (33:10.423)

Hmm

 

Matt (33:30.814)

So two more topics I want to hit for you wrap the one you think of these models. There’s this element of it being a black box and we touched on it earlier with relational AI, having this concept of a knowledge graph. What is that? How does that work? And like that kind of gets into the value prop of relational AI to an extent, but we’d love to kind of hear, uh, like how, how the benefits of that, that concept.

 

Nik (33:44.843)

Mm-hmm.

 

Nik (33:51.562)

Yeah.

 

Nik (33:55.839)

So the knowledge graphs and language models have a bi-directional relationship. First of all, this is very simple definition, which I really like about knowledge graph. It’s the language that both humans and machines understand. It’s a way of expressing knowledge in a way that anyone can read it and the machine can consume it. If I write C++ code, it’s very easy for the machine to understand.

 

Matt (34:12.374)

Hmm. I love that.

 

Nik (34:24.927)

but it’s not easy to show it to your executive or to your business analyst. Yeah, so a knowledge graph has the right level of information, it’s complete, and both systems can understand. Now, the problem with knowledge graphs has always been is, it’s great, but where can I find one? Like once you have it, it’s great. It empowers a business. You see, the ROI is huge.

 

Okay. It’s like, you know, you are in your house, you go to your library, to your room and you tidy it up. You know, once you tidy up and label everything and you know where everything is, then you’re very efficient. Okay. Um, but you know, who has the time to do that? So that was always a barrier for us. Now what happens is, with language models, you can automate that very easily. Because in the past, how did you build the language model? So how did you build the knowledge graph? You had somebody going through documents or databases.

 

Matt (34:53.643)

Mm-hmm.

 

Matt (35:07.935)

Mm-hmm.

 

Nik (35:23.047)

and was trying to find global entities and relations and how things are flows and all these things. Now the language model can do that for you with a human in the loop with supervision. So it accelerates that process very quickly. Now, the other thing is once you have a language model, as I said, you need to inject knowledge and you need to teach it stuff. So the way that I’ve seen it is that, let’s take some simple examples.

 

Matt (35:26.206)

Hmm

 

Matt (35:48.159)

Mm-hmm.

 

Nik (35:53.627)

something which kind of like the Holy Grail, you want to answer a question. You go say, well, tell me all the sales from last month where the people bought more than X, Y, Z. And that translates to a SQL query. So in order to do that translation, like from natural language to SQL, for example, if you have a knowledge graph, we have evidence that this can become faster. In some other cases,

 

The knowledge graph, because the knowledge graph can afford really long and complicated reasoning paths. You have your knowledge graph. You can go and mechanically generate, you know, let’s call them proofs or reasoning paths. And you can take them and go back to the language model and train it and say, you know, when somebody is asking you this, this is what people call the chain of thought. It can be a pretty lengthy. Okay. So

 

Then of course there is the hallucination thing, where you can think, you know, the knowledge graph always has the correct knowledge and it’s very easy to add and remove, you know, knowledge that is valid or not valid anymore. So that’s another part that, you know, helps you keep things in place. So, yes, the language model helps you build a knowledge graph, tidy up your room, tidy up your knowledge.

 

And then the other way, having all that knowledge, you can go and retrain, fine tune, control your language model so that you’re getting, you know, accurate results and better results. Okay. So that’s kind of like the synergy between the two.
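One way to picture that synergy: facts retrieved from the knowledge graph can be placed into the prompt, so the model writes its SQL or its answer from curated knowledge instead of free-associating. This is a rough sketch with an invented schema and question, not RelationalAI’s actual pipeline.

```python
# Sketch of grounding a language-model prompt with facts retrieved from a knowledge
# graph. The triples, column names, and question are invented for illustration;
# the resulting string would be sent to whatever language model you use.
facts = [
    ("orders",    "has_column", "customer_id"),
    ("orders",    "has_column", "total_usd"),
    ("orders",    "has_column", "order_date"),
    ("customers", "has_column", "customer_id"),
    ("customers", "has_column", "region"),
]

question = "Total sales last month for customers in Georgia?"

context = "\n".join(f"{s} {r} {o}" for s, r, o in facts)
prompt = (
    "Using only the schema facts below, write a SQL query.\n\n"
    f"Schema facts:\n{context}\n\n"
    f"Question: {question}\nSQL:"
)
print(prompt)

# The same retrieval step also lets you check the model's output against the graph,
# which is the part that helps keep hallucinations in check.
```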

 

Matt (37:22.507)

Hmm

 

Matt (37:32.814)

No, that’s really interesting, interesting thing. Evolution there. So as promised, we talked about, you had some use cases in your mind of where you think Gen.AI is gonna like, the most interesting, viable, disruptive, whatever it may be. So curious to see what some of those are.

 

Nik (37:43.164)

Oh yes.

 

Nik (37:52.023)

So let’s close with that. I mean, these are things that, let’s call them historians of technology have observed over the years. So we know that whenever new technology comes, people are trying to use it in the obvious way, which might not really give them the big multipliers. So I think when we met, I mentioned this example of the electric motor. So when it was invented,

 

Matt (37:57.399)

Mm-hmm.

 

Matt (38:14.165)

Mm-hmm.

 

Nik (38:21.803)

Those days, the industry was using the steam engine. And the way that they had, they had a big steam engine in the middle, and they had mechanical systems that they would transmit the motion to other machines around that in order to produce, I don’t know, something. It was an industry. And now somebody comes and says, okay, take this electric motor, which first of all, is not as powerful as a steam engine, by definition, because the steam engine…

 

will produce electricity, something will be lost, and then a motor will use it. And the steam engine was there for centuries before the, at least 100 years, I don’t know, centuries, before the electric motors was more optimized. And all of a sudden now you needed to buy electricity to fit that, while for the other one you had, I don’t know, fossil fuel to use, and you knew where to find it. So people rejected the electric motor at the beginning. They couldn’t know why it was useful.

 

until someone said, well, wait a minute, we don’t need one electric motor for the whole factory. What if we create, because that’s so easy to manufacture, what if we made like 100 electric motors spread in vertical space, I don’t know, so take the whole production and spread it over a bigger space, okay? And all we need is an electric generator that can feed 100 motors.

 

So the big benefit wasn’t by just having one stronger motor. The big benefit was by having, you know, a hundred motors in different levels and making the production, you know, a multi-level and expanding it to bigger space because the problem with the steam engines is that motion couldn’t be transmitted too far away and everything was cramped and limited. I think if I remember that took about 20 or 30 years, okay, to figure that out. And kind of like the same thing with

 

Let’s think about Amazon. In the beginning, the e-stores, they were basically trying to take a brick and mortar store and run it the same way they were running it before running it on the web. And Amazon realized that there’s other things, like there’s recommendations, there’s A-B tests, there’s other things that I cannot do in a brick and mortar store. The tailor, the personalization that brought the big boom.

 

Nik (40:45.451)

of, again, it took several iterations of failure until, you know, Amazon and Alibaba and others kind of like dominated the market. Think about Snowflake. When Snowflake came and said, we’re a cloud database, I said, what do you mean? I can take my database and put it on the cloud. But the thing is nobody thought about designing a database that, you know, you can’t download Snowflake and run it on your machine. It was designed

 

to be completely cloud-based. Use infinite compute and infinite storage. So it’s a very different thing. People were confusing the cloud hosted, which means that I build something that when I take something that I build it, thinking that I’m constrained by the memory and the compute of a single machine, and I’m just like running it somewhere else on the cloud, versus no, I’m building a system that is going to rely on, you know.

 

Infinite machines and S3, whatever blob storage, which is infinite and available for scratch. So I’m trying to scratch my head here and see what is that. Yeah, there’s the obvious application of Gen.AI, which is, use it as a new UI. So chat bot, we know about that. But I was thinking, I was trying to make this exercise, like we’re looking for these businesses that they only exist.

 

Matt (42:01.196)

Mm-hmm.

 

Nik (42:12.895)

They cannot exist without Gen AI. So I think the very, the one that we’re going to see soon is there’s already a legal battle about that, which is going to blossom and give the new thing. I think it’s gonna change completely the way we’re reading, okay? So you might have seen the fights between authors and OpenAI about infringement, and I think it’s gonna end up in a beautiful relationship over there. So right now,

 

Matt (42:36.695)

Mm-hmm.

 

Nik (42:42.219)

There’s a problem. People don’t read because they have to go and buy a 200, 300, 400 pages book where they only need to, they’re only interested in four or five pages or even a summary of 20 pages that nobody’s providing for them. Okay, they don’t know where it is. So what I’m envisioning over here is, think about Random House taking all their books or all the publishers, training a language model. And they’re saying,

 

I’m asking a question and they’re basically coming up either with two pages and say, you know, actually this thing, you can find it in that book. And you know, here’s a summary and these are the three pages. And I can actually take these three pages and put half a page that has all the information that you might need to read these three pages. Okay. Because that’s another problem. Sometimes you can browse a book and find that chapter. But then as you’re trying to read it, you realize that you need to go and visit others. So basically what’s going to happen is…

 

Matt (43:39.089)

Mm-hmm.

 

Nik (43:42.203)

You know, you’re going to buy pages from books or a summary that was produced based on, you know, 10 pages. So now you will pay, I don’t know, 10 pennies or a subscription or something like that. I think it’s exactly the same thing that happened with streaming. If you remember the legal battles of YouTube and Viacom, where people started uploading videos on YouTube and they said, no, it’s mine, it’s ours, it’s yours. And eventually they came to an agreement that changed completely the way that we

 

Matt (43:47.406)

Uhhhh…

 

Nik (44:11.839)

listen to music. Spotify was another thing. Okay, but it took some friction. So we don’t buy CDs with 12 or 16 songs, however many they had, you know. We listen to, you know, one song at a time. We don’t own the songs anymore; we just stream them, and all these things. So I think that’s one of the applications. Now I have a reservation.

 

Matt (44:38.454)

That’s like, as I say, that’s like spark notes on steroids almost. One question, though, I guess, if you’re reading for fun, do you get the same pleasure and benefit from that type? Or is that a different use case where you’re wanting to sit down and enjoy a book? I guess that may be a different type of thing versus getting the learning.

 

Nik (44:57.735)

I think it can help everyone. It can help the bibliophiles, you know, because I often, I like, I have about 2000 physical books, another 2000 electronic books. I like, but I’m always frustrated. You know, audio books was another thing that seems the way that we just know less. But it’s always frustrating when, you know, sometimes it takes like you, if the book doesn’t stick with you for the first, I don’t know,

 

Matt (45:02.295)

Mm-hmm.

 

Matt (45:07.104)

Wow.

 

Nik (45:23.179)

20, 30 pages, then you give it up. And it’s very likely that then if you’re a little bit more patient, maybe after page 50 will become more interesting. But how many people give up before that? So as a bibliophile, it’s going to help me discover more books. But I think the biggest thing is for people who want to learn something, but they don’t want to read the full book. I read somewhere that they said that

 

Matt (45:25.143)

Mm-hmm.

 

Matt (45:48.694)

Yeah.

 

Nik (45:52.639)

Are we out of time? Yeah, so there was this theory that 100 years ago, when you were writing a book, you had to make it very big because people didn’t have anything else to do. So they were buying a book to fill their time, because they wanted to spend, I don’t know, a month reading it. Now these days, they say that a book shouldn’t be more than 200 pages; don’t try to fluff around, because there’s so much information and people don’t have the time to.

 

Matt (45:54.302)

No, keep going, keep going. I was gonna add a point, yeah.

 

Matt (46:15.576)

Mm-hmm.

 

Nik (46:22.439)

They need the essentials and don’t want to spend too much time on other irrelevant stuff. The same thing happened with TikTok. Again, it was a victory of machine learning over there and recommendations trying to narrow the span to a few seconds to what you’re going to consume. Of course, it’s a great commercial success. I personally don’t like it. I don’t let my kids.

 

Matt (46:33.163)

Mm-hmm.

 

Nik (46:50.059)

spend time. I realized that it’s so addictive. You know, YouTube search, you can spend hours just going one by one. It’s dopamine injections. But we’re definitely going to see social networks based completely on Gen AI and videos. Okay, that’s kind of like another one. The same thing that we found, you know, we had TikTok. And yeah, I don’t know. I mean, if you are a founder, you have to start thinking about

 

Matt (47:06.539)

Mm-hmm.

 

Nik (47:18.015)

How can I take a sea of content and serve it much better with a language model in a way that people wouldn’t have consumed that before?

 

Matt (47:31.423)

Yeah.

 

Matt (47:36.21)

Yeah. The book example you mentioned, I have the same problem. I do audiobooks and I’ll kind of like try to save the clips of the things that make sense and at that point in time it’s like you have this light bulb moment and then you forget about it but there’s a point in time in the future where, man, that would be super applicable if I could pull that out of my knowledge base. So it’s almost like, to your point, getting those points that are applicable at that point in time but resurfacing them because they’re somewhere in my…

 

memory that I can’t necessarily always retrieve.

 

Nik (48:07.251)

Let me give you a recent example. And that’s why I think this open AI has a big advantage right now over Google. So with all the unfortunate events happening in the Israel-Palestine conflict right now, I remember that I had watched the documentary 20 years ago at Georgia Tech about the whole history of the area. But I couldn’t remember the title of it. So I knew that it was a French production.

 

Matt (48:30.722)

Hmm.

 

Nik (48:36.303)

I remember that it was released somewhere in the nineties because it was right before the Oslo agreement. And I think basically that’s what it was. I can remember it was a documentary. So I was trying to find it on Google, I was trying to find it on Amazon, I couldn’t find it. But I went on OpenAI and I said, well, it was a documentary, I think it was released in the early nineties, I know that it was a French production. And you know,

 

Matt (48:54.795)

Wow.

 

Nik (49:03.279)

It had the history from 1900 until 1990. Can you tell me which one it is? Because, you know, there are not really that many. I mean, okay, so I thought that someone should have been able to find it. And it actually found it. It gave me the title in English and in French and I went to Amazon and I found it. So I think it was remarkable. It was remarkable.

 

Matt (49:18.241)

Yeah.

 

Matt (49:24.054)

Wow. That’s cool. Yeah, and just to wrap on the points you made about the TikTok and everything like that, and just that type of social media, like you wonder to a point, does it get so advanced to where you literally cannot put your phone down? It gets you so zoned in with like the dopamine hits. Like is it engineered to a point where the recommendation of what’s coming next, like it’s kind of scary to think about, you know, in the future.

 

to where it becomes you literally, it’s like a drug in essence.

 

Nik (49:55.603)

Oh yeah, that is going to have, I agree with you. Like if TikTok is a problem right now, when it’s basically trying to find existing content that you’re going to like, think about if it knows exactly what you like and you can give it, you know, feedback, like, you know, so it knows more and more, like you say, what you want. And it really, you know, personalizes things for your content. Then it’s going to…

 

Matt (50:16.876)

Yeah.

 

Matt (50:23.678)

I’d take it a step further too, like what if it’s not just random users generating the content? What if it is a GPT or something like that that’s actually generating the content? Okay, yeah, so that’s, wow, I hadn’t even thought of that.

 

Nik (50:34.471)

Yeah, yeah, that’s what I mean. Yeah.

 

Yeah, generating and it can’t, you know, like think about, you know, like when you were raising a kid where you say, well, you know, we have this inherent thing of, of going to the, taking the path of lead, least resistance and basically, um, things that they’re not good for you. Okay. So that’s why you have to say no to a kid. Imagine now that, uh, also think about it.

 

Matt (50:54.537)

Mm-hmm.

 

Nik (51:06.595)

like society has created like this moral boundaries that, you know, prevent you from doing things that maybe they’re in your mind, but you say, you know, that’s the, I shouldn’t really take that path because that’s, that’s immoral. But what if you are, you know, in your screen, and nobody’s looking and there’s somebody else that says that, oh, okay, tell me what you thought. I can actually, you know, create this for you.

 

Matt (51:16.863)

Mm-hmm.

 

Nik (51:33.119)

and a lot of people are going to get tempted. That’s like a really bad spiral. I mean, these are fears before AGI taking over and leaving the matrix. I think these are bigger fear and we do see it in some applications, in the deep fakes and things like that. I think it can become a-

 

Matt (51:35.434)

Wow.

 

Matt (51:42.593)

Yeah.

 

Matt (51:55.083)

Yeah.

 

Nik (51:58.963)

People have said that this kind of addiction is like drugs, you know, the same thing, like the screen addiction, especially when it takes parts that are, you know, problematic. So I would worry about that. We need some strong resistance to that. So, yeah, like I will give you an example. I don’t know, for example, OK, like, you know, let’s take one of the

 

Matt (52:03.784)

Mm-hmm.

 

Matt (52:18.25)

Yeah. Well, yeah, not.

 

Nik (52:27.447)

most horrifying things, which is child pornography. I know that by law even possessing child pornography is a felony, okay. I don’t know if possessing a deepfake, you know, of child pornography is a felony. So there might be gaps in the legal system that we have to

 

Matt (52:35.194)

Mm-hmm.

 

Matt (52:39.956)

Yeah.

 

Matt (52:47.966)

And that’s the crazy part about it is it’s this whole, like to your point, our legal system, it’s a whole type of paradigm that we haven’t even really had to encounter. And how do you build laws and it’s, yeah, it is crazy to think about how that’s going to change how we live, how we work, how we, you know, our morality as a species even to a certain extent, right?

 

Nik (53:13.659)

Yeah, so I think the moral issues coming before the, you know, whether we’re going to lose our jobs or, uh, you know, computers taking over control. Yeah. Terminator. Uh, is it Terminator or Matrix? Which one is more scary? Terminator or Matrix?

 

Matt (53:23.498)

The Terminator, yeah. Yeah, yeah. Well, not.

 

Matt (53:30.806)

I don’t know. That’s maybe I’d say maybe the Matrix or at least that’s the more interesting one to me at least. What about you?

 

Nik (53:38.331)

Yeah, I think it looks like because in the Matrix, there wasn’t really any mechanical part. It was purely everything was, you know, there’s a computer running, you know, computers were running. The Terminator was mixing the reality with robots. Okay. Which I think it’s more difficult. It’s an interesting scientific question because if the machines can take over, and you know, and basically

 

Matt (53:44.666)

Hmm.

 

Yeah.

 

Matt (53:52.183)

Mm-hmm.

 

Nik (54:07.891)

control the universe, why do they need the mechanical part? Why do they need to go out in nature and do things? Maybe some of you would say because they need to synthesize energy, so they need the mechanical component. So it looks like, so it might be the case that evolutionary, we will not take that into consideration. They will try to eliminate their creator, but then they will actually.

 

Matt (54:14.07)

Mm-hmm.

 

Nik (54:35.435)

face some type of extinction or shrinking because they will be missing the mechanical component to get energy and all that stuff versus the other which is the hard way. I think in the Terminator you need to create the robot to fight the humans and then you have the mechanical component that can help you. Because at some point even if they could eliminate humans and let’s say they had solar panels

 

Matt (54:43.211)

you

 

Nik (55:05.183)

they would need to manufacture new solar panels. They would have to go and extract minerals to, the chips will go bad after some years, like create new chips, new stuff. Interesting science fiction stories here.

 

Matt (55:08.275)

Mm-hmm.

 

Matt (55:22.514)

Yeah, I think the scarier thing is not the machines taking over, but the humans and bad actors using this stuff in negative ways, at least for me, that’s scarier in my mind. But yeah, so this has been one of my favorite conversations so far, so many interesting topics. I really appreciate you coming on to the Built Right Podcast, Nik. But where can people find you? Where can they find relational AI and learn more about either you or the company?

 

Nik (55:32.458)

Yes.

 

Nik (55:51.832)

I think you can find us on the web. You know, we are a remote-first company, even before COVID. I think we do have an office somewhere in Berkeley. I’ve been there a couple of times. But our people are all over, I want to say, the world. The sun never sets, or whatever the saying is, to put it really simply. We have people all over the world. Yes. You know, I’m here in Atlanta.

 

Matt (56:11.714)

Mm-hmm. Follow the sun, yeah.

 

Nik (56:19.443)

to go to our website, read our blogs, see about our products. Our product, I think we have announced a partnership with Snowflake, so people can use it through there. It’s a limited availability through there, which is going to become a general one, I think sometime probably this summer. It’s coming up, so I don’t have a date. So yes, you can find me on LinkedIn. I’m not really big on social media.

 

Matt (56:32.406)

That’s awesome.

 

Nik (56:48.875)

LinkedIn is probably the only one where I spend some time, not much. Yeah, that’s it. Thanks for hosting, Matt. Excellent.

 

Matt (56:52.654)

Yeah. Nice. Well, great, Nik. Thanks for joining us today. Have a good one.

The post How Generative AI Works, as Told by a PhD Data Scientist appeared first on HatchWorks.

Generative AI: Augmenting or Replacing Research? https://hatchworks.com/built-right/generative-ai-augmenting-or-replacing-research/ Tue, 31 Oct 2023 10:00:47 +0000 https://hatchworks.com/?p=30139 Generative AI is making an impact on every aspect of digital product building. But we wanted to delve deeper into how it’s affecting user research and interviews, so we invited Nisha Iyer, CTO of CoNote, onto the Built Right podcast to share industry insights and predictions.  Nisha shares the story of CoNote, an AI-empowered platform […]

The post Generative AI: Augmenting or Replacing Research? appeared first on HatchWorks.


Generative AI is making an impact on every aspect of digital product building. But we wanted to delve deeper into how it’s affecting user research and interviews, so we invited Nisha Iyer, CTO of CoNote, onto the Built Right podcast to share industry insights and predictions. 

Nisha shares the story of CoNote, an AI-empowered platform helping transcribe and organize user research. We hear her thoughts on GenAI skepticism and how CoNote builds on customer feedback to improve its efficiency. Plus, Nisha tells us her predictions for GenAI in user research and whether it could eventually manage user interviews entirely. 

Read on for the take-home moments or tune into the podcast episode below. 

How GenAI can help user research today 

In Nisha’s previous work in data science, the slow process of performing user interviews, transcribing them, analyzing them and acting on the relevant insights became tedious.  

After Google-searching for an AI solution and creating a few shortcuts herself, Nisha realized no one was providing quite what she needed. There was a market for an end-to-end generative AI tool that streamlined these processes. That's when CoNote was born. 

CoNote allows you to: 

  • Upload hours of user research interviews 
  • Transcribe and synthesize them 
  • See the key themes and keywords

Building a moat in the age of AI hype 

Right now, it seems like every day brings a new generative AI tool. With democratization in full flow and more people able to access large language models, how does CoNote build a moat and stand out from its competitors? 

Nisha says user experience has always been their watchword, while it often falls by the wayside for competitors. Development teams may integrate APIs and build their tech, but if you're building a SaaS product and don't have an intuitive front end, interest could dry up. 

CoNote’s moat is that they’re not simply consuming APIs. They have other pieces of infrastructure to keep them one step ahead of their competitors. 

Another thing at the core of what they do is a deep understanding of their users. Nisha believes CoNote provides a “simplistic flow” for the user to reach the solution to their pain point. 

How customers shape CoNote’s roadmap 

When building a brand-new tool, product development teams tend to devise a roadmap. But how much of that roadmap is pre-determined and how much is changed along the way, based on customer feedback? 

Nisha says CoNote’s ever-evolving roadmap is made up of around 70% user feedback and 30% CoNote’s own decisions. 

This is evident in the launch of their new feature, Action Items, which stemmed from repeated customer feedback and highlights the next steps users can take after using the product. 

When running the first round of CoNote interviews at the prototype stage, many of the themes and action items that arose resulted in relevant features being built into the product, such as the use of audio recordings and Zoom integration.  

Nisha says the fact they use their own product as part of their work gives them an even better insight into the changes and features that need to be added. 

Overcoming AI skepticism 

A recent User Interviews survey found that 44.3% of UX researchers are tentative about using AI in their work, compared with 5.5% of CX professionals, 9% of data scientists, and 21% of product managers. 

But, in 2023, generative AI is almost inescapable. So how can product development teams fight their fears and use AI in ways that augment their processes – without taking them over? 

Nisha says that, rather than fearing its potential, it's important to see generative AI as a way to take on the tedious parts of your work and do in a matter of minutes what would otherwise take a week. 

CoNote is a prime example of this. It takes you 85% of the way through the user interview process, leaving you with the materials you need to pull the most useful insights. 

Predictions for GenAI and user research 

Nisha believes there’s still a way to go before AI is taking on interviews all by itself. She sees a future where AI can replicate human-led experiences but says real, personal interaction is still the most efficient way to perform user interviews. 

CoNote has no plans to create AI-led interview experiences, instead focusing on augmenting the cycle and making development teams’ lives easier. 

To find out more about CoNote’s story and how generative AI is changing the face of user research, listen to episode 16 of the Built Right podcast. 

Get ahead in software development with HatchWorks’ AI-driven strategies – learn about Generative-Driven Development™ now.

Matt (00:01.944)

Welcome, Built Right listeners. Today we're chatting with Nisha Iyer, CTO of CoNote. CoNote makes qualitative research fast and easy with its AI-empowered platform, helping transcribe, analyze, and organize user interviews. It's a tool built for user researchers, which at HatchWorks, that's a big part of what we do, so we can definitely sympathize with that type of tool. But welcome to the show, Nisha.

 

Nisha (00:26.498)

Thanks, great to be here.

 

Matt (00:29.1)

Yeah, excited to have you on. And today we're going to get into how generative AI, and more broadly, just the democratization of AI will fundamentally change user research and more broadly user experience. Uh, but let's, let's start off there. Like Nisha, why, why user research? Why this problem? What part of user research is broken or needs help? And how, how's CoNote looking to solve it? What gap do you see in the market right now?

 

Nisha (00:58.31)

Um, yeah, great question. So just real quick intro. I, uh, my background is data science. I’ve been in the industry for about a little over 10 years. Um, and my last company, I was working at a startup. I’ve been there for five years and was, uh, had built a tech team and, um, had come to a point where we were doing product development.

 

So with product development comes user research, right? Like to build a good product, you need to understand your users. That’s how you get to product market fit. That is how you really build what people are asking for versus what you’re building in your own head. So we did a lot of user research there. And I worked directly with, you know, like a small group that did the product development. One person was a UX designer and then engineer and a data scientist and myself.

 

Matt (01:28.745)

That’s right.

 

Nisha (01:49.978)

Um, and we did a bunch of user interviews and went through the process of distilling them and really pulling out insights. And it was tedious. It took a long time. It, um, it took a lot more time than I had expected, you know, just from my technical background. And, um, I was pretty overwhelmed with like the amount of information that we had to consume. Like, you know, you do the interviews first, record the interviews.

 

Matt (02:00.605)

Mm-hmm.

 

Matt (02:13.874)

Yeah.

 

Nisha (02:16.878)

transcribe them and by the time you sit down to really distill what’s what has been said like the important themes the important takeaways You have to pretty much go through the interviews again and go through every transcription, you know, like the basic Affinity mapping technique where you’re taking post-its and grouping themes and it just takes a long time Like it took, you know a week to two weeks because you don’t have like that set aside time to just dedicate to the distilling of research

 

Matt (02:32.49)

Mm-hmm.

 

Nisha (02:46.878)

And so what I found myself doing with my little team was just taking shortcuts, being like, okay, I remember this, I remember that, and being like, and then internally thinking this isn’t the right way to do this. I’m 100% biasing my findings by thinking, hearing the things that I really wanted to hear, obviously, that’s just human nature. So what actually happened is that I had a…

 

Matt (03:06.374)

Yeah.

 

Nisha (03:15.726)

project come up where there was like some kind of commitment to do 20 interviews in a period of two weeks and then distill the research. And I was like, this is insane. Like from my experience with research, I was like, this is a crazy requirement. And I, and I thought like there must be some tool, like there must be some AI platform that does this. Like we, you know, we’re at the age where this should be available. So I started Googling for it and I couldn’t find anything. I was like, this is insane.

 

Matt (03:25.062)

Oh wow.

 

Nisha (03:46.042)

So I called my friend at the time, my coworker, and now my co-founder, one of my two co-founders, and I was like, hey dude, we should build this. We can do it together. Called my third co-founder and we all talked about it and all agreed that it was a huge pain point of not being able to synthesize research in a speedy amount of time. And then also just that unbiased synthesis.

 

So that’s how this came about, honestly. It’s just from personal pain points, which I think is a great way to build a product because you’ve actually experienced it and you’re like, I wanna use this to solve my problems.

 

Matt (04:25.712)

Yeah, that’s a great explanation. And you’re bringing me back to my product days where we would do user research interviews and I would always schedule like an hour after the user interview to like debrief, go through it again. And it’s like, you know, that’s a two hour block there. And then to your point, you got to synthesize the whole thing. You forget stuff, you mentioned bias, but there’s also recency bias where I’m gonna remember the most recent interview more so than the other one. And then you have like for us, we would have these Miro boards.

 

Nisha (04:42.347)

Yeah.

 

Nisha (04:51.446)

Exactly.

 

Nisha (04:55.979)

Yeah.

 

Matt (04:56.132)

were just huge with all these insights and it’s like you’re trying to connect the dots. It’s it can get messy so like I can I can feel that pain. It’s it’s uh bringing back some memories from those days.

 

Nisha (05:00.128)

Yeah.

 

Nisha (05:08.254)

Yeah, exactly. 100%. It's just like, how do we, and then, and so like, just to continue on, this journey has been quite serendipitous. Honestly, I ran into my upstairs neighbor, and she now also works for CoNote with us, and she was a user researcher and I told her the idea and she was like, oh my god, like this is gonna make my job so much easier. Right. And I and like, I like

 

Matt (05:19.102)

Mm-hmm.

 

Matt (05:29.626)

Oh, perfect.

 

Matt (05:35.004)

Yeah.

 

Nisha (05:37.31)

I’ll stop there, but I just want to like touch on that as well because it’s not like oh my god It’s gonna take over my job. It was more like this is gonna make my job so much easier

 

Matt (05:46.256)

Yeah, and I love the point too, like you’re hitting on the pain points of the speed element, but there’s also the quality piece with the bias. So there’s some core like value points you’re starting to hit on. But I was digging through your LinkedIn and your CEO, James, I’ll mispronounce his last name, but he had this like interesting quote that was out there, a survey by user interviews and said, UX researchers were the most tentative.

 

Nisha (06:06.146)

Prisha.

 

Matt (06:16.44)

of all roles to use AI with 44% saying they’ve tried to use AI, but avoid it in their research. But by comparison, CX professionals at 5%, data scientists 9%, product managers 21%. What do you think is the reason behind that? Why are user researchers in particular less likely to adopt this technology that could potentially make things easier for them?

 

Nisha (06:44.542)

I mean, honestly, I think it all boils down to like fear of the unknown. Um, if you look at like 9%, right? Data scientists are 9%. Like we, most data scientists understand exactly what’s going on at the bottom level. Right? Like it’s, we’re, it’s mathematical. There’s no like magic. It, there’s a lot of, um, inference, um, based on similar words and.

 

Matt (06:58.237)

Yeah.

 

Matt (07:02.931)

Mm-hmm.

 

Mm-hmm.

 

Nisha (07:09.75)

Um, and words transformed into number numeric representations, and that’s where like it all stems from. So I think like the number one thing is fear of the unknown. And, and then it just goes into like, I don’t want this to take away my job. Like it’s not do it. And then like, so then I feel like P I would get on the defensive of saying like AI cannot do my job the way I’m like better than me, it’s not going to replace me, so I don’t trust it. Um, I think instead, like where we could go with this is.

 

Matt (07:16.195)

Mm-hmm.

 

Matt (07:31.589)

Yeah.

 

Nisha (07:37.758)

AI is augmenting my job. Like I can actually focus on the important pieces versus like the tedious nature of things that I could actually like bring to the forefront using a tool that does what I would be doing over a week or two weeks in a matter of minutes, right? And then I can spend the time taking those insights and making more inferences and pulling more information out of it.

 

Matt (07:41.428)

Hmm.

 

Nisha (08:06.002)

I can also speed my research cycles up. So I think that like that fear, like we’ve heard it, we do our own user research with Conote and I think it’s just like what’s going on under there. Like it’s a black box. And I think that like the way that I would talk to people who had that fear is that it’s not a black box. It’s just, it’s like something that I can help explain and walk through. I think that would just get boring though because it’s super technical. But.

 

Matt (08:19.217)

Mm-hmm.

 

Nisha (08:37.34)

It is all related to similarities and semantic understanding, and AI is also not here to take your job. I will end with that.

 

Matt (08:47.46)

Yeah, and that’s an interesting theme we’ve had across several episodes we’ve done lately, is there is that fear of the unknown, that fear that it’s going to take my job, that it’s going to replace me. But this idea of a co-pilot, it’s enhancing my skills, it’s making me better, is a theme we’ve continued to hear. I was chatting the other day with, and I’m trying to define the episode, it was Brennan at

 

I think maybe that’s episode 15. We’ll see where it is what it launches, but it’s hyper context They’re solving this tool for one-on-ones and performance reviews and they the same kind of idea with HR was like, you know AI cannot solve this for me. There’s no way but what was interesting is like with the latest, you know Just craze over the past year, you know chat GBT and all that kind of stuff

 

They were able to play around with it and get a sense of how things can work. And it kind of opened up their minds a bit. I don’t know. Has some of that happened with user researchers as of late where we’ve had this crazy hype cycle with generative AI, where people see some of the power with it, because I think with user research, it’s, it’s so qualitative. I think that’s one of the big hiccups there as well. It’s like, you know, this is, this is qualitative stuff. It’s not ones and zeros.

 

Nisha (10:04.448)

Yeah.

 

Matt (10:04.764)

But with generative AI, it adds that kind of semantic piece to it, to your point.

 

Nisha (10:09.982)

Yeah, yeah, no, I think that there is a growing acceptance and, you know, like, people want to use this when they start seeing the way that generative AI can augment their research versus takeover. I think people are more accepting. Like, I think we just actually spoke to someone recently that’s getting on the platform at a large corporation, and they were a little skeptical at first and then

 

Matt (10:37.053)

Mm-hmm.

 

Nisha (10:37.43)

we introduced Conote as it gets you 85% of the way, right? It’s not doing all the research. It’s just getting you to a point, jumping off point where then you can take those findings and build your own insights. And that helped her feel better. She was like, oh, okay. So it’s not like just giving me this output. It’s more so like giving me stepping stones to get to that output. And I think when put like that, researchers seem to be more open to using the tool.

 

Matt (10:41.372)

Yeah.

 

Matt (10:54.812)

Mm-hmm.

 

Nisha (11:05.506)

are using products like this, like any kind of generative AI products. You know, there are a couple out there in the market that seem to be getting some kind of traction. I can talk to those later. But like I do think that like it’s like and still in like the early adopter phase. Right. Like people are still like weary. And we have to show people at Conote that like the reasons why they should be using it. And I think that’s like, you know, like what we’re doing for that is building a lot of user education.

 

Matt (11:22.915)

Mm-hmm.

 

Nisha (11:35.45)

showing people how they can use the tool to augment their research and giving examples like within Conote of how you can do that.

 

Matt (11:43.568)

Yeah, it’s an interesting kind of product. And then getting into the marketing problem where they may be problem aware, but not really solution aware and trying to migrate them down that path. Let’s get into like kind of short term evolution about AI can impact user research versus like longer term. And in the short term, I’d be curious, and this may be even be like functionality within Codenote or stuff on y’all’s roadmap. Like what’s the short term view of how

 

generative AI or AI in general is helping the user research process? I mean, is it simply just churning through this long interview and it’s spitting out the insights? Like where do you see it today? And then like, what’s like the crazy, like, you know, utopian future of what it could be in the future.

 

Nisha (12:33.09)

So right now, the power of Conote lies within, you know, like we are actually moving pretty fast. We released our initial beta live June, July 18th, and we’ve already had a couple of releases since. The big powerful generative AI piece right now, so like I just wanted to take a step back. Like we, I don’t think Conote is 100% generative AI. We have layered models. We do use traditional machine learning,

 

Matt (12:45.959)

Mm-hmm.

 

Nisha (13:03.146)

like large language models. And I think to that extent, like there’s already like power there. And that’s why we call it an AI engine versus like just like gen AI, right? Um, and what we’re doing, like the big powerful piece right now is that you can upload hours of research, so multiple interviews, and then you can synthesize. So not only can you synthesize, you can transcribe the interviews and see the transcriptions, uh, see the diarization by speaker and then highlight key quotes.

 

Matt (13:12.596)

Mm-hmm.

 

Nisha (13:31.766)

You can then synthesize your interviews, and then under 10 minutes, you will get the key themes, and then the keywords within each theme, and those keywords directly relate to sentences within the transcripts. So let’s say like I get four themes. I can click into those. I can then see where each speaker, like if I had five interviews, I can see where each of those speakers said that, mentioned that theme. I can then click into detailed view, where I can actually hear.

 

Matt (13:38.025)

Mm-hmm.

 

Matt (13:53.768)

Hmm.

 

Nisha (14:01.302)

the speaker saying it so I can get sentiment. And I can also bookmark this and build a presentation that I can send out to a stakeholder that may be interested in some of the key quotes that were said over eight hours of interviews, which is usually, as you know, usually would take so much more time. So yeah, I’ll stop there. That is our current big bang of…

 

Matt (14:17.948)

Mm-hmm.

 

Nisha (14:27.762)

our AI engine and we definitely have some other plans ahead, but just wanted to stop for any questions.

 

Matt (14:34.832)

Yeah, it’s an interesting point too. You mentioned generative AI is the hyped up word right now, but machine learning, and you as a data scientist knows some of this stuff’s been around for a very long time. This is not necessarily a new thing, right? And there’s so much power just in machine learning and a lot of the things there as well. And I’m curious too.

 

Nisha (14:52.555)

Yes.

 

Matt (15:04.616)

It seems like every day there’s another gen AI product coming out there. Like, how do you see differentiating when, you know, I feel like a lot of this, the tools with AI have been a bit democratized to where people have access to these large language models. You kind of mentioned that’s not the only core point to your, your tool, but how do you build a moat when it’s so much easier now to integrate some of this technology into a tool? Like, how do y’all think about that?

 

Nisha (15:33.11)

I think we have to really think about the user, right? Like, sure, everyone can access these APIs and build and integrate them into their product. Are they actually thinking about the UI and the UX? Like that is a key piece of Conote. Like you wanna have, and as you know, like also being in product, like you want to have a really intuitive like journey when you get to an app, right?

 

Matt (15:36.968)

Mm-hmm.

 

Matt (15:47.06)

Hmm.

 

Matt (16:00.082)

Yeah.

 

Nisha (16:00.942)

Uh, so you could integrate an API and build like all the tech and be amazing and stacked and everything. And if you're building a SaaS product and don't have like an intuitive front end, people are just going to stop there. Like they're not going to know how to get from point A to point B. And at CoNote, my, uh, so I have James Frisha as CEO and co-founder. And then my third co-founder is Cameron Ridenour and he's a, he's chief design officer, and so his background is UX. Right.

 

Matt (16:20.285)

Mm-hmm.

 

Matt (16:26.237)

Mm-hmm.

 

Nisha (16:28.914)

And so not only like we live and breathe these problems, we get in touch with people that live and breathe these problems. We have people that also work with Conote that do. And I think that our moat is a like that we’re not just simply consuming APIs, right? We have other pieces of infrastructure around them on the AI side that actually enhance and empower us to be a little ahead of

 

Matt (16:50.494)

Mm-hmm.

 

Nisha (16:56.298)

not a little, a lot ahead of some of these Gen.AI companies that are just simply consuming and using prompts for some of these APIs. And then secondly, just the fact that we have such a deep understanding of the user and are focusing on that when building our product, right? The experience, the interaction with the app. And if you’re listening to this and are curious, please go check out conote.ai. It is live and free and…

 

Matt (17:02.738)

Yeah.

 

Nisha (17:24.798)

Matt, I’m not sure if you’ve checked it out, but when you’re on the application, compared to many other competitors that we’ve checked out and tried out, there is a very simplistic flow to get to the pain point that we’re solving for, which is really being able to speed up your research process.

 

Matt (17:47.332)

Yeah, I think part of the benefit there is you’re very focused in on a particular type of user, which is user researchers, right? I think so many folks, and we see this with a lot of clients too, they’re trying to serve too many different people. And, and then you get into back to user experience. How can you build, you know, not simple, but just intuitive user experience when you’re trying to serve different groups, do you have even within user researchers?

 

a persona within that you’re targeting or is it user research is kind of the core? Is there a type of user research you’re even more granularly focused in on?

 

Nisha (18:23.438)

Um, maybe not a type of user research per se, but definitely a type of user researcher that is, um, you know, uh, interested in synthesizing multiple interviews and has a research cycle they can’t keep up with. Um, or potentially, you know, like where I’m trying, where we’re trying to drive people is the fact that research is more important than people give it. Like it takes so much time. The research cycles are longer than development cycles, right? Like

 

Matt (18:29.671)

Yeah.

 

Matt (18:46.454)

Mm-hmm. Yeah.

 

Nisha (18:52.146)

I like if I’m thinking about dev, I think of CI CD and DevOps in CI CD and just in general agile principles, a sprint is two weeks. There is in no way that like researchers think they can finish a research cycle in two weeks. However, with Cono, you could do a week of interviews and then synthesize and be done and ready with new with new findings for the next sprint. And I think that is a missing piece in the entire like end to end process.

 

I have tirelessly worked with development teams and like led engineering and data science teams. And the missing gap is that they don’t get the user, they don’t understand thoroughly like the user research part, right? Like they, it’s like a game of telephone, like 10 people have spoken to each other before the engineering team hears what they need to build. And they can get so in like, uh, you know, like just in deep in the rabbit hole of like, Hey, this is how we’re going to do it. Technically.

 

Matt (19:31.964)

Yeah.

 

Nisha (19:50.93)

and not be thinking of like the actual user problem. And that's where I really want to, like, that's where CoNote comes in, right? CoNote gives you the ability to add continuous research to CI/CD. So like in my mind, it should be CR/CI/CD. Like that should be instilled in the development process.

 

Matt (19:53.736)

Mm-hmm.

 

Matt (20:11.12)

No, I love that. And you’re speaking our language. When we talk about built right, we talk about building the right digital solution the right way. And building the right one, it’s a key element. It’s user research. And I love the concept you’re talking about, where it has to be continuous. And this is what we preach as well. Like so much of the time, it’s like, all right, let’s go do our research. All right, discovery’s done. Let’s go build the thing. But it has to be.

 

Nisha (20:26.4)

Yeah.

 

Matt (20:36.712)

built into the process. So I love that idea of it’s, you know, you think of CI, CD, same type of thing. You need that feedback loop built in as you evolve the product. It seems like y’all are kind of, you know, dogfooding it a bit by using the product yourself. I’m curious, like how much of the roadmap are y’all like defining as you use the product versus feedback from customers?

 

Nisha (20:47.842)

Right.

 

Nisha (21:01.294)

And we try to definitely take more of the customers' feedback just because they're using it as, like, actual customers. But like I do have to say, when I listened to a podcast recently, I started to listen to it and I was like, let me just put this through CoNote and see what happens. And it just was able to distill like the key points so fast. And going back to like the roadmap ahead you were asking about, in our next release actually, October 13,

 

Matt (21:06.845)

Mm-hmm.

 

Matt (21:18.182)

Yeah.

 

Nisha (21:29.75)

There’s a really cool feature coming out that’s called action items. So now not only do you get themes like that have been synthesized during the process, but you actually get the items that to action on, right? So like this is what your users have talked about. These are the actions to take that came from us using it and from feedback. Like I think like I wouldn’t say 50 50. I’d want to say like more like 70 users 30 us if we had to put a ratio to it.

 

Matt (21:38.716)

Mm-hmm.

 

Nisha (21:56.138)

But I think we all end up seeing, like, the great thing is I think we all end up coming up with very similar pain points. And one of the main pain points we heard is like, this is great, but it doesn't give me action items still, like where I need to go next. So I ran the initial, like the first round of CoNote interviews we did, the user interviews we did before we had started building our product, right? We had just like a prototype

 

Matt (22:11.037)

Hmm

 

Nisha (22:23.418)

I ran those interviews on dev through the action items feature to see what the action items were. It actually gave me action items that were the features that we ended up building, which is crazy, right? It told us users want audio recordings, users want the ability to integrate with Zoom and Google Meet. I think that's…

 

Matt (22:39.812)

Wow. Yeah.

 

Nisha (22:52.502)

That’s like, I kind of got off on a tangent, but that’s what happens when I get excited. I think that’s something that we’ve heard from users and from we’ve also experienced that we’re really excited about. And then, yeah, like I think it’s cool that we get to use it as we do our process as well, because it definitely makes us realize like, what is like, you know, like sometimes you can just be drinking from the fire hose. Like you think of really cool ideas, but we use it and we’re like, this is annoying. We need to change it. Like we spot the little things too.

 

Matt (23:10.248)

Mm-hmm.

 

Matt (23:20.564)

Mm-hmm. Yeah.

 

Nisha (23:22.923)

So yeah, it’s a good mix.

 

Matt (23:25.252)

Yeah, that’s interesting. And, you know, getting into the, um, let’s get into like the future state, like, you know, way in the future, you know, where do you see the practice of user research going if things continue to evolve where AI is continuing to evolve? Like, is there a future where it’s not even a real person doing the interview? And like, do I, at some point in the future have a, an agent or a bot that’s, you know, collabing with somebody else? And.

 

Performing this research like you ever see a future where it looks like that Like what is where does your mind go when it starts to wander of where things could be way in the future?

 

Nisha (24:05.262)

I mean, I don’t know about like, yeah, like great point. And I think people like wonder about that. But like, for me, I think there’s like a degree to personal interaction. Like if you’re a bot interviewing me right now, I feel like, sure, maybe like in some years, there will be AI that’s able to replicate each of us very well. But I do think like that human to human interaction is important in being able to, you know, like what? 94% of…

 

cues or like communication is nonverbal, right? Like I think there’s a lot to process that’s outside of just like a conversation that I’m sure AI will be able to replicate, but I don’t know if I’m like, yes, like we want to make everything computer like, you know, that like in the age of AI and just take away the human element. I think more so like the way I see Coneaut evolving in the future is being able to scale across, right? Like not like becoming so.

 

Matt (24:59.879)

Hmm

 

Nisha (25:01.262)

I’m so focused on like automating the entire user research process, but being able to scale to all types of research. So like as like to be able to reach product market fit and to really understand our target audience, we want to focus on user researchers right now. But to be able to like scale, I think where we go is just redefining all types of research, right? Like how do we help in the political space? How do we help in academia? How do we go into like?

 

Matt (25:17.064)

Mm-hmm.

 

Matt (25:23.717)

Hmm.

 

Nisha (25:29.662)

specific types of research. And I think that's where I see CoNote moving. That's where we're going in the future. Not like, I don't see us adding a component where we're gonna build in AI bots that interview people. And so once again, that's why I feel we're not taking away anything. It's more just like, let's augment the cycles so that people can be more productive and be up to speed with development teams. Just like, I just read

 

Matt (25:43.642)

Yeah.

 

Nisha (25:58.81)

someone posted today about copilot, like the code AI, right? And just telling engineers that copilot is something people should lean into. They can automate so much of what they’re currently doing, like some of the like tedious, like granular code writing that like you don’t necessarily need to spend as much time on and can focus on the bigger picture. I see that exact parallel to.

 

to CoNote with user research.

 

Matt (26:29.636)

Yeah, that’s a great connection point there. We’re using Copilot a lot at Hatchworks, and it kind of just gets the mundane out of the way, so you can think about the bigger problems. But I want to pause here, like, for all the listeners, when you’re thinking about product strategy and product in general, the way Nisha and team are doing it is a perfect example. They’re solving a specific use case for a particular user and user researchers, but you can also see where her mind’s going in terms of like, Tans Gentle,

 

other areas where they could move into in the future, but you kind of build this, uh, you know, this base with user researchers first. And that allows you the opportunity, uh, to expand further out, but you got to do that first. So that’s a great way to think about it. Don’t try to boil the ocean, solve something specific first, but is there an area you mentioned a couple, is there one that you think is like, Oh, this is definitely where we’re going next.

 

uh, from you mentioned like the political side, like these different areas, is there one that excites you outside of just traditional kind of, you know, product technology solution, user research.

 

Nisha (27:39.555)

Yeah, I don’t think I can say that like there’s one, like I think there’s multiple, right? Like this, people have already been using Kono for marketing use cases. So I think that’s probably like the next place to really go, right? Like, hey, we want to distill all of these, uh, these interviews or these, uh, podcasts and find the key quotes. Um, and this is going to help us be able to like.

 

Matt (27:51.004)

Mm-hmm.

 

Nisha (28:00.746)

make our marketing campaigns faster, just being able to pull these quotes out and having people saying them. So I think that’s a place that we can really like either be used right now or expand to immediately. I think political campaigns could be really cool because as we’re coming up also into a big year, like I think just hearing a lot of like if people, you know, like if there’s campaign interviews, being able to distill those and, and then once again, like play clips depending on whoever we’re working with.

 

And then I think that academia is close to my heart and also a really great space to be able to use this. Like, let's say you're a master's student working full time, which I was, and you have like multiple lectures, right? Like that you have gone to and then are recorded. Imagine being able to use CoNote to upload these lectures and then to just be able to find the most important themes and use this to study.

 

Matt (28:41.5)

Yeah.

 

Nisha (28:55.87)

Like I think this basically is with some tweaks, of course, right? Like we're like, once again, like you said, like we have focused on user research for a reason, um, and I see this being expanded into like a line of products potentially, or, you know, CoNote academia, CoNote marketing, that kind of thing, but, um, just imagine being able to like take your notes and be able to like have like an easy way to, to like search across like hours and hours of lectures.

 

Matt (29:10.563)

Mm-hmm.

 

Nisha (29:24.706)

That would have made my life so much easier, honestly, when I was doing my masters. So I just think, like, yeah, those are like some key areas that I’m excited to focus on. I don’t know if like one will come before the other. I think we still have to like really nail this initial product market fit and group down. But I think that there’s like, the exciting thing about Conote is I feel like there’s so much room to grow. And there’s like so, there’s so many things that I want to act on, which makes me feel excited about it.

 

Matt (29:54.764)

Nice. Well, I think that’s a good stopping point, Nisha. Thanks for being on the Built Right podcast and where can folks find you, find Connaught? What’s the best place to go?

 

Nisha (30:04.546)

You can email me at nisha.conote.ai. And you can also just check out Conote. It’s live, you get five free credits. Go test it out, email me, let me know what you think. So our website is conote.ai. And then from the website, you can log into the app. So it’ll take you straight there and it’s pretty easy. So yeah, we’d love to hear from you all.

 

Matt (30:31.888)

Awesome, great having you on Nisha, thank you.

 

Nisha (30:34.55)

Yeah, thanks Matt.

The post Generative AI: Augmenting or Replacing Research? appeared first on HatchWorks.

]]>
Generative AI Use Case Trends Across Industries: A Strategic Report https://hatchworks.com/blog/software-development/generative-ai-use-cases/ Wed, 25 Oct 2023 18:41:13 +0000 https://hatchworks.com/?p=30149 It’s not a matter of if generative AI will impact an industry, it is a matter of how large the impact will be. McKinsey research found that generative AI (gen AI) features stand to add up to $4.4 trillion to the global economy annually. That is a trillion with a “T”. This advancement in AI […]

The post Generative AI Use Case Trends Across Industries: A Strategic Report appeared first on HatchWorks.

]]>
It’s not a matter of if generative AI will impact an industry, it is a matter of how large the impact will be. McKinsey research found that generative AI (gen AI) features stand to add up to $4.4 trillion to the global economy annually. That is a trillion with a “T”. This advancement in AI is redefining the way industries operate, unleashing a wave of transformative capabilities that were once the stuff of science fiction.
A graphic titled "Generative AI Use Case Trends Across Industries" by HatchWorks.

Generative AI is a subset of machine learning that focuses on teaching computers to generate content based on existing data. From art and language to problem-solving and creativity, Generative AI is proving to be a game-changer across numerous sectors.

The sheer volume of potential AI use cases generative AI enables can be mind-boggling. Whether you’re a healthcare professional seeking more accurate diagnoses, a financial analyst navigating complex markets, a marketer aiming to captivate audiences, or an educator striving for personalized learning experiences, Generative AI has something profound to offer.

Why does understanding the top use cases of Generative AI matter? For businesses, it presents an opportunity for innovation and efficiency, enabling them to stay competitive in a rapidly evolving world. For individuals, it opens doors to new possibilities, augmenting their capabilities and enhancing their quality of life.

Retail and E-commerce: Elevating Shopping Experiences with Generative AI

Retail and AI:

Staying ahead of consumer expectations is crucial in retail and E-commerce, and Generative AI may be the key to achieving that. The combination of retail and AI is redefining the shopping experience, making it more personalized, efficient, and engaging.

In this section, we’ll explore how Generative AI is revolutionizing retail and e-commerce, from optimizing inventory management to delivering tailored product recommendations and enabling visual product searches.

Inventory Management:

Effective inventory management is a balancing act between meeting customer demand and minimizing holding costs. Generative AI excels in this domain by analyzing historical sales data, demand forecasts, and market trends. AI algorithms can predict future demand with precision, allowing retailers to optimize their inventory levels.

The real unlock is the ability to query these enormous data sets with natural language, making it effortless to draw insights and take action. This not only reduces the risk of overstocking or understocking but also ensures that products are available when customers want them, enhancing overall customer satisfaction.
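To make the mechanics concrete, here is a minimal Python sketch of the kind of baseline demand forecast a retailer might compute before layering a natural-language interface on top. It is illustrative only, not HatchWorks or Walmart code, and the product, stock level, and four-week window are invented assumptions.

import pandas as pd

# Illustrative sales history: units sold per product per week (invented numbers).
sales = pd.DataFrame({
    "week":    [1, 2, 3, 4, 5, 6, 7, 8],
    "product": ["sneaker"] * 8,
    "units":   [120, 135, 128, 150, 160, 155, 170, 182],
})

# Simple baseline forecast: trailing four-week average per product.
sales["forecast_next_week"] = (
    sales.groupby("product")["units"]
         .transform(lambda s: s.rolling(window=4).mean())
)

# Flag products whose projected demand exceeds what is on hand.
on_hand = {"sneaker": 140}  # current stock, also invented
latest = sales.groupby("product").tail(1)
for _, row in latest.iterrows():
    if row["forecast_next_week"] > on_hand[row["product"]]:
        print(f"Reorder {row['product']}: forecast {row['forecast_next_week']:.0f} "
              f"units vs. {on_hand[row['product']]} in stock")

A production system would add seasonality, promotions, and supplier lead times, and an LLM layer would translate a question like "Which products risk stocking out next week?" into queries over tables like this.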

Shopping Experience:

The shopping experience is about to get a facelift with generative AI. Walmart is already bringing this technology to its customers, helping shoppers at all stages of the shopping experience, from search and discovery to making a purchase. This includes features like a shopping assistant, gen-AI-powered search, and an interior design feature that helps you virtually design your room.

A digital interface displaying a festively decorated room with a focus on holiday decor. The left side contains chat-style interactions suggesting products within a budget and discussing options, while the right side showcases the room with price tags on individual items like a silver wreath, Christmas tree, stockings, a red throw blanket, and more. At the bottom right, buttons are present to show prices, add the room to cart, and save the room design.
An interactive holiday room decor shopping interface, combining chatbot suggestions and visual product pricing for an immersive user experience.

“Generative AI technology is a priority for the company,” said the Walmart spokesperson.

This is going to enable a truly personalized shopping experience that is interactive, conversational, and multi-dimensional.

Visual Search:

Search is about to get a lot easier. This Gen-AI advancement will transform how consumers find products in the digital world. By analyzing images and patterns, AI can identify products similar to those in a user’s photos or descriptions. This enables users to simply snap a picture or describe an item and receive relevant product suggestions.

While Google will certainly be playing at the forefront of this functionality, others are also taking advantage. Snapchat recently announced its rollout of Visual Search. This functionality will allow users to search for products on Amazon simply by focusing their camera on a product or barcode and snapping a picture.

A sequence of three mobile screenshots depicting the process of using an image search feature. The first screenshot shows a close-up of someone's foot wearing a white Under Armour shoe. The second screenshot displays a "Searching..." In the third screenshot, a popup from Amazon displays the shoe as "Under Armour Men's HOVR Sonic" priced at $100.00, along with its rating.
Image recognition search process in action: from capturing an Under Armour shoe, identifying it, to presenting the exact match on Amazon.

Visual search enhances the convenience and speed of finding products, making the shopping process smoother and more enjoyable.
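Under the hood, visual search is typically an embedding lookup: images are turned into vectors and the catalog is ranked by similarity. Below is a minimal Python sketch of that idea; the embed_image function is a stand-in for a real vision model (for example a CLIP-style encoder), and the catalog file names are made up.

import numpy as np

def embed_image(image_path: str) -> np.ndarray:
    # Placeholder: a real system would run the image through a vision model
    # (for example a CLIP-style encoder) to produce this vector.
    rng = np.random.default_rng(abs(hash(image_path)) % (2**32))
    return rng.normal(size=512)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pre-computed embeddings for the catalog (file names are invented).
catalog = {name: embed_image(name)
           for name in ["white-running-shoe.jpg", "black-boot.jpg", "blue-sandal.jpg"]}

# A shopper snaps a photo; rank catalog items by visual similarity.
query = embed_image("shopper-photo.jpg")
ranked = sorted(catalog, key=lambda name: cosine_similarity(query, catalog[name]),
                reverse=True)
print("Closest matches:", ranked)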

The integration of Generative AI into retail and e-commerce is not just about optimizing operations; it’s about creating a shopping experience that resonates with customers, fostering loyalty and driving growth. It’s a testament to the technology’s ability to enhance how we discover and acquire the products we love.

Next, we’ll explore how Generative AI is shaping the landscape of education and e-learning, where personalized learning is paramount.

Healthcare Industry: Improving Patient Outcomes Through Gen-AI

Healthcare and AI:

In the healthcare industry, where precision and speed can be a matter of life and death, Generative AI has the potential to be a powerful ally. However, considering some AI use cases do involve human life, proceeding with caution is paramount. The key is to identify AI use cases that have an outsized benefit relative to the potential risk to the patient.

AI-Driven Diagnostics:

One of the most remarkable applications of Generative AI in healthcare is in diagnostics. Traditional diagnostic methods often rely on human interpretation of medical data, such as images and patient histories. This can take a long time. Generative AI, powered by advanced machine learning algorithms, revolutionizes this process.

By analyzing vast datasets of medical records, images, and patient data, artificial intelligence can identify intricate patterns and subtle anomalies that might elude human perception. This not only accelerates the diagnostic timeline but also elevates accuracy to unprecedented levels. Patients benefit from timely and precise diagnoses, which can be crucial in cases where early intervention is essential.

HCA Healthcare is piloting a solution that extracts information from physician-patient conversations to create medical notes. These notes are then transferred to the electronic health record (EHR), helping eliminate manual entry and dictation and freeing the doctor up to focus on the patient.

Drug Discovery and Development:

The process of discovering and developing new drugs is notoriously lengthy, complex, and expensive. Generative AI is poised to change this paradigm. By simulating molecular interactions and predicting potential drug candidates, artificial intelligence expedites drug discovery.

This not only accelerates innovation but also reduces the costs associated with research and development. The result is a faster pipeline for bringing life-saving therapies to market. Generative AI is, in essence, a catalyst for groundbreaking medical advancements.

Personalized Treatment Plans:

Every patient is unique, and their healthcare should reflect that individuality. Generative AI has the potential to play an important role in creating personalized treatment plans tailored to each patient’s specific needs.

By analyzing a multitude of data points, including genetic profiles, medical histories, and lifestyle factors, Artificial intelligence can recommend treatment strategies that are not only effective but also minimally invasive. This level of personalization marks a significant shift from one-size-fits-all approaches, ultimately improving patient outcomes and enhancing their quality of life.

The healthcare industry can sometimes be slow to adopt new technologies, but the impact of generative AI is one that should not be overlooked. It has the potential to reshape the way healthcare professionals approach disease diagnosis, drug development, and patient care. It’s a testament to the remarkable potential of this technology to enhance and even save lives.

Next, we’ll explore how Generative AI is making waves in the financial services sector, where precision and speed are also of utmost importance.

Financial Services: Enhancing Client Experiences and Driving Financial Growth with Gen-AI

Financial Services and AI: Revolutionizing Finance with Gen-AI

The financial services industry has long been at the forefront of adopting cutting-edge technologies, and Generative AI is no exception. The marriage of financial services and artificial intelligence is reshaping the sector, ushering in an era of unparalleled innovation.

It is also democratizing a field that has typically been limited to large hedge funds, algorithmic trading companies, and quant funds with access to large data models. With the latest introduction of publicly available large language models, the playing field is being leveled.

Risk Assessment and Fraud Detection:

In the high-stakes world of finance, risk assessment and fraud detection are paramount. Generative AI, with its ability to analyze vast datasets in real-time, plays a crucial role in safeguarding financial transactions.

AI algorithms can detect unusual patterns and anomalies that may indicate fraudulent activities, providing financial institutions with early warnings to prevent potential breaches. Additionally, Generative AI enhances risk assessment by evaluating complex variables and market trends, enabling more informed decision-making in lending and investment processes.
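As a rough illustration of the anomaly-detection side of this (classic machine learning working alongside generative AI), an unsupervised model can flag transactions that look unlike the rest. The features, values, and contamination rate below are invented for the example.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Illustrative transaction features: [amount_usd, seconds_since_last_txn].
normal = np.column_stack([rng.normal(60, 20, 1000), rng.normal(3600, 600, 1000)])
suspicious = np.array([[4800.0, 5.0], [5200.0, 3.0]])  # large amounts, rapid-fire
transactions = np.vstack([normal, suspicious])

# Fit an unsupervised anomaly detector and flag the outliers.
model = IsolationForest(contamination=0.01, random_state=0).fit(transactions)
flags = model.predict(transactions)  # -1 = anomaly, 1 = looks normal

print("Flagged transactions:\n", transactions[flags == -1])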

Algorithmic Trading:

Algorithmic trading, which relies on rapid data analysis and decision-making, is a natural fit for Generative AI. AI-driven algorithms can analyze market conditions, news events, and historical data with lightning speed, executing trades with precision and efficiency.

This not only reduces human errors but also optimizes trading strategies to capitalize on market opportunities. The result is a more efficient and responsive financial market that benefits both institutions and investors.
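A toy version of the rule-based end of this spectrum is a moving-average crossover signal, sketched below in Python. Real AI-driven strategies are far more sophisticated and also ingest news and alternative data; the prices here are made up and nothing in the sketch is investment advice.

import pandas as pd

# Illustrative daily closing prices.
prices = pd.Series([100, 101, 103, 102, 105, 108, 107, 110, 112, 111,
                    114, 118, 117, 120, 119, 123, 125, 124, 128, 130], name="close")

fast = prices.rolling(window=3).mean()  # short-term trend
slow = prices.rolling(window=8).mean()  # long-term trend

# Hold a position only while the fast average sits above the slow one.
position = (fast > slow).astype(int)

summary = pd.DataFrame({"close": prices, "fast_ma": fast,
                        "slow_ma": slow, "position": position})
print(summary.tail())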

Customer Service and Chatbots:

Customer service is a critical component of the financial industry, and Generative AI is enhancing customer experiences through AI-powered chatbots. These chatbots provide instant, round-the-clock assistance to customers, answering queries, and handling routine tasks such as account inquiries and transaction processing.

Chatbots leverage natural language processing (NLP) to understand and respond to customer inquiries effectively. This not only improves customer satisfaction but also frees up human agents to focus on more complex tasks, such as personalized financial planning.
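A stripped-down sketch of how such a chatbot might be wired: route routine intents to deterministic, auditable handlers and hand open-ended questions to a language model. The call_llm function and the balance lookup below are placeholders, not any particular vendor's API, and the values are invented.

def call_llm(prompt: str) -> str:
    # Placeholder for a hosted language-model call; the real vendor SDK is not shown.
    return "I've shared your question with a specialist who will follow up shortly."

def account_balance(customer_id: str) -> str:
    # In practice this would query the core banking system; value is illustrative.
    return f"Customer {customer_id} balance: $2,418.07."

ROUTINE_INTENTS = {"balance": account_balance}

def handle_message(customer_id: str, message: str) -> str:
    text = message.lower()
    for keyword, handler in ROUTINE_INTENTS.items():
        if keyword in text:
            return handler(customer_id)  # deterministic, auditable path
    return call_llm(f"Customer {customer_id} asks: {message}")  # open-ended path

print(handle_message("C-104", "What's my balance?"))
print(handle_message("C-104", "Can you explain my mortgage options?"))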

The financial services sector’s integration of Generative AI technology is revolutionizing how financial transactions are conducted, risks are assessed, and customer interactions are managed. It’s a testament to the technology’s capacity to streamline operations, enhance security, and provide a more customer-centric approach.

Next, we’ll explore how Generative AI is shaping the world of marketing and advertising, where creativity and precision are paramount.

Marketing and Advertising: Redefining Creativity with Gen-AI

Marketing and AI:

Marketing was one of the first industries to quickly adopt and feel the impact of generative AI, acting as a great use case for other industries. It is turning tasks that typically could take hours, even weeks, into minutes without sacrificing the creativity required in marketing.

This was the one hurdle critics never thought artificial intelligence would cross. But it has with flying colors. In this section, we’ll explore how Generative AI technology is revolutionizing marketing and advertising, from content creation to targeted advertising and gaining insights into consumer behavior.

Content Generation:

Generating captivating and relevant content is a cornerstone of successful marketing. Generative AI is making this easy with tools like ChatGPT and Midjourney to name a few. By analyzing vast datasets of text, images, and video, AI can generate compelling content, including articles, product descriptions, and even advertisements.

This not only saves time but also ensures consistency and relevance in content creation, enabling marketers to engage their audience more effectively.
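In practice this often amounts to templated prompts sent to a hosted model. The sketch below uses a placeholder generate function rather than a specific vendor SDK, since API details vary; the prompt template and product details are invented.

def generate(prompt: str) -> str:
    # Placeholder for an LLM completion call; swap in your provider's SDK here.
    return "[model-written copy would appear here]"

PRODUCT_PROMPT = (
    "Write a 50-word product description for {name}. "
    "Highlight: {features}. Tone: {tone}. Audience: {audience}."
)

def product_description(name, features, tone="friendly", audience="casual runners"):
    prompt = PRODUCT_PROMPT.format(name=name, features=", ".join(features),
                                   tone=tone, audience=audience)
    return generate(prompt)

print(product_description("CloudStep Sneaker",
                          ["recycled mesh", "extra cushioning", "lightweight sole"]))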

Targeted Advertising:

Effective advertising hinges on reaching the right audience with the right message at the right time. Generative AI optimizes this process by leveraging data analytics and machine learning algorithms.

For example, Meta, which has a $114-billion-a-year ad platform, just announced generative AI features for advertisers. These features allow for quick generation of subtitle copy and design tweaks between ads, allowing quicker and more efficient A/B testing. This is just the beginning, as the war for eyeballs will commence between the major digital advertising players.

A triptych of digital advertisement examples. The left panel showcases an ad with the caption "Image expansion" featuring food from "Jasper's Market." The middle panel, labeled "Background generation," displays an ad for a bag with various background images. The right panel, titled "Text variations," presents an advertisement with different textual descriptions.
A compilation of digital advertising techniques: expanding images for food items, generating diverse backgrounds for product displays, and experimenting with text variations for ad content.

Customer Insights:

Understanding consumer behavior is crucial for crafting winning marketing strategies. Generative AI excels in this area by analyzing vast datasets of consumer interactions, social media activity, and purchasing habits. AI algorithms can identify patterns and trends, allowing marketers to gain valuable insights into what drives consumer choices.

This data-driven approach empowers marketers to fine-tune their strategies, optimize campaigns, and create more engaging content that resonates with their target audience.

Generative AI’s integration into marketing and advertising is redefining how brands connect with consumers. It’s not just about automation; it’s about elevating creativity and personalization to unprecedented levels.

However, with this innovation comes increased noise as the barrier of entry to create content drops to new lows. The ability to stand out and differentiate will likely get harder. Not easier.

Next, we’ll dive into the manufacturing and industry 4.0 sector, where Generative AI is optimizing operations and driving efficiency.

Manufacturing and Industry 4.0: Transforming Operations with Gen-AI

Manufacturing and AI:

Manufacturing has entered a new era with the advent of Industry 4.0, and at its core is the integration of artificial intelligence (AI). Generative AI, in particular, is revolutionizing modern manufacturing by enhancing efficiency, productivity, and innovation. In this section, we’ll delve into how Generative AI technology is reshaping manufacturing and Industry 4.0, from predictive maintenance to product design and supply chain optimization.

Predictive Maintenance:

Deloitte estimates that, on average, predictive maintenance increases productivity by 25%, reduces breakdowns by 70%, and lowers maintenance costs by 25%. Generative AI has the potential to push these numbers even further. Traditionally, machinery maintenance was scheduled at regular intervals, often leading to unnecessary downtime and costs.
A series of yellow robotic arms in a well-lit warehouse aisle, with shelves stacked with boxed goods on either side.
Advanced robotic arms streamline operations in a modern warehouse, surrounded by rows of neatly organized packages.

Generative AI changes this by continuously monitoring equipment through sensors and analyzing data in real-time. It can predict when a machine is likely to fail and trigger maintenance just in time, minimizing disruptions and reducing maintenance expenses.

This proactive approach ensures that production lines run smoothly, optimizing overall efficiency.
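The core of a predictive-maintenance loop is a model that maps recent sensor readings to a failure risk. Here is a minimal scikit-learn sketch on made-up vibration and temperature features; a real deployment would use streaming sensor data, far richer features, and a validated threshold.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Invented training data: [vibration_rms, bearing_temp_C]; label 1 = failed soon after.
healthy = np.column_stack([rng.normal(0.3, 0.05, 500), rng.normal(60, 3, 500)])
failing = np.column_stack([rng.normal(0.8, 0.10, 50), rng.normal(78, 4, 50)])
X = np.vstack([healthy, failing])
y = np.array([0] * 500 + [1] * 50)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# A live reading from a machine on the line.
reading = np.array([[0.75, 76.0]])
risk = model.predict_proba(reading)[0, 1]
if risk > 0.5:
    print(f"Schedule maintenance: estimated failure risk {risk:.0%}")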

Product Design and Prototyping:

Generative AI plays a pivotal role in product design and prototyping. By analyzing design parameters and constraints, AI can generate and refine design concepts rapidly.

This accelerates the design process and also encourages innovation by exploring design possibilities that are overlooked by human designers. Additionally, Generative AI aids in the creation of prototypes by generating 3D models and simulations, facilitating rapid iteration and minimizing costly physical prototypes.

Supply Chain Optimization:

Efficient supply chain management is critical in modern manufacturing. AI algorithms can analyze vast amounts of data from suppliers, logistics, and demand forecasts to optimize the entire supply chain. This includes managing inventory levels, minimizing transportation costs, and ensuring timely deliveries.

Supply chain optimization not only reduces operational costs but also enhances responsiveness to market changes, ultimately improving customer satisfaction.
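One classical building block here is linear programming for shipment allocation, with AI supplying better demand and cost forecasts to feed the solver. Below is a toy allocation between two warehouses and two stores using SciPy; the capacities, demands, and per-unit costs are invented.

import numpy as np
from scipy.optimize import linprog

# Decision variables: units shipped on each lane
# [w1->storeA, w1->storeB, w2->storeA, w2->storeB]; costs are illustrative.
cost = np.array([4.0, 6.0, 5.0, 3.0])  # dollars per unit per lane

# Warehouse capacity (<=): warehouse 1 holds 80 units, warehouse 2 holds 70.
A_ub = np.array([[1, 1, 0, 0],
                 [0, 0, 1, 1]])
b_ub = np.array([80, 70])

# Store demand must be met exactly: store A needs 60 units, store B needs 50.
A_eq = np.array([[1, 0, 1, 0],
                 [0, 1, 0, 1]])
b_eq = np.array([60, 50])

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 4, method="highs")
print("Shipment plan:", res.x, "| total cost:", res.fun)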

Generative AI’s integration into manufacturing and Industry 4.0 is driving a paradigm shift in how products are designed, produced, and delivered. It’s not just about streamlining processes; it’s about fostering innovation and adaptability, ensuring that manufacturing remains at the forefront of technological advancement.

Next, we’ll explore how Generative AI is reshaping the retail and e-commerce sector, where personalized experiences are key to success.

Education and E-Learning: Personalized Learning Powered by Generative AI

Education and AI:

Generative AI has the potential to change how we learn and educate across the globe. The fusion of education and AI is reshaping learning experiences, making them more personalized, adaptive, and effective. While some are quick to ban tools like ChatGPT, there is an opportunity to enhance the learning experience for both teachers and students.

In this section, we’ll delve into how Generative AI is revolutionizing education and e-learning, from tailoring learning experiences to automating grading and facilitating language learning.

Personalized Learning:

One of the most profound impacts of Generative AI in education is personalized learning. Traditional classrooms often employ a one-size-fits-all approach, which may not cater to the unique needs and pace of individual learners. Generative AI changes this by analyzing student performance data and learning styles to create customized learning paths.

This ensures that each student receives content and assignments tailored to their strengths and weaknesses, optimizing their learning experience and outcomes.
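At its simplest, the personalization engine is a policy that picks the next activity from a learner's estimated mastery, while generative models author the content itself. A toy selection rule in Python, with invented skills, scores, and exercises:

# Invented mastery estimates (0-1) from a student's recent work.
mastery = {"fractions": 0.85, "decimals": 0.55, "word_problems": 0.35}

# Invented item bank: each exercise targets one skill at a difficulty level.
exercises = [
    {"id": "ex1", "skill": "fractions", "difficulty": 0.9},
    {"id": "ex2", "skill": "decimals", "difficulty": 0.6},
    {"id": "ex3", "skill": "word_problems", "difficulty": 0.4},
    {"id": "ex4", "skill": "word_problems", "difficulty": 0.8},
]

def next_exercise(mastery, exercises):
    # Target the weakest skill, at a difficulty just above current mastery.
    weakest = min(mastery, key=mastery.get)
    candidates = [e for e in exercises if e["skill"] == weakest]
    return min(candidates,
               key=lambda e: abs(e["difficulty"] - (mastery[weakest] + 0.1)))

print(next_exercise(mastery, exercises))  # picks a gentle word-problem step up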

Automated Grading:

Grading and assessment are essential components of education, but they can be time-consuming for educators. Generative AI automates this process, relieving teachers of the burden of manual grading. AI algorithms can evaluate assignments, quizzes, and exams quickly and consistently, providing instant feedback to students. This not only streamlines the grading process but also allows educators to focus on more meaningful aspects of teaching, such as providing mentorship and support.
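A hedged sketch of how such grading is commonly wired up: the rubric travels in the prompt and the model's score is treated as a draft for the teacher to review. The call_llm function below is a placeholder, not a specific product's API, and the rubric is illustrative.

import json

def call_llm(prompt: str) -> str:
    # Placeholder for a hosted language-model call; the real SDK call is not shown.
    return json.dumps({"score": 4, "feedback": "Clear thesis; cite one more source."})

RUBRIC = ('Score the essay from 1-5 on thesis clarity, evidence, and organization. '
          'Return JSON: {"score": <int>, "feedback": "<one sentence>"}')

def grade(essay_text: str) -> dict:
    prompt = f"{RUBRIC}\n\nESSAY:\n{essay_text}"
    return json.loads(call_llm(prompt))  # a draft grade only -- the teacher confirms

print(grade("The Industrial Revolution reshaped cities because..."))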

Language Learning Tools:

Language learning is another area where Generative AI shines. AI-powered language learning apps leverage natural language processing (NLP) to understand and respond to learners’ speech and writing.

These apps provide personalized lessons, practice exercises, and even conversation partners, enhancing language acquisition. Duolingo, a leader in this space, is using Gen-AI to make language learning more engaging and interactive. Its Roleplay feature lets users practice real-world conversation skills with AI characters in the app. The best part is they never get tired of talking to you.

A Duolingo Roleplay conversation in which a user named Megan orders a café au lait from a virtual barista in French.

Generative AI makes language learning more engaging and accessible, breaking down language barriers for global learners.

The integration of Generative AI into education and e-learning represents a fundamental shift in how knowledge is imparted and acquired. It’s not just about automating tasks; it’s about enhancing the quality and effectiveness of education, ensuring that learners have the tools and support they need to succeed.

Next, we’ll explore how Generative AI is reshaping the world of entertainment and content creation, where creativity knows no bounds.

Entertainment and Content Creation: A Creative Revolution with Generative AI

Entertainment and AI:

The entertainment industry has always been at the forefront of innovation, and the adoption of generative AI is no different. AI is reshaping how we create, consume, and enjoy content.

In this section, we’ll explore how Generative AI is revolutionizing entertainment and content creation, from generating music, art, and literature to enhancing film and video production and even influencing the world of gaming.

Content Generation:

Generative AI has unlocked the door to limitless creativity. It can generate music, artwork, and even literature from a simple prompt. Music composition algorithms can analyze existing melodies and styles to create original compositions. AI artists can generate paintings, sculptures, and digital art that captivate audiences. Readers can explore never-ending AI-generated stories and poems.

This not only pushes the boundaries of human creativity but also democratizes art, making it accessible to a broader audience.

Film and Video Production:

Gen-AI’s impact on film and video will no doubt be huge. It has even sparked a battle between Hollywood writers and AI, with screenwriters holding out through a 148-day strike.

Nonetheless, AI will have a big impact in the future. AI-powered video editing tools can analyze footage, automatically cut scenes, and even suggest the most emotionally engaging sequences.

Special effects, once reserved for big-budget productions, are now within reach through AI-generated visuals. From enhancing visual effects to automating mundane editing tasks, AI elevates the quality and efficiency of film and video production.

A German tech entrepreneur is using AI-powered programs like Midjourney to create the footage, sound effects, and voices for a 70s-inspired sci-fi film.

Gaming:

Generative AI is making a significant impact on the gaming industry, influencing both game development and gameplay. AI-driven algorithms can generate game environments, characters, and even storylines.

This not only accelerates game development but also fosters innovation by creating unique gaming experiences. In gameplay, AI opponents can adapt and learn from players’ actions, providing dynamic and challenging experiences. AI also enhances player experiences through features like real-time translation and voice recognition.

The integration of Generative AI into entertainment and content creation is pushing the boundaries of what’s possible. It’s not just about automating tasks; it’s about unlocking new levels of creativity and interactivity, ensuring that entertainment remains a source of wonder and inspiration for audiences worldwide.

Next, we’ll explore how Generative AI is contributing to sustainability and environmental monitoring in the energy sector.

Energy and Sustainability: Transforming the Future with Generative AI

Energy and AI:

The energy sector, a cornerstone of modern life, has the potential to be completely transformed with Generative AI. The combination of energy and AI is redefining how we produce, distribute, and monitor energy, making it more efficient and sustainable.

In this section, we’ll explore how Generative AI is revolutionizing the energy sector, from optimizing energy distribution to advancing renewable energy solutions and monitoring environmental conditions.

Grid Management:

Efficient grid management is critical for ensuring a stable and reliable energy supply. Generative AI plays a pivotal role in this aspect by analyzing vast amounts of data from sensors, weather forecasts, and energy demand patterns. AI algorithms can optimize energy distribution, predict peak demand periods, and even reroute energy flows to prevent outages.

Gridmatic, a company focused on bringing AI to the climate change fight, says: “AI is not just useful, but necessary. We use multiple forms of AI, but fundamentally we have built a model of the US electricity grid down to the nodal level. This foundation enables AI-powered forecasting.”

This not only enhances grid reliability but also reduces energy waste and costs, ultimately benefiting both utilities and consumers.
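
To illustrate the forecasting piece, here’s a naive seasonal baseline of the kind a grid operator’s AI stack would start from: predict each hour of tomorrow as the average of the same hour of the week in recent weeks. The load data here is synthetic.

```python
# Naive seasonal demand forecast (synthetic data; illustrative only).
import numpy as np

def seasonal_forecast(hourly_load, horizon=24, season=24 * 7):
    """Forecast the next `horizon` hours from the same hour-of-week in past weeks."""
    load = np.asarray(hourly_load, dtype=float)
    n = len(load)
    forecast = []
    for h in range(horizon):
        t = n + h                                   # index of the hour being forecast
        idx = np.arange(t - season, -1, -season)    # same hour in previous weeks
        forecast.append(load[idx].mean())
    return np.array(forecast)

# Eight weeks of synthetic hourly load with a daily cycle plus noise.
hours = np.arange(24 * 7 * 8)
load = 100 + 20 * np.sin(2 * np.pi * hours / 24) + np.random.normal(0, 2, hours.size)
tomorrow = seasonal_forecast(load)
print(f"Predicted peak hour tomorrow: {int(np.argmax(tomorrow))}, "
      f"peak load ~{tomorrow.max():.0f}")
```

Real grid models add weather inputs, nodal detail, and learned components, but the output, an hour-by-hour demand forecast to plan dispatch against, looks the same.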

Renewable Energy:

The transition to renewable energy sources is a global imperative, and Generative AI is accelerating this shift. AI algorithms can analyze weather data, solar and wind patterns, and energy consumption trends to optimize the integration of renewable energy sources into the grid.

This ensures that renewable energy is harnessed efficiently and reduces the reliance on fossil fuels, contributing to a more sustainable and eco-friendly energy landscape.

Environmental Monitoring:

Environmental monitoring is crucial for assessing the impact of energy production on the environment. Generative AI aids in this endeavor by analyzing data from remote sensors, satellites, and ground-based measurements. AI can detect changes in air quality, monitor emissions, and assess ecological impacts.

This data-driven approach not only enhances environmental stewardship but also enables timely interventions to mitigate environmental damage.

The integration of Generative AI into the energy sector is shaping a more sustainable and efficient future. It’s not just about optimizing energy; it’s about preserving our planet for future generations, ensuring that energy production aligns with environmental responsibility.

In the next section, we’ll explore how Generative AI is revolutionizing the agricultural industry, where precision and sustainability are paramount.

Agriculture: Cultivating a Sustainable Future with Generative AI

Agriculture and AI:

The agricultural industry, essential for feeding the world’s population, can benefit heavily from advancements in generative AI. The combination of agriculture and AI is revolutionizing farming practices, making them more precise, productive, and sustainable.

In this section, we’ll explore how Generative AI is reshaping agriculture, from precision farming to early disease detection and accurate yield predictions.

Precision Agriculture:

Precision is paramount in agriculture, and Generative AI enhances it significantly. Precision agriculture leverages AI-powered tools to analyze data from sensors, satellites, and drones to create detailed maps of fields.

These maps provide insights into soil health, moisture levels, and crop growth, allowing farmers to optimize irrigation, fertilizer application, and planting patterns. The result is higher crop yields, reduced resource consumption, and improved sustainability.

A drone surveys an expansive crop field on a bright day, capturing the contrast between golden crops and the blue sky dotted with clouds.

Crop Disease Detection:

Early detection of crop diseases is crucial for minimizing crop loss. Generative AI plays a vital role in this area by analyzing images of crops and leaves to identify signs of diseases, nutrient deficiencies, and pest infestations.

By spotting these issues early, farmers can take proactive measures, such as targeted pesticide application or crop rotation, to safeguard their harvests and minimize environmental impact.
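
As a toy stand-in for a trained vision model, the sketch below screens a leaf image by measuring the share of discolored pixels. Real systems use deep learning classifiers trained on labeled field images; the color thresholds here are invented.

```python
# Toy leaf-screening heuristic (a crude stand-in for a trained disease classifier).
import numpy as np

def suspicious_leaf(rgb_image, discolor_share=0.15):
    """rgb_image: HxWx3 uint8 array. True if yellow/brown discoloration exceeds threshold."""
    r = rgb_image[..., 0].astype(int)
    g = rgb_image[..., 1].astype(int)
    b = rgb_image[..., 2].astype(int)
    # Crude "yellow/brown" mask: red dominates where a healthy leaf would be green.
    discolored = (r > 120) & (g < r * 0.9) & (b < 100)
    return discolored.mean() > discolor_share

# Synthetic example: a mostly green leaf with a brown patch covering 20% of it.
leaf = np.zeros((100, 100, 3), dtype=np.uint8)
leaf[..., 1] = 160                                                # green everywhere
leaf[:20, :, 0], leaf[:20, :, 1], leaf[:20, :, 2] = 180, 120, 40  # brown strip
print(suspicious_leaf(leaf))  # True -> flag this plant for inspection
```

The point is the workflow: score every image coming off the field, flag outliers, and send an agronomist only to the plants that need attention.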

Harvest Prediction:

Accurate yield predictions are essential for efficient farm management and supply chain planning. Generative AI leverages historical data, weather forecasts, and satellite imagery to provide precise yield predictions.

These predictions help farmers make informed decisions about harvesting, storage, and transportation, ultimately reducing food waste and ensuring a stable food supply.

SpaceAG, an agtech startup, is doing just this with an algorithm trained on 11 varieties of blueberries that recognizes their different phenological stages (flower, green, purple, blue). This recognition and counting gives farmers a better prediction of crop yield, allowing for improved quality and quantity.
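
For intuition, the following sketch fits a tiny least-squares model that predicts yield from a few field-level features. Every number and feature name is invented; production systems use far richer inputs (imagery, weather series, phenology counts like SpaceAG’s) and more capable models.

```python
# Hedged sketch: yield prediction from made-up field features via least squares.
import numpy as np

# Columns: rainfall_mm, avg_temp_c, ndvi (satellite greenness index)
X = np.array([
    [420, 21.0, 0.61],
    [380, 23.5, 0.55],
    [510, 19.8, 0.70],
    [450, 22.1, 0.64],
    [300, 24.0, 0.48],
])
y = np.array([6.1, 5.2, 7.4, 6.6, 4.3])  # observed yields, tons per hectare

# Fit y ~ X @ coeffs + intercept with ordinary least squares.
A = np.hstack([X, np.ones((len(X), 1))])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

new_field = np.array([430, 21.5, 0.63, 1.0])  # same features plus intercept term
print(f"Predicted yield: {new_field @ coeffs:.1f} t/ha")
```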

Integrating Generative AI into agriculture is not just about improving farm operations; it’s about cultivating a sustainable future. It empowers farmers to produce more with fewer resources, minimize environmental impact, and contribute to global food security.

In the final section, we’ll recap the diverse industries where Generative AI is making a significant impact and highlight its potential to shape a more innovative and sustainable world.

Future Potential: Navigating the Expanding Horizons of Generative AI

The Expanding Role of Generative AI:

As we’ve explored, Generative AI is already making waves across various industries, transforming how we work, create, and live. Its role is poised to expand even further as AI technologies continue to evolve.

In the future, we can expect Generative AI to play a more significant role in fields such as healthcare, finance, education, entertainment, energy, and agriculture. The boundaries of what is possible with AI-driven creativity, precision, and efficiency are continually expanding, opening new doors for innovation and progress.

Ethical Considerations:

With great power comes great responsibility, and Generative AI is no exception. As its influence grows, ethical considerations become paramount. Questions about data privacy, bias in AI algorithms, and the potential for misuse need to be addressed.

Responsible AI development and deployment, along with robust ethical frameworks, will be crucial to ensure that Generative AI benefits society without harming it.

On the HatchWorks Built Right podcast, Jason Schachter – founder of AI Empowerment Group, dug into how you should consider risk when vetting use cases.

Challenges and Future Trends:

While Generative AI holds immense promise, it also faces challenges. These challenges include addressing the “black box” problem in AI, where decisions made by algorithms are not easily explainable, as well as refining AI’s understanding of context and nuance.

In addition to challenges, several trends will shape the future of Generative AI. These include increased automation in various industries, the development of AI-powered creativity tools for artists and content creators, and the continued integration of AI into everyday life through voice assistants, smart devices, and autonomous systems.

Generative AI’s future is bright, but it must be navigated with care, considering ethical implications and addressing challenges. As we continue to harness the power of Generative AI, we are on a path toward a more innovative, efficient, and sustainable world, where human creativity and AI-driven precision coexist harmoniously.

See how HatchWorks is leading the way in AI-powered software development.

Our Generative-Driven Development™ leverages cutting-edge AI to optimize your software projects.

Discover the difference on our page.

Conclusion: Embracing the Transformative Power of Generative AI

Generative AI is not just a technology; it’s a catalyst for change. It optimizes operations, empowers creativity, and enhances decision-making across numerous sectors. As it continues to evolve, its influence is expanding, promising to reshape even more facets of our lives, from education to agriculture, and energy to sustainability.

The key takeaways are clear:

  • Gen-AI is a disruptive force, revolutionizing industries and redefining standards.
  • Personalization, efficiency, and sustainability are central themes across sectors.
  • Ethical considerations and responsible AI use must guide its development and deployment.
  • Challenges are opportunities for growth, and the future holds exciting trends.

We encourage you to explore and leverage the power of Generative AI in your field, whether you’re a healthcare professional, financial analyst, marketer, educator, filmmaker, farmer, or anyone else. Embrace it with curiosity, creativity, and responsibility.

HatchWorks: Your US-Based Nearshore Software Development Partner

At HatchWorks, we understand the importance of leveraging generative AI responsibly and ethically.

As a software development partner, we harness the power of generative AI to build innovative digital products that meet your unique needs and expectations, tailored to your industry.

Reach out to us to learn more about how we can help you harness the potential of generative AI for your projects.

The post Generative AI Use Case Trends Across Industries: A Strategic Report appeared first on HatchWorks.

The AI-EQ Connection: How Emotionally Intelligent AI is Reshaping Management https://hatchworks.com/built-right/how-emotionally-intelligent-ai-is-reshaping-management/ Tue, 17 Oct 2023 06:00:20 +0000 https://hatchworks.com/?p=30045 As more and more companies bolt AI onto their products, the importance of being intentional with our use of it is growing fast.  So what are the best ways to embed AI into management products in a purposeful way?   In this episode of the Built Right podcast, we speak to Brennan McEachran, CEO and Co-Founder […]

The post The AI-EQ Connection: How Emotionally Intelligent AI is Reshaping Management appeared first on HatchWorks.


As more and more companies bolt AI onto their products, the importance of being intentional with our use of it is growing fast. 

So what are the best ways to embed AI into management products in a purposeful way?  

In this episode of the Built Right podcast, we speak to Brennan McEachran, CEO and Co-Founder of Hypercontext, an AI-empowered tool that helps managers run more effective one-on-ones, leading to better performance reviews.  

Brennan shares how he and his co-founder created the tool to aid their own work and later found how it could enhance multiple areas of performance management for leadership teams, creating faster, more streamlined processes. 

Read on to find out how AI can improve EQ in the workplace or listen to the podcast episode below. 

The story behind Hypercontext 

Hypercontext has a unique, organic backstory. Brennan and his co-founder, Graham McCarthy, worked together at a previous company, gaining enough experience as builders and sellers of products to become managers. 

As they focused on being manager-operators, Brennan and Graham concluded that their strengths still lay in building great products. They began building small-scale desk tools to make their work easier and, as COVID struck and everyone became a remote manager overnight, they made this their main focus. 

Brennan shares that one of the products that steadily took off was what would later become Hypercontext, helping managers become the best bosses their team has ever seen. 

Initially guiding managers through one-to-one and internal meetings using the superpowers of AI, Hypercontext has branched out into providing useful tools for performance reviews too. 

 

How AI is quietly revolutionizing HR 

Brennan remembers first taking demos out to HR managers and receiving a mixed response. 

Despite loving the concept, these managers were skeptical because of its use of AI and feared that it was too forward-thinking.

However, the boom of ChatGPT and other AI tools in late 2022 caused a change of heart. Many HR professionals also realized that their managers had been using AI tools for their performance reviews for a while and warmed to the idea that it could be used to enhance their meetings. 

Brennan notes that cultural reservations can stand in the way of progress. With the AI wave tending to hit tech first, he warns: “If we’re not ready for it, we’re in for a world of hurt.”

 

How AI is transforming EQ  

One of the main concerns surrounding AI is its lack of human touch. But Brennan suggests that, used in the right way, it can actually enhance the things that make us human. 

All managers, in HR or otherwise, have those tasks that are regularly cast aside in favor of more ‘pressing’ jobs. If they had “infinite time”, maybe things would be different. Brennan suggests that AI can take these tasks on and streamline the working processes of managers. 

He also explains how Hypercontext can provide the information that makes us more human. From its conversation starters, to the data it gathers about team members, it can actually make reviews and meetings more empathetic. 

Brennan says: “I think a lot of people have fears about AI taking jobs or removing the humanity in certain things. Done right, AI has that potential to make you more human, in and of yourself.”   

 

The future of developers

Did you know that, among developers who use GitHub Copilot, around half of the code they commit to GitHub is AI-generated? It’s no secret that AI will impact software development, but this fact raises the question – what does the future hold for developers?

Brennan thinks this is the time for software developers to pivot to a new focus and suggests those doing “wildly different things” could be setting themselves up for success.  

 

Using AI to write performance reviews 

When Brennan realized that AI could be used to write performance reviews, he was hesitant to fight big-name industry players to find a solution. However, he was determined to be the person to do it the right way. 

He explains that he didn’t want to see a bolt-on tool created that “generates superfluous text around nothing” and was eager to make something that genuinely made managers better in their work. 

Brennan explains how Hypercontext allows managers to compile findings from multiple peer- and self-reviews, identify key themes and tee up the conversations to build upon these themes, all in a minute; something a human just couldn’t do! 

He adds: “80% of people feel like our process is both better and faster than their previous one. Who wouldn’t want that?” 

Fueled by the desire to make this tool the right way and prove that AI can enrich HR management, Hypercontext built a one-of-a-kind tool and set the HR-AI standard in the process.

For more insights into using AI intentionally to become a better manager, head to episode 15 of the Built Right podcast.

Step into the future of software development with HatchWorks – see how our Generative-Driven Development™ can transform your projects.

Matt (00:02.067)

Welcome, Built Right listeners. Today, we’re chatting with Brennan McEachran, CEO and co-founder of Hypercontext, an AI-empowered tool that helps managers run more effective one-on-ones, which leads to better performance reviews. And it’s trusted by 100K managers, companies like Netflix, HubSpot, and Zendesk. Welcome to the show, Brennan.

Brennan (00:23.414)

Thank you for having me. So excited to be here.

Matt (00:25.547)

Yeah, excited to talk today. So the topic we have for our listeners is one everyone really needs to stop what they’re doing and listen to. And today we’re going to get into how you should be strategically thinking about embedding AI into your products in an intentional way. And with the current, you know, AI hype, hype cycle we’re in, where everybody and their mom is bolting AI onto their products. And I don’t mean that in a good way necessarily. This is a conversation worth.

worth having, but, but Brennan, as a way to kind of set context, I love getting into our guests with, you know, what problem they saw in the market that kind of triggered them to start their, their company, give some good context for the background.

Brennan (01:08.806)

Awesome, can do on the context side.

I think the story of Hypercontext, the story of us founding it is sort of an organic one. Myself and my co-founder, Graham, had a previous company. We’ve been working together for a little over a decade and that previous company was successful enough or maybe we were successful enough at building and selling product that we ended up becoming managers. We had enough employees and staff around us to help us build a bigger and better business. And as we stopped…

building and selling, we realized that we sort of accidentally fell into a new career of managing and that this new task of being manager-operators is

Matt (01:51.201)

Mm-hmm.

Brennan (01:54.358)

completely different and very hard. So we did what we knew best, which was build. We built some little side of desk tools to help us be better managers and be better bosses. And as a long story short on that business, as COVID kind of came around and wiped out industries temporarily, we were sort of caught up in that mess and that business disappeared almost overnight. But these little side of desk projects that we had built.

Matt (02:18.698)

Yeah.

Brennan (02:23.966)

Uh, exploded everyone in the world became a remote manager overnight in the middle of a crisis and, uh, felt the pain of being a manager, uh, and being a remote manager and all of the problems that come along with that. And these little tools that we put out on the internet went from, you know, a couple of signups here and there to, uh, in some cases, thousands of thousands a week. Um, and so we, you know, made some tough choices, but otherwise we’re able to

Matt (02:29.215)

Mm-hmm.

Brennan (02:54.36)

pivot almost all of our energy towards what you see today. Hypercontext, which is as you mentioned, building tools to make managers the best bosses their team has ever seen. So we start with one-on-ones, we start with internal meetings, team meetings, add goals to it all the way through to now just recently launching performance reviews and I think that sort of leads into trying to build the performance review piece the right way.

Matt (03:18.616)

Hmm.

Matt (03:23.071)

Yeah, I love the, it’s kind of like the Slack story, right? Where you kind of built this thing on the side and powering it like, oh, this thing actually has legs. And I was just chatting with a friend yesterday, same kind of thing. They had like this side thing they had built and people were asking about it. And it’s like, well, maybe this is the thing. And it’s kind of an interesting story when, you know, something like a pandemic just changes your whole business model, right?

Brennan (03:47.626)

Yeah, I think the saying right of like scratch your own itch is sort of relevant here. We, um, we definitely started it as like, uh, something to scratch your own itch and, um, as early as we possibly could try to get, um, external people’s input on it. Um,

Matt (03:51.805)

Mm-hmm.

Brennan (04:07.858)

One of the things that I think I learned in the first business is like what you want and you know, what helps you is not always the exact same thing as what helps other people. So we tried to look for the general solutions, um, to some of these problems instead of a specific ones that would work for me being, you know, uh, you know, a tech guy, a product guy, whatever, we wanted to look for something that had sort of that broader appeal. And that’s actually how we landed on one-on-ones. We, we initially thought, Hey, there’s maybe more meetings that we could.

Matt (04:17.732)

Mm-hmm.

Brennan (04:37.992)

And when we went out and tried to talk to people and figure out a general solution, the amount of build we would have to do was just so big. We ended up looking for, like, what are some commonalities? One-on-ones ended up being really appealing.

Matt (04:47.817)

Yeah.

Brennan (04:53.934)

for a variety of reasons, but one of the main ones is that an engineer having a one-on-one with their manager is very similar to anyone else at any other company having a one-on-one with their manager, almost by definition, right? You’re not supposed to talk about the tactical day-to-day stuff, and so you talk about more of the meta conversations, which can be similar. So that sort of led us down some of these pathways.

Matt (05:04.087)

Mm-hmm.

Matt (05:21.463)

Yeah, I think that’s an interesting point, like just to pause there for anybody in product, you know, we talk about building the right solution and building it the right way. Building the right solution, start with a smaller use case. That’s a critical piece. Like you could have boiled the ocean and tried to figure out every meeting under the sun and all this stuff. And then your head would just explode with everything under HR. But you started with the one-on-one because it was one that, you know, universal.

It needed help, right? So you identified this problem in the market. I just love that. And now it’s turning into more as you’ve built proof behind it.

Brennan (06:00.114)

Yeah, you know, we started with just exploring the, the exploration sort of led us to say, you know, and especially coming from the last business where it was a lot of change management to kind of sell the product. Um, we wanted to avoid some of the change management. So we’re like, what is already existing? Um, and the only thing that I could kind of point to as proof was like the calendar. So like

When we’re building some of these products, it was what it already exists on the calendar. Let’s not make people do something new. Let’s look at their calendar first and see if there’s anything we can do on that calendar to make it 10 times better. And so, um, you know, the one-on-one was there. Um, so was the team meetings was the board meeting. So was that, you know, the QBRs, all these other types of meetings. The interesting thing, there’s so many things that are interesting about, um, uh, one-on-ones for us as a business, um, you know, almost every manager has one. So there’s lots of entry points into the organization.

Matt (06:29.495)

Mm-hmm.

Matt (06:54.881)

Yeah.

Brennan (06:55.212)

Which was a key piece of what we thought our strategy would be. Very easy to try, because you can try it with one person. You don’t have... like, a town hall is tough to try; you have to do it with your whole company.

Matt (07:09.615)

Hmm.

Brennan (07:10.938)

Um, so you, so with a one-on-one, you can pick the most open-minded person on your team to try out the product. If it works well, you sort of get into some of the other things. It’s, it’s, um, very replicable, right? If you have something that works with one person, it should work for other people. Um,

So many other things it can spread, right? Like you have a one-on-one with direct reports. You also have one with your boss, right? You’re all the way up to the CEO and the CEO all the way down to a different department so you can spread it exists on the calendar. So many things that led us to it. And just because you have seven direct reports, seven one-on-ones, um, plus your boss, plus maybe a peer one-on-one just by frequency of meeting it’s, uh, it’s a very high frequency meet meeting. Um, there’s way more one-on-ones than there are team meetings at, at businesses. So.

Matt (07:32.803)

Mm-hmm. Yeah.

Matt (07:56.629)

Mm-hmm.

Brennan (07:57.424)

All of these things as we sort of bumped into it, we said, you know, hey, maybe there’s something here. What would it look like to do a 10x better job? And sort of honed in on that use case. What are people already using for it? What are they, you know, what are the good, the bad? Who are some of the competitors? And for a long time, the only people building tools for this space were the big boring HR companies, right? And like, no one wants to open up.

Matt (08:22.689)

Mm-hmm.

Brennan (08:24.53)

SAP, you know, or ADP and go into like this tiny little module to fill in a text box when you can have Apple Notes or something like that. So, zoomed in on that for sure.

Matt (08:36.241)

I still have nightmares from using one of those, but the time entry system, I’m like what button do I click? Who designed this thing? But they can’t get out of their own way because they have so much legacy, just what’s the word, technical debt that exists there, right?

Brennan (08:53.422)

And like they have to cover so many things or they have to do payroll for globe for, for every culture and company type and all that stuff. And you’re just one tiny module on there. So.

Matt (08:56.895)

Mm-hmm. Yeah. Oh my god. Yeah

Matt (09:05.059)

Yeah, but a lot of great, you know, PLG type of motions there, like you mentioned, the high frequency of using the product, building the habit. I think we talked about the book Hooked, which if anybody has not read that, check that out. It’s great. And there it is, the yellow blur and the background that stands out like a sore thumb, which is another great way of standing out. But I want to, let’s get into now AI, right? So your company was started post pandemic. This was pre.

Gen AI, you know, large language model craziness, even though they’ve been around for a long time, the crazy hype there. And you had AI integrated into the tool, but I’d love to get into this evolution. Cause one thing that struck me, uh, when we talked earlier, it’s like somebody’s going to do this talking about competitors, embedding AI.

but they’re just not going to do it the right way, right? And we want to do it the right way. But talk about that evolution, because so many folks, they just bolt on AI. It’s from a marketing perspective. They just want to key into the hype. But it’s such a bad way to do it from a strategic standpoint.

Brennan (10:04.118)

Yeah.

Brennan (10:10.57)

Yeah, it’s funny the place where we have AI the least right now is actually on the marketing side of something that we’re trying to fix. It’s definitely pretty heavy on the product. No, you’re exactly right. We wanted to build, um, the best, you know, for, for lack of a better term, we wanted to build the world’s best one-on-one tool for managers. Right. Um, and.

Matt (10:18.391)

Yeah.

Matt (10:34.648)

Mm-hmm.

Brennan (10:39.586)

that mission will never truly be accomplished because the market moves so quickly and we always have to serve the managers in that use case. But like largely, you know, quote unquote, mission accomplished. We sort of have the best hyper-connected workspace for one-on-ones for managers, whether it’s, you know, just one-on-ones or you wanna bring that team in once we have it for team meetings.

Matt (11:01.281)

Mm-hmm.

Brennan (11:03.978)

We added goals to it. So if you’re working on professional development goals, you’re working on team goals or OKRs. We have the largest library of goal and OKR examples on the internet built into the product. Like, you know, largely anything a manager, a team lead would need out of a platform for leading their team, sort of built it out of the box, PLG, go try it for free. And…

Matt (11:28.062)

Mm-hmm.

Brennan (11:31.182)

I mentioned some of the benefits of one-on-ones and some of these team meetings and that we get this organizational spread. Well, that started to happen, right? We would start to spread across these organizations through calendar invites. As people sort of discovered our tool and shared our tool, if COVID taught us nothing else, it’s that, you know, look for the super spreaders, right? Like we were sort of looking for the people who would spread our tool internally. And they did. And then

Matt (11:45.067)

Hmm.

Brennan (12:01.09)

Because we say the word manager so often, because we say the word one-on-one so often, when it came time for the organization to look at this tool and say, well, what is this tool used for? It’s often used for one-on-ones and for gold. Managers love it. It sort of fell on the desk of HR. It fell into the budget of HR. And HR looked at it and said, this is great.

this actually might be a sign that our organization is maturing. Maybe we need some more of these HR.

big HR tools, right? Maybe we need a platform for performance management, which it can do all of these goals and can do all these one-on-ones, but can also do surveys and can also do performance reviews and can also do all these other things. And the managers are like, no, don’t get in our way. Don’t ruin our thing. And often they would use us almost as an excuse to buy their tool, to buy the big, boring HR tool, consolidate the money that the company is spending on us and, you know, double it, triple it,

Matt (12:34.18)

Hmm.

Brennan (13:02.192)

something else and the management revolts and stuff like that. We would try to fight back as best we could, but ultimately when we started talking to the folks in HR, they were like, well, I need performance reviews or something like that. We didn’t want to build it, but we started looking at building it.

and sort of taking that fight on. You know, what would it look like if we did round out our platform to incorporate some of the more traditional aspects of performance management? Hate that word, by the way, performance management. That’s like a micromanage-y word, like HR is gonna perform. I think that performance enablement, I think that the goal of HR getting involved in…

Matt (13:27.191)

Mm-hmm.

Matt (13:40.64)

What do you prefer? Is there another word you prefer?

Matt (13:48.707)

Hmm

Brennan (13:52.546)

performance management is to help people perform. They’re not really there to micromanage performance. They’re not getting fired if marketing misses their KPIs or sales misses their KPIs. So why are they in charge of performance management? Doesn’t even make sense. But enabling performance at the company, that makes sense for HR to centralize. So we looked at.

Matt (13:55.192)

Yeah.

Matt (14:08.163)

Yeah.

Brennan (14:18.454)

you know, what were the other people doing? Maybe there’s integration plays that we can do, et cetera. And one of the first things that popped into our mind was just the quote that HR kept bringing to us, which is, well, if people are doing their one-on-ones properly, if they’re doing their one-on-ones right, then come performance review season, there should be no surprises. No employee should be surprised. So we sort of thought, well, not, you know,

Matt (14:42.327)

But how many managers do that though, is the thing, right? Like I’ve been through that experience. It’s like, I have great intentions come January. I’m gonna document everything that happens. I’m gonna have this great thing at the end of the year. And I’m okay at it sometimes, but I’m not great at it, right?

Brennan (15:01.13)

Well, and that’s like a huge piece of what our core product tries to solve, right? Like, how do we make a one-on-one tool so good that you prefer to use it over something else and can we build in some of these workflows where you can follow through on those great intentions? Um, I think most managers with those great intentions try to, uh, implement them with like a Moleskine notebook, right? They get like a new Moleskine notebook and they’re like this year it’s going to be better and that Moleskine notebook has like four pages and then it’s, it’s tossed to the side. And HR said,

Matt (15:11.461)

Mm-hmm.

Matt (15:22.66)

Hmm.

Matt (15:26.877)

I like that analogy. Yeah. Yep.

Brennan (15:31.164)

well, we’re gonna make it better by giving you like a Commodore 64 and you’re like I’m not gonna use a Commodore 64 for my like You know

notes, that’s insane. I’ve got modern tech over here. So we wanted to build, what would the Apple version of this look like? And you’re exactly right. If we did the daily habits right, things would be much better. We’ve spent so much time on the daily habits that we legitimately help managers to the point where they spread the word internally. So when we went to HR, it was like, well, if they’re doing everything right,

Matt (15:39.808)

Yeah.

Matt (15:47.268)

Mm-hmm.

Brennan (16:07.094)

then there should be no surprises in performance reviews. Can we actually make it so that it’s not just that there’s no surprises, that it’s effortless? What would that look like? And we started exploring around with AI just sort of making like proof of concept demos of, can we take the notes from your one-on-ones? Can we take the goal updates on your OKRs?

Can we take some of the stats our platform can generate and integrate that with your HRIS system? And maybe you calibrate the AI with a couple of quick questions. Maybe the AI can actually write the review for you. Could that actually work?

Tech demo sort of proved that it could. And to the point where, you know, I’m sitting there looking at it being like, I don’t know if I wanna build this. I don’t know if I wanna enter this battle and fight some of these big name players.

Matt (16:45.518)

Mm-hmm.

Brennan (16:58.838)

but someone is gonna do it, and those people are not gonna do it with the right intentions. They’re gonna do it as a marketing play, as a bolt-on thing, they have performance reviews where no one uses the one-on-one functionality, and they are just gonna have an AI blindly, dumbly generate some, you know, superfluous text around nothing. And people are gonna, you know, feel wowed temporarily until the gimmick sort of wears off.

Matt (17:03.998)

Mm-hmm.

Matt (17:16.94)

Yeah.

Brennan (17:26.798)

And in order to accomplish, I think, using AI the right way and implementing this sort of AI and HR the correct way, you need the daily use. You need the use from the manager every single day, documented, properly categorized, in order to build on the everyday, to write that end of quarter, end of biannual or annual review.

And we had just so happened to have spent, you know, an extreme amount of energy over years working on those daily habits that we sort of felt uniquely able to build this the right way in a way that it seemed like no one else was even able to attempt. So we sort of threw our hands in the air and said, like, you know, we got to get this out first so that people know, you know, the right way to do this. And then that’s what we launched so far. It’s been amazing.

Matt (18:20.743)

Now, take me back to when that time happened, because if I recall, y’all were trying to do some of this pre having, you know, open AI and others kind of opening their APIs. And then they have that and it kind of just democratize things in a lot of ways where you get access to these large language models that you could then apply to your data, correct? And then it becomes differentiating because it’s unique to you, even though you’re leveraging something that’s, you know,

Brennan (18:33.423)

Yeah.

Matt (18:51.395)

I guess you get into a whole debate of open AI, not technically open source, but it’s all another discussion.

Brennan (18:57.206)

Yeah, that’s right. No, you’re exactly right. We’ve been using machine learning AI for quite a while on the how do we make the meetings better, right? So from categorizing what you’re talking about in a one-on-one, using those with AI into an engagement framework. So if you’re not talking about certain things in an engagement framework, the system’s aware of that. It’s able to use that information to suggest.

Matt (19:05.836)

Yeah.

Brennan (19:23.95)

Content to cover your blind spots. So if you haven’t checked on someone’s motivation in a while, we’ll sort of recommend, here’s a conversation starter that checks on motivation, because you haven’t checked on motivation with this person in a while. Things that, like, busy managers have all the right intentions, but they’re just busy, right? They’re not gonna be able to keep track of, like, when was the last time I checked on this person’s motivation. It’s more like, you know, if things are silent I’m gonna assume things are good, until I get sort of punched in the gut a couple weeks down the line.

Matt (19:26.761)

Mm-hmm.

Matt (19:34.723)

Yeah.

Matt (19:51.997)

Mm-hmm.

Brennan (19:53.47)

So we’ve been using some of those things, same with our next steps. You type a next step out, we would automatically figure out the date with machine learning, we automatically figure out who to assign it with machine learning, all that stuff. When it came time to sort of think about using that to a greater extent,

Uh, in, in writing the written formal, um, feedback for the managers. Um, obviously there’s way more data we wanted to pull. It wasn’t just, you know, what have you typed recently? It was like six months of meeting notes. It was six months of goal updates. Um, on top of, you know, data from a reporting on top of a whole bunch of other stuff. So it could generate a lot more, um, high quality feedback, but there’s also little things about like, you know, coaching and training this model.

And when we first took these sort of tech demos out to folks in HR, the reaction was like, wow, I feel like I saw the future, I just don’t believe it will work. Like, I just don’t believe the tech’s there yet. I’m like, what do you mean? I just showed this to you. They’re like, yeah, no, I see it. I’m looking at the future, but I just don’t think the world is there yet. Like, and I think they were more reacting like culturally, like this wouldn’t be

Matt (20:50.456)

Don’t believe it, yeah.

Matt (21:00.792)

Yeah.

Brennan (21:05.962)

You know, they feel like they’ve seen it, but like they’re not sure if they’ve just, you know, if they’re being tricked or what’s going on. And the reaction was like overwhelmingly positive yet very reserved. And then when ChatGPT came out, it was like November, December, and, you know, obviously took off, exploded, everyone and their uncle was using ChatGPT for a bit

Matt (21:12.776)

Mm-hmm.

Brennan (21:34.154)

come January when everyone did their annual reviews, most people in HR found out that like half of their managers had, you know, ChatGPT writing reviews for them. And so there were a few times where some of the HR folks I, you know, talked to in maybe November or October the prior year were like, you’re completely right. I got it completely wrong. The world is ready for this. We’re already doing it. The issue is like, obviously ChatGPT is, you know, sending private data to ChatGPT, it’s, you know, obviously biased in its own way.

Matt (21:43.147)

Yeah.

Brennan (22:04.088)

have information about it, doesn’t have all this knowledge, sort of came back to say, all right, we should check this out and earn it. So a lot of the stuff I think around AI is sort of like a cultural reservation around are we ready for it. And I think what’s interesting for tech companies to sort of catch up on is like,

Matt (22:17.593)

Mm-hmm.

Brennan (22:25.494)

we sort of have to be ready for it, right? Like the wave hits tech first. And if we’re not ready for it, then we’re in for a world of hurt. So I think playing with some of these things internally feels a lot more palatable than playing with some of these AI tools with like your prospects or your customers, right? That’s a little bit more scary, so.

Matt (22:30.295)

Yeah.

Matt (22:42.678)

Mm-hmm.

Matt (22:46.319)

Yeah, that’s true. I mean, we’re doing this right now at Hatchworks, right? The generative AI, one of the big areas it will impact is software development. So we’re leaning into it, almost trying to disrupt ourselves before competition or somebody else does. So we’re taking a similar approach where, OK, we have this new tool and functionality. How can we leverage it and empower our teams with it, ultimately our clients, at the end of the day?

Brennan (23:13.078)

Yeah, I heard the stat the other day. I think it was the CTO of GitHub was saying of the GitHub copilot users, which is sort of auto-complete within your development editor, about half of all code committed into GitHub is written by an AI. So of the users who use copilot or OpenAI’s code AI.

Matt (23:24.303)

Mm-hmm.

Matt (23:33.771)

Wow, I have not heard that yet.

Brennan (23:41.33)

about half of the code checked in is written by AI. So, I don’t know, if you chart that curve a few more years into the future, some of this stuff is like a year old. Will we have developers in the way we’ve always known them as or we’ve known them recently as, or will developers be, I think they’ll still be around, but will they be doing just wildly different things, right? And obviously the people who are…

Matt (23:53.646)

Mm-hmm.

Matt (24:04.876)

Yeah.

Brennan (24:09.398)

The developers who are doing wildly different things first will have a leg up quite a bit on those who aren’t, or the companies who have armies of developers like that. But for us, it’s even more nuanced in that we’re building an AI tool now, in that we want to use the AI tools to understand what are the interfaces that work for AI right now.

Matt (24:27.997)

Mm-hmm.

Matt (24:35.32)

Yeah.

Brennan (24:35.53)

So a big part of like us building it right is like, we actually have to artificially inflate how much AI tools we use so that we get a sense of like, oh, this pattern, this UI pattern really, really works. This UI pattern.

Matt (24:47.715)

Hmm

Brennan (24:49.334)

um, doesn’t right where we had years of understanding the UI patterns of search. We’ve had years of understanding the UI patterns of like top bar sidebar navigation, how do you interact with an AI? No one knows, right? Like, um, we’re in early, early days of just understanding how you interact with it and obviously the first breakout interface has been chat.

Matt (25:02.626)

Yeah.

Brennan (25:11.606)

like surprise, but there’s a lot more. And so, you know, just rolling these out, you know, even some basic things and getting not only customer feedback, which has been really helpful, but us using tools like GitHub Copilot to understand the auto-complete UI using AI is like a really powerful interface, right? Like it can sort of predict a paragraph of text at a time, which is an incredible time.

Matt (25:39.74)

Mm-hmm.

Brennan (25:41.74)

I mean, half of code checked in is sort of accepted AI code. So if it can autocomplete code in your code base, like imagine what it could do on some of the more monotonous tasks across your workforce.

Matt (25:54.619)

Yeah. Yeah, the QA aspect of it becomes ultra important. But then again, you can leverage AI for that as well. And I think the UI element you mentioned was interesting. One of the best explanations I’ve heard is from the CEO of HubSpot. He talked about how we’ve lived in this kind of imperative approach of like point and click, and that’s how we interact with technology. But we’re potentially moving into this more of a declarative approach, which can really change how we interact with technology at a fundamental

level, so it’s really interesting there. I want to get your take here to kind of round out the episode. Your product’s in HR. It’s innately kind of this intimate human thing, right? You’re talking about people’s careers, their goals, what do they want to do? It’s this human thing. Does AI degrade that experience in any way? What’s your view on how…

Brennan (26:46.946)

Yeah.

Matt (26:52.711)

AI impacts that either for the positive or the negative.

Brennan (26:57.598)

Yeah, such a good question. When we, you know, pre AI, when we were first starting out, people used to ask like, you know, using an app for one-on-ones, that seems silly. Management is sort of like looking at people, you know, face to face, eye to eye. And obviously with remote, that becomes a little bit more challenging.

Matt (27:16.024)

Mm-hmm.

Brennan (27:18.95)

Um, and I used to always say like, this was, this sort of feels like the same thing that like the older generation would say to the younger generation about almost every new technological advance, right? Like people used to like to read newspapers, you know, feel books and read newspapers and, um, uh, you know, have, have journalistic integrity and these bloggers, what do they know? And, um, uh, or dating, right? Like, shouldn’t you, you meet people in real life versus like an app. And obviously we know the apps.

Matt (27:35.17)

Mm-hmm.

Brennan (27:49.524)

taking care of the majority of marriages, I think, in North America for a few years now. Why not the workplace? Why not some of these management practices as well? But AI is a whole new angle to that.

Matt (27:53.433)

Hmm.

Brennan (28:04.394)

because if the AI is doing it, then what are we doing, right? Especially when it comes to the things that we think of as innately human. If the AI is writing performance feedback, then what the heck am I doing as CEO, right? And I think that’s where people can get weirded out or scared, et cetera. But I think that the first thing is that the goal, at least the way we’re trying to build it, is to allow

Matt (28:17.043)

Mm-hmm.

Brennan (28:34.082)

the humans to be more human, to have more EQ, to have more time to spend with each other face to face. And so you look at, well, what can AI do? And I think the current state of AI, and this is obviously gonna be out of date, even if you publish it tomorrow, but the current state of AI is if you can sort of train an intern to do it, you know, in their first couple of weeks on the job.

Matt (28:37.017)

Mm-hmm.

Matt (29:02.518)

Mm-hmm.

Brennan (29:03.082)

you can get an AI to do it right now. So the first task is sort of breaking down these little things into small enough tasks that an intern could do it in the first couple of weeks on the job. And most tasks we do at work can sort of be broken down in that way into these repeatable steps. But the difference is when you have AI, you can kind of scale that to the almost like infinite dimension. So…

Matt (29:06.467)

Yeah.

Brennan (29:30.802)

most managers could look through six months of one-on-one notes for all seven people they have to do a performance review on. They could do that, but they don’t, zero percent will, because they don’t have the willpower to do it. They don’t have the discipline to do it. They don’t have all of these little things that are needed and they don’t have time. Truthfully, they don’t have time. They’re dealing with a fire and that fire is happening in their functional department and HR is like, by the way, you have to get your reviews done.

Matt (29:38.871)

0% will.

Matt (29:44.121)

Yeah.

Matt (29:58.168)

Yeah.

Brennan (30:00.676)

So they’re pretty busy. They could look through six months of goal data. They probably won’t. So biases creep in and some of those biases are okay to have. Some of those biases are less so, and people often talk about biases in AI. But the AI can actually reduce other bias, like can severely reduce recency bias because it can read all of this data. It can severely reduce other sets of biases because you can withhold information

about is this person a male or a female? Is this person named John or some other name, right? That otherwise would lead to bias. You can sort of take some of those things out and the AI doesn’t know about it, so it’s just going to treat everyone the same. And you can inject bias of, you know, making this be harsher, universally harsher or universally softer, and put everyone on a unique playground. But what’s…

Matt (30:36.333)

Hmm.

Brennan (30:58.794)

Further to that is in many of these companies, you’re doing a 360 review. So you have a person you’re reviewing, the manager’s got to do that review, but they’re doing a self-evaluation, peer evaluation, et cetera. So again, the manager for all of those seven people they’re doing these reviews on, they could look at all three peer reviews that they, you know, received on this person and the self-review. And they could analyze the different scores and notes of feedback between these various different peers. And they could, you know, group those,

into themes and psychoanalyze that and understand if there’s a confidence issue happening with this direct report. They’re just not going to do that. They just don’t have time. But all of the things I sort of mentioned there, the AI does in under a minute. So it will analyze what everyone else submitted about a person. It will try to understand if there’s a theme in any of these peer responses that are different from the themes in the self-eval, that are different from the themes

in your own eval. It will highlight those differences, what are some of the common causes of it, help you frame some of your responses to better tee up a productive conversation instead of like a frustrating conversation, give you prompting conversation starters for what to talk about in your next one-on-one that could help resolve some of these issues. All of these things the world’s best manager would do.

Matt (32:19.311)

Hmm.

Brennan (32:27.014)

Um, if they had infinite time, um, and they don’t, what’s neat about AI is you can give those, those people, all of the people sort of infinite time in certain directions and all of the directions that AI wants to go are the ones humans don’t want to go. And so in a way, bringing the AI into some of these tasks allows you to do the things that are innately more human. Do that way more. Like because you have all this knowledge.

Matt (32:54.326)

Mm-hmm.

Brennan (32:56.688)

can go and be more empathetic with this person, right? Because you now have the notes needed and some of the questions needed to be more empathetic. So yeah, I think a lot of people have fears about maybe AI taking over jobs or AI, you know, removing some of the humanity in certain things. And I think often the stuff that AI might end up doing is the things we knew we should always do, but we got too lazy, right? And now that we

have this, you know, almost infinite willpower source to pull from with AI, what are we now able to do knowing that we’re doing the best job ever in some of those places we were previously lazy and often I think that’s being more human, being a better person in many ways.

Matt (33:45.555)

Yeah. And AI has that potential to, if we do it the right way, to actually make you more human in and of yourself. And I love the EQ tie. It’s like AI done the right way enhances EQ for the individuals using it. It’s kind of like this co-pilot, good name for GitHub, right? But it’s like a co-pilot mindset of how AI is used.

Brennan (34:03.73)

Yeah. I’ll give you like such a good example of that. Cause this is something that’s universally come back from our customers is sort of that, right? Like, um, we take your notes, we take your goal updates, et cetera. But we also ask, before we sort of do the written feedback, we, we ask for some calibration questions. Um, and those calibration questions might be the same questions from your self-eval, from your peers’ evaluation, from your manager’s evaluation; in there, you might get different scores. You might get different jot notes from your peers and your manager or whatever.

and AI will just go in there. We show these steps.

um, to our users, the AI can be a black box, right? So what we’ve tried to do is like what we were taught in math class: instead of just spitting out the answer, we sort of show the long division step by step, show your work. So, um, you know, one of the areas we show our work is, is in analyzing the peer responses, or sorry, analyzing sort of those, those calibration assessment questions. We give them to the manager. So the manager can do all of the analysis themselves if they want to.

Matt (34:43.666)

Mm-hmm.

Matt (34:49.931)

Show your work, yeah.

Brennan (35:08.464)

Or the AI will do it for them and just sort of summarize the insights out of it. And almost universally, everyone who's seen that has been like, that's the most valuable thing I've…

Matt (35:17.123)

Mm-hmm. Yeah.

Brennan (35:17.166)

had in my management career, right? Someone, something to read this, analyze it, talk about the surprising, the interesting, the confusing. You know, and like some of the stuff that it gave me and others is like, you know, the person rated themselves low here, their peers rated them high, you rated them, you know, mid to high. And the commentary was sort of overwhelmingly positive. The fact that they’re rating themselves low either suggests that they might have

a misunderstanding of expectations, or something else going on. Like maybe you want to bring up or ask about these types of things in this type of way. And every manager is like, holy shit, right? Like, that's incredible. And the truth is, obviously I'm biased in saying that our tool is incredible and can present as incredible, but legitimately this is what other people have been saying. So

Matt (35:49.767)

Mm-hmm.

Matt (36:01.449)

Yeah.

Brennan (36:16.394)

You know, if you do these things right, if you kind of show your work, you break the steps out, you kind of break things into these tiny steps that AI can do a great job of on, you can sort of build into some pretty incredible stuff. And that’s where we’ve been getting some of the latest stats. I think I shared with you earlier, right? 80% of people feel like our process is faster than their previous one, if not significantly faster. And 80% of the people receiving feedback say it’s better

Matt (36:45.348)

Hmm.

Brennan (36:46.388)

than prior, right? So 80% better, 80% faster. Who doesn't want this in their work life? And I think, you know, we're doing that for HR, we're doing that for performance reviews, but you can tackle any functional area, any pain point and say, all right, how do we make this 80% faster and 80% higher quality as well? And what do you do now as a functional person in that role with that much free time back? And I think the answer is: do more human things.
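To make the "break it into tiny steps" idea concrete, here is a minimal, hypothetical sketch of one such step: comparing self, peer, and manager calibration scores and turning large gaps into conversation prompts for the manager. The data shape, threshold, and wording are assumptions for illustration only, not Hypercontext's actual implementation.

```typescript
// Illustrative sketch: flag calibration gaps between self, peer, and manager scores
// and turn them into conversation starters. All names and thresholds are hypothetical.

type Rating = { source: "self" | "peer" | "manager"; question: string; score: number }; // scores on a 1 to 5 scale

function average(scores: number[]): number {
  return scores.reduce((sum, s) => sum + s, 0) / scores.length;
}

// For each question, compare the self score against the average of everyone else.
function findCalibrationGaps(ratings: Rating[], threshold = 1.0): string[] {
  const questions = [...new Set(ratings.map((r) => r.question))];
  const prompts: string[] = [];

  for (const q of questions) {
    const forQuestion = ratings.filter((r) => r.question === q);
    const self = forQuestion.filter((r) => r.source === "self").map((r) => r.score);
    const others = forQuestion.filter((r) => r.source !== "self").map((r) => r.score);
    if (self.length === 0 || others.length === 0) continue;

    const gap = average(others) - average(self);
    if (gap >= threshold) {
      prompts.push(
        `On "${q}", peers and manager rated higher than the self-eval. ` +
          `Ask what "great" looks like to them: expectations may be misaligned.`
      );
    } else if (gap <= -threshold) {
      prompts.push(
        `On "${q}", the self-eval is higher than everyone else's view. ` +
          `Bring a concrete example to the conversation.`
      );
    }
  }
  return prompts;
}

// Example: self rates "communication" low while peers and manager rate it high.
console.log(
  findCalibrationGaps([
    { source: "self", question: "communication", score: 2 },
    { source: "peer", question: "communication", score: 4 },
    { source: "peer", question: "communication", score: 5 },
    { source: "manager", question: "communication", score: 4 },
  ])
);
```

In a product like the one described here, a step like this would presumably feed its output to a language model or straight to the manager rather than replace their judgment; the point is that each small, well-defined step is easy to check.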

Matt (36:47.82)

Yeah.

Matt (37:16.511)

Yeah. And quick plug for a Built Right episode in the past, episode 8. The intern comment you mentioned triggered me. We had Jason Schlachter, founder of AI Empowerment Group, on, talking about how to identify and vet winning use cases with generative AI. And the struggle, a lot of the time, is framing and how folks think about how they can use AI. And one thought exercise he likes is: pretend you literally have an army of interns that you can put to work. Now what could you do?

Brennan (37:38.527)

Yeah.

Matt (37:46.239)

It's that reframing of how you can empower things with AI, just to start thinking through use cases. So I wanted to do that quick plug. But Brennan, this has been an awesome episode, great chat. Where can folks find you?

Brennan (37:56.258)

Absolutely.

Brennan (38:02.178)

So you can find us hypercontext.com or we’re hypercontext app on Twitter, you know the LinkedIn, similar words. And then myself personally, I’m on LinkedIn, find me Brennan McCackren or Twitter, I underscore am underscore Brennan. Should be pretty universal across there.

Matt (38:20.897)

Awesome.

Well, Brennan, thanks for joining us on Built Right today.

Brennan (38:27.21)

All right, Matt, thanks for having me. Thanks everyone.

The post The AI-EQ Connection: How Emotionally Intelligent AI is Reshaping Management appeared first on HatchWorks.

How Generative AI Will Impact the Developer Shortage with Trent Cotton https://hatchworks.com/built-right/generative-ai-impact-developer-shortage/ Tue, 22 Aug 2023 10:00:36 +0000 https://hatchworks.com/?p=29722 Could generative AI help recruiters fill the gaps in the talent market?   The developer community is facing a shortage of skilled workers, and the needs of the tech industry are growing. To stay ahead of the curve and remain competitive, companies must hire the best of the best. But with a shortage of talent, recruiters […]

The post How Generative AI Will Impact the Developer Shortage with Trent Cotton appeared first on HatchWorks.


Could generative AI help recruiters fill the gaps in the talent market?  

The developer community is facing a shortage of skilled workers, and the needs of the tech industry are growing. To stay ahead of the curve and remain competitive, companies must hire the best of the best. But with a shortage of talent, recruiters face a tough challenge.   

To share some perspectives on recruitment difficulties, Trent Cotton, our VP of Talent & Culture, joins this episode of the Built Right podcast. Trent explains why we’re facing such a talent shortage, what that means for businesses, and why broken HR processes are holding many companies back.  

Trent explores the growing usage of generative AI in the HR space and how that could help to patch up some of the gaps in the talent market.  

Tune in to the discussion below or keep reading for the top takeaways.  

What the talent shortage means for businesses

A report from Korn Ferry found that by 2030, there could be more than 85 million jobs left unfilled because there aren’t enough skilled people. That talent shortage could result in approximately $8.5 trillion in unrealized annual revenues.  

The talent shortage in tech, especially in the developer space, is more than simply frustrating. It directly results in potential lost revenue. If you don’t have top talent to bring projects to life, this can dampen business growth and make you less competitive.  

Why we’re seeing a shortage of talent 

While the skills gap has often been an issue in industries such as tech, it was intensified during COVID. Many from the baby boomer generation were forced to retire and haven’t re-entered the workforce, and younger generations haven’t been trained to fill those gaps.  

Another reason companies are struggling to fill roles is because the average person changes jobs every three to four years. But tech professionals are doing this 20% more than in other fields.  

To add to this, Trent believes that most recruiting processes are “utterly broken.” It’s hard enough to get the talent, but you’ve also got to worry about regular retraining, and then because the recruitment processes are so long-winded, it takes a long time to fill a role.  

That’s why we’re seeing more HR and recruitment professionals turn to AI to help improve some of their recruitment processes.

The four problems with recruiting 

1. Everything’s a priority, which means nothing is

Recruiting groups are limited by time, focus, and energy. When priorities are constantly shifting, it's hard to make progress in the areas that are most important. 

2. No opportunity for recruiters to identify obstacles 

Another dysfunction in recruitment teams is that there’s little space for recruiters to stop and think about what’s working and what’s not. This is essential so that you can find ways to scale what’s working well. 

3. Lack of alignment 

There’s often a lack of alignment between recruiters, hiring managers, leaders and candidates, which can often create a lot of conflict in the process. 

4. The feedback loop is broken 

It can sometimes take weeks for candidates to receive feedback – which just leaves them hanging.  

The four principles of sprint recruiting 

To combat these issues, Trent uses the sprint/agile recruiting method, which follows four principles: 

1. Address issues in two weeks and prioritize the most important roles to fill. 

2. The business defines the priority of which jobs need to be filled first. 

3. Work-in-progress (WIP) limits reduce the number of candidates at each stage of the process, enhancing the candidate experience. 

4. A 48-hour feedback mandate for candidates and recruiters. 

Following these principles ensures that everything moves fast, everyone's informed, and the recruitment workload stays manageable.
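As a rough illustration of how mechanical these rules are, the sketch below models them in a few lines of TypeScript. It is a hypothetical example, not a real recruiting system: the stage names, point budget, and WIP limits are assumptions.

```typescript
// Hypothetical model of the four sprint-recruiting rules: a two-week sprint,
// business-assigned priority points, WIP limits per stage, and a 48-hour feedback SLA.

type Stage = "screen" | "interview" | "offer";

interface Role {
  title: string;
  priorityPoints: number; // assigned by the business from a shared budget (e.g., 200 points)
}

interface Candidate {
  name: string;
  role: string;
  stage: Stage;
  submittedAt: Date; // when the candidate was presented for feedback
}

const WIP_LIMITS: Record<Stage, number> = { screen: 5, interview: 3, offer: 2 };
const FEEDBACK_SLA_HOURS = 48;

// Rules 1 and 2: work the highest-priority roles first within the sprint.
function sprintBacklog(roles: Role[]): Role[] {
  return [...roles].sort((a, b) => b.priorityPoints - a.priorityPoints);
}

// Rule 3: only advance a candidate if the next stage still has room under its WIP limit.
function canAdvance(candidates: Candidate[], candidate: Candidate, next: Stage): boolean {
  const inNextStage = candidates.filter((c) => c.role === candidate.role && c.stage === next);
  return inNextStage.length < WIP_LIMITS[next];
}

// Rule 4: flag anyone whose feedback has been outstanding for more than 48 hours.
function overdueFeedback(candidates: Candidate[], now: Date = new Date()): Candidate[] {
  const slaMs = FEEDBACK_SLA_HOURS * 60 * 60 * 1000;
  return candidates.filter((c) => now.getTime() - c.submittedAt.getTime() > slaMs);
}

// Usage: the backlog comes out in priority order, so the 120-point role is worked before the 80-point one.
const backlog = sprintBacklog([
  { title: "Full-stack developer", priorityPoints: 80 },
  { title: "Data engineer", priorityPoints: 120 },
]);
console.log(backlog[0].title); // "Data engineer"
```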

Where generative AI could help the developer shortage 

Trent believes that the biggest impact in the job market will be on frontline workers. Anything that doesn’t necessarily need a human to do it is likely to be automated first. However, this is likely to create a surplus of unskilled workers. 

AI is also going to help streamline processes and make things more efficient – leaving companies to focus more on client engagement and retention. The same also goes for developer roles.  

If developers can leverage generative AI to replace some of the more tedious or manual tasks, they have more time to spend on upskilling, problem-solving, and more creative tasks. Any chance for developers to level up and improve their skills is going to be a huge plus for the tech industry when there’s such a shortage of skilled developers. 

What companies and industry leaders can do to protect the talent market  

1. Offer training  

With AI becoming a big focus for many tech companies, it makes sense to train people in AI. By offering training for existing and future talent, companies can remain competitive while also helping people enhance their skills.  

2. Nurture the next generation of talent 

Another way to get ahead of the curve is to start training the next generation of talent. That means starting as early as high school to help younger people get inspired and interested in the opportunities in the tech industry.  

This should encourage more people to choose it as a career path, which goes a long way in easing the talent shortage. 

3. Be open to flexible working arrangements

More and more people are working in the gig economy as freelancers. The pandemic made some people realize that they don’t want traditional employment and prefer flexibility.  

However, if your job opportunities are only focused on those who want the traditional 9-5, this could exclude talent who have a lot to offer.  

It may be time for companies to be more flexible when it comes to working arrangements to tap into this wider talent pool. Having a mix of regular employees and being open to hiring freelancers could help businesses remain competitive in the talent market.  

Hear more about the potential impact of AI on the developer shortage and tech job market in the full discussion with Trent.  

Discover how HatchWorks’ Generative-Driven Development™ can accelerate your software projects – explore our innovative approach today.

[00:00:00] Matt Paige: All right. I'm excited to be joined today by Trent Cotton, HatchWorks' VP of Talent and Culture, and he's the reason we're able to attract and retain the best talent in the industry across the Americas. He's got deep experience in talent management, organizational development, HR tech. Data analytics, I gotta take a breath here.

[00:00:25] Matt Paige: HR process. And he's even developed his own unique approach to recruiting called Sprint Recruiting, which frankly has just completely transformed how we recruit at HatchWorks. And oh, by the way, he's a published author, as you can see from the two book titles behind him. Sprint Recruiting, the first one, and the FutHRist coming out later this fall.

[00:00:44] Matt Paige: But welcome to the show, Trent. 

[00:00:47] Trent Cotton: Thank you. Thank you. It’s uh, a little humbling. That’s quite the introduction. I appreciate it. 

[00:00:52] Matt Paige: Yeah. Lots, lots of good stuff. And I’m sure we’ll hit on some of the, the sprint recruiting stuff in the discussion later today. But the main topic, it’s, it’s a, a meaty one. It’s a hot topic right now in the industry and it’s the tech talent shortage and how AI is gonna impact that.

[00:01:09] Matt Paige: And how do I know this is a hot topic? Because our blogs right now on our website, those are getting the most traffic. It's the talent shortage and the impact of generative AI. Those are the most trafficked right now. And PS, you can check those out on the HatchWorks website. And I'd say make sure you stick around till the end.

[00:01:27] Matt Paige: We’re gonna, we’re gonna go into whether AI is actually gonna help shrink this talent shortage or make it even larger, but Trent, uh, to set the stage. Mm-hmm. Help us just kind of set the stage of what, what is the current talent shortage gap? What does the future projections look like? And kinda what’s, what’s attributing to that help, help break that down for us to kind of set the stage for today.

[00:01:49] Trent Cotton: Korn Ferry, which is a fairly large search firm, they have a fantastic analytics arm. It's one of the ones that I go to just to try and get a good feel of what's going on in the talent market. They estimate that by 2030, so not too far down the road, 85 million jobs will go unfilled because of the shortage. So that's a huge number. Now, that's worldwide. You're looking at about $8.5 billion, or excuse me, trillion with a T, in revenue loss, and 162 billion of that is here in the United States. So I mean, we're going against something that is borderline unprecedented…

[00:02:36] Matt Paige: For context, I was gonna say, just for context, that gap in revenue, right, is because there's initiatives companies wanna do and things they want to get done, but they just don't have the talent to do 'em.

[00:02:48] Matt Paige: Right. Wow, that’s insane. 

[00:02:49] Trent Cotton: Yeah, it is insane. And with the competitive landscape now, everything is driven by tech. So if you're not staying ahead of the curve with the latest tech, and making sure that something as simple as your website, your apps, your delivery systems, you know, think about Amazon, all of the different technology that's involved, that's what made them the behemoth that they are. Companies can't keep the roles filled quick enough to be able to stay ahead of the curve, which is a direct translation over into revenue.

[00:03:15] Matt Paige: Hmm. That’s, that’s interesting. It’s really scary. Yeah. Yeah. And so we got this. 

[00:03:22] Trent Cotton: Go ahead. No, I was gonna say, and, and some of the driving things. So, I mean, that’s a, that’s a scary stat and so let’s kind of peel it back and, you know, what’s driving this and some of this has been hyper intensified since covid.

[00:03:37] Trent Cotton: So one of the first ones is the baby boomers. That was a huge generation of the workforce. A lot of them were forced to retire early in COVID and they never reentered the workforce, or they reentered at kind of a lower skill level. So that widened the gap that was already there. And then you have these up-and-coming generations that are not necessarily at that same skill level, and that's furthering the gap.

[00:04:00] Trent Cotton: But then you throw in technology and the constant evolution of technology, and you can't keep your workers skilled enough, fast enough, to be able to evolve as quickly as AI is changing the game. I mean, just think about it: this time last year, were we talking about ChatGPT? No. I mean, I know I wasn't. We weren't looking at the impact that generative AI is going to have.

[00:04:21] Trent Cotton: There was some talk about it in theory. Now the rubber's hit the road and companies are looking at, you know, just in the last six or seven months, all the evolution that's gone on. So that's just a micro vision of what's going on in the economy, and the direct impact on a gap that's already huge in the talent market is just going to exacerbate the issue.

[00:04:45] Trent Cotton: And then, two, tech professionals. You know, the average person changes jobs about every three to four years, according to LinkedIn, and tech professionals change 20% more. So, you know, the average is anywhere between a year and a half to two years. So not only do you have the gap, not only do you have all of this changing technology, then you gotta figure out, how do I keep these people once I get them on board?

[00:05:09] Trent Cotton: So for a lot of talent people, or talent professionals, we're fighting a battle on six or seven different fronts. So for anyone that is listening that is a CEO: go and love on your talent person, they're exhausted. You know, we get past the pandemic and, I love it, all this other stuff starts happening.

[00:05:27] Trent Cotton: So, um, I usually know talent people ’cause all of us have circles under our eyes for the last three years. 

[00:05:34] Matt Paige: No. And you, and you’re deep in the industry too. That’s funny. Yeah. Go, go give your love to some talent people. They need it right now. Yeah, but it’s interesting though. It’s, it’s not like one thing, it’s like five, six different things all attributing.

[00:05:47] Matt Paige: To this talent shortage we 

[00:05:48] Trent Cotton: have right now, right? It is, it is quite the perfect storm. Um mm-hmm. Just because you can’t, you can’t deal with just one issue. So let’s go back. Yeah. Four or five years. Um, you were able to, to diagnose one particular area and go in there and fix it. Do some triage and then move on about your business.

[00:06:08] Trent Cotton: There's no way to do triage whenever you've got all of this stuff that is so interconnected and interdependent and constantly changing. So just when you think you can diagnose it, it's very much like a virus. You kind of treat the virus, and just as soon as you think that you have it nipped, the thing mutates, and now you've got something different.

[00:06:27] Trent Cotton: That's the current state of the talent market. And then you add to that that most recruiting processes are utterly broken. You can't get the talent, you have to worry about retaining the talent, and then it takes too damn long for the talent to get on board because of the broken recruiting process.

[00:06:45] Trent Cotton: So there’s a lot of things that companies are trying to do. Um, to leverage AI to help fix some of that, uh, at least from a process and engagement standpoint. Uh, some of the analytics, you know, we use a lot of, um, HR analytics to really kind of get us some sentiment analysis of what’s going on with our, um, with our team members because the, I think the difference for us versus a lot of companies that I’ve consulted or that I’ve worked with is everyone talks about, yeah, retention’s a big thing.

[00:07:15] Trent Cotton: I have never worked for a company like HatchWorks. We're obsessed. Like we almost take it personally whenever people leave. We want to dig into why did they leave, you know, how do we make sure that no one else in the organization feels that way? And I think that speaks a lot to why we have over 98% retention in our organization.

[00:07:33] Matt Paige: Yeah. That 98% number is insane. And I do want to get to this topic around ai, but you, you hit on something that’s interesting, you know. Everybody kind of sees AI as this, you know, maybe this is the, the solution to solve all our problems. But you mentioned the process point of it and I think it’s worth noting just, you know, ’cause I’ve been amazed at how much it’s helped us, but the sprint recruiting and then we’re gonna go on a full tangent on the sprint recruiting and everything there, but just hit it at a high level.

[00:08:00] Matt Paige: ’cause it’s done so much for our organization. It’s worth noting that, you know, there, there are basic fundamentals with process that are important to have. Mm-hmm. In, in talent, recruiting anything in business, but this is especially true here. 

[00:08:14] Trent Cotton: Yeah. The, uh, so let, let’s tackle the four dysfunctions of the recruiting.

[00:08:18] Trent Cotton: I said that the recruiting process is broken. I've been in recruiting and HR for 20 years. I've done it through HatchWorks, and I've also done some consulting for our clients on their recruiting process. There are four constants. The first dysfunction is that everything's a priority, which means nothing is a priority.

[00:08:35] Trent Cotton: Recruiting groups are limited by time, focus, and energy, and whenever you’re constantly moving the needle or or moving their cheese, they’re not able to make the progress that they need. Mm-hmm. The second is that there is no rhythm or opportunity for recruiters or recruiting leaders to stop and go, Hey, what’s working and what’s not, and find ways that you can scale what’s working and, and identify the obstacles and work together with the partners to overcome them.

[00:09:00] Trent Cotton: And then clients and recruiters are misaligned. It's kinda like running a daycare sometimes as a talent leader, 'cause you have the hiring manager saying, this person hit me, and then you've got the recruiter saying, well, this person looked at me, and there's just this huge lack of alignment. And then the last one is the feedback loop was broken.

[00:09:19] Trent Cotton: Whenever I first started this, I went through agile training, came out of it and said, okay, there's got to be something that I can learn from Agile and apply to recruiting. And the first one was looking at the feedback. Yeah, the average amount of time that it would take for us to get feedback on candidates was two to three weeks.

[00:09:37] Trent Cotton: So there's your four dysfunctions. We balance that in sprint, or agile, recruiting with the four principles. The first one: we look at things in two weeks. So if you've got 400 jobs, the first part of that funnel is, okay, in the next two weeks, what's realistic and what are the most important roles that the team needs to focus on and get?

[00:09:55] Trent Cotton: That can be to mitigate risk. That could be to drive revenue. That could be to hit certain milestones within the technology sprint. The next is the business defines the priority, so we go to them and say, okay, outta those 400, you say these 20 are the most important. You have 200 points. I want you to assign a point value to those 20, and we’re gonna work them in order.

[00:10:15] Trent Cotton: The next is that we have WIP limits or work in progress limits, so we limit the number of candidates in each stage of the process because that enhances the candidate experience. It makes sure that we do not lose great candidates. It also stops this fomo that a lot of hiring managers have. If I wanna interview 25 people, Look, dude.

[00:10:32] Trent Cotton: Mm-hmm. There’s not 25 people out there. You know, we need to go and move on these top five. And the last one is that we have a 48 hour feedback mandate. Um, we present a candidate, we want 48 hours. We want feedback. Yes. No. And what this does is it provides almost like a drumbeat, it also provides us metrics.

[00:10:50] Trent Cotton: So I know, on average, on a 10-day sprint, day two, day seven, day eight, and day 10, that's usually whenever our offers go out. I don't schedule meetings with my teams. I block any kind of meeting or anything that's going to disrupt them from focusing on getting those offers out the door. We're also able to track it all, so we can almost precisely say, if you need a full stack developer, we can get it done in 32 days.

[00:11:18] Trent Cotton: Or if you just trust our judgment and you want us to hire them for you and place them on the project, we can get it done in probably about one sprint, or at least maybe 15 or 16 days. Yeah, there's not a lot of companies out there that can do that. And we move with speed because now we're focused so intently on what is important to the company, not just working on everything.

[00:11:37] Trent Cotton: We’ve developed these pipelines of candidates that are just sitting and waiting for us to go and pluck ’em and put ’em on our project. So we’ve really been able to ship, I mean, kudos to our talent team. Uh, this time last year we were not in this space. Now we’re on the offense. We’re we’re ready to go.

[00:11:52] Trent Cotton: Yeah. 

[00:11:52] Matt Paige: I mean, you hit it. It changed the way we work. And I love the comment that there is a rhythm to it. It's like the WIP, and you know, my wife will tell you rhythm's important, and I'm six foot eight, left-handed, with two left feet, and I don't have it. So it is critically important, and the team has it, you can just tell, and it gets them excited too.

[00:12:11] Matt Paige: You know, a little bit of a tangent, but it’s, It’s, it’s worth hitting on. Um, ’cause so many people, it’s our sauce. Yeah, yeah, yeah. Alright, so let’s get into this topic of how AI is gonna impact this talent shortage. And I think, you know, one thing to note, like AI’s been around for, for a super long time.

[00:12:31] Matt Paige: It's nothing new. And, like, I encourage folks to check out our episode with Jason Schlachter. He's got a lot of insight on the history of AI and everything there. But what's been interesting about this latest evolution, like you mentioned with ChatGPT, these large language models, the generative aspect of it, it's almost made it, you know, it democratized it, it made it accessible to all in a lot of ways where you don't have to be

[00:12:59] Matt Paige: You know, in the code doing things to leverage it. But let's get into kind of your perspective of how this is gonna impact the talent market. You know, does it shrink it, does it grow it? Or how does it enable people to perform better at their jobs? There's a lot of angles.

[00:13:18] Matt Paige: We could take this, but we’d love kind of your take of, you know, this, this AI boom that’s going on right now. 

[00:13:24] Trent Cotton: Yeah, absolutely. So I think the greatest impact it's gonna have is on some of the frontline workers. I think there's going to be a lot of intense looking by organizations to say, okay, what can AI do that we don't necessarily need a human to do?

[00:13:40] Trent Cotton: So that's kind of the first major impact there. That's not gonna create a skill gap, that's actually gonna create a surplus of unskilled workers, which, again, is just part of that whole big trifecta that we're dealing with. But then if you move a little bit upstream, there are gonna be jobs that are highly impacted, that are very manual in process, where AI, or even, you know, some of the machine learning, all of the different technology impacts, are gonna make companies look and ask:

[00:14:09] Trent Cotton: How can we do this in a more streamlined fashion, uh, more efficient with a focus on client engagement and client retention. I think that’s gonna be one of the very interesting things because you know, whenever you have these manual processes, you don’t have analytics on the background, uh, on the backend companies now, especially since Covid are so.

[00:14:29] Trent Cotton: Obsessed with what is the data telling us? I know in HR we are, um, what is the data telling us and how do we make sure that we stay ahead of the curve? That, that, that’s going to be one of the things that companies go, okay, we, we’ve got to invest in this. So there’s, there’s going to be opportunities for a lot of workers to be able to learn some of these processes.

[00:14:48] Trent Cotton: Maybe not from a technology standpoint. Mm-hmm. But how do they actually. Leverage AI as a partner versus it’s an us versus them. Yeah. And I think this is, this is the part that’s gonna be really exciting for the right part of the workforce that sees this as an opportunity to level up their skills and they go into it with an open mindset.

[00:15:08] Trent Cotton: I always use the example of, because I get asked in HR a lot, you know, is HR gonna be replaced by AI? And the answer is no, it's not. Well, some of it, yes. I look at AI almost like Iron Man. So Rob Stark, fantastic guy, wonderful business guy. Billionaire, sexy, charming, I mean, he's a badass all by himself.

[00:15:31] Trent Cotton: You put him along with his AI in Jarvis and you put the suit on, and now he's a superhero. And if you watch, he's taking in all this data that AI is able to process incredibly quickly, but he's still making the decision on whatever action he's going to take. So I think the more that we as talent professionals and leaders in the organizations can look at our workforce and go, how do we take our Rob Starks, you know, that are not Rob Stark, that's Game of Thrones.

[00:16:00] Trent Cotton: Oh my gosh. Tony Stark. There we go. Tony Stark.

[00:16:02] Matt Paige: Yeah. This is like, we’re mixed in, uh, genres here. Like, 

[00:16:06] Trent Cotton: yeah. So now everybody knows I’m one of those kind of nerds and I, I like Game of Thrones and man, um, but you take your, who, who are your Tony Starks and how can we make them better by pairing them up with something that’s going to, um, just enhance their delivery or enhance their, their job skills.

[00:16:22] Trent Cotton: But then you have the whole thing with tech. So it’s, it’s really interesting. Um, yeah. I was talking to a, a professional not too long ago, and, and they were frustrated because they were trying to get some, some roles filled for developers, and the managers were getting so ticked because they were doing like an assessment.

[00:16:38] Trent Cotton: And a couple of the people were using ChatGPT to kind of help with some of the basic code, and then they were focused on the more sexy stuff, and the manager was disqualifying them. And to me, that's the antithesis. That's what tech people should be using AI for. What's the mundane, non-value-add, but critical and necessary part of this coding or whatever it is? Let AI do that, so that way they can work on some of the things that are

[00:17:06] Trent Cotton: More impactful. 

[00:17:07] Matt Paige: Yeah, I mean, that's the whole point around our Built Right method at HatchWorks: how do you enable, and I love that co-pilot mindset, 'cause that's what it is, right? It's not gonna take over, it's gonna make the folks leveraging it better. I think one interesting point, though, that I've heard from some: you can't over-rely on it, to the extent that if you have a bad developer and you give them AI, that could actually make an even bigger mess.

[00:17:35] Matt Paige: Mm-hmm. Versus you have skilled developers leveraging it. I love that idea of it gets rid of the mundane, that’s like the first Yes. Go at this. Um, but it’s, it’s like the co-pilot example. You know, I think Tesla has this, where if you’re in co-pilot mode, it forces you to touch the steering wheel every so often.

[00:17:52] Matt Paige: So you don't just go completely, you know, mindless. And same thing with flying a plane; the same kind of analogy could be applied to technology.

[00:18:02] Trent Cotton: But I do think that this is going to force companies to, and, and we’ve been looking at this since Covid, how do you mm-hmm. Reskill, upskill and redeploy your workforce.

[00:18:14] Trent Cotton: Yeah. I think now, with some of the intensity that's coming, driven by artificial intelligence, that's really gonna come to the forefront. I know in our organization we're talking about it. We give all of our employees a $2,000-a-year training budget to use to get certifications or, you know, whatever they want to learn to enable them to be even more productive in an area of interest.

[00:18:39] Trent Cotton: And so, you know, we’re, we’re looking at what are some AI courses, what are some AI certifications that we can offer to our, our employees to make sure that they’re staying ahead of the game. Um, so not just to benefit them, but also to benefit our clients. You know, we want to be that trusted partner, not just someone that you come to and say, Hey, I need three or four software developers in the DevOps.

[00:19:02] Trent Cotton: You know, we wanna be able to come in and add a level of knowledge and acumen that is unparalleled to anybody else in the market. Yeah. 

[00:19:10] Matt Paige: And I think too, the, the other interesting point, and you hit on it, it’s like so many folks are looking externally for this talent when you have this like, workforce sitting there.

[00:19:20] Matt Paige: Yeah. That, that if you give some, you know, love and care to, in terms of upskilling them, you can help evolve them. So that that’s, that’s a great point and a big piece that’s missed a lot I think with a lot of organizations. 

[00:19:33] Trent Cotton: Yeah. And there’s actually, uh, some, there’s a lot of government funding that’s going into boot camps.

[00:19:38] Trent Cotton: Mm-hmm. They’re looking at, you know, some of these low to moderate income.

[00:19:49] Trent Cotton: So there’s boot camps out there that will teach you, uh, some of the basics of, of coding, software development, ai and some of this. So a lot of companies are actually being forced to shift the educational requirements and look more at non-traditional approaches. So it, it’s. It has had a very far reaching and very deep impact on the talent strategy for most companies in the us.

[00:20:13] Matt Paige: Yeah, no, that's a great point. So the next thing I wanna get into, you know, what's your take here: is this new evolution of AI going to close this talent gap, or does it make it wider? Like, maybe what's your take there? Or maybe there are alternate, you know, parallel universes where it could be the case on either side.

[00:20:36] Trent Cotton: Yeah, it's probably more the parallel. It's definitely, to reference another movie, kind of like The Matrix: which pill are we gonna take here? And it's a lot of what we've been talking about. Do we use this as an opportunity to re-skill some of those that may be replaced by AI, or whose jobs change as a result of implementing some type of AI practices?

[00:20:58] Trent Cotton: If we do, then I think that's gonna shorten the gap and let us tap into a huge workforce. And it's actually gonna help break some of the poverty cycles, because a lot of these frontline workers, you know, they just kind of stay in that mode. If we're able to go in and take them and give them a skill that's actually applicable in the new market, I think that's gonna help us economically.

[00:21:20] Trent Cotton: But it’s also gonna help from, from a talent gap. If we do not, that gap is going to continue to exponentially, um, Just grow and it’s, it’s terrifying, uh, looking at, I mean, 85 million jobs by 2030. That’s, that’s mind boggling, staggering. Um, I mean that, that’s, that’s more jobs than were added in. I can’t even think of how many years that, that, that we’re just going to lose in the blip of mm-hmm.

[00:21:49] Trent Cotton: You know, a decade or two decades. Yeah. That’s, 

[00:21:52] Matt Paige: that's crazy. I wanna get your take here. So there's Marc Andreessen of Andreessen Horowitz, you know, he wrote the seminal article on why software's eating the world, and he has this new one out, why AI will save the world. And just to call out a couple points, he has these different AI risks he goes through.

[00:22:10] Matt Paige: Mm-hmm. And I encourage anybody to check this out, super interesting read. But point number three is: will AI take all our jobs? And his perspective is, you know, every new major technology has led to more jobs and higher wages throughout history, with each wave accompanied by a panic of, you know, this time is different, it's gonna steal all our jobs and everything.

[00:22:34] Matt Paige: And he gets into this point that, you know, that doesn't necessarily happen. And then he goes on to call out, you know, if this new kind of evolution with AI is allowed to develop and proliferate throughout the economy, it may actually cause the most dramatic and sustained economic boom of all time.

[00:22:54] Matt Paige: With corresponding, like, record job and wage growth. It's an interesting point. And the last thing I'll hit on, and then let's chat there, I'm curious to get your take: he talks about this lump of labor fallacy, which is the notion that we have a fixed amount of labor to be done. It's like the supply and demand side.

[00:23:12] Matt Paige: Mm-hmm. And either humans are gonna do it or machines are gonna do it. But the fallacy comes into play, he goes on in the article to state, because when you have AI and things like that making it cheaper to do things, it increases the purchasing power of people. People have new demands and wishes and all these things, which creates all new types of businesses and verticals and industries and all these things.

[00:23:41] Matt Paige: And like one point he mentions from Milton Friedman: human wants and needs are endless. It's just kind of an interesting point, but what's your take there on Andreessen's, you know, kind of perspective? I think it's a unique one. What's your perspective there?

[00:24:00] Trent Cotton: So I'm gonna get to an answer. I'm gonna equate it to something like the economic downturn. So let's go back and look at 2008 through, let's say, 2011, okay? Mm-hmm. Banks failed. Massive, massive recession, setback. People are out of jobs. Mm-hmm. Darkest of times, you would think. But look at what came out of that. Prior to 2008, 2010:

[00:24:27] Trent Cotton: Did we know that we needed an Uber? Did we know that we could have a social network where, you know, people could go and actually create their online business and be an influencer and make money from that? Mm-hmm. So I agree with what he's saying, that this new technology will spawn an economy, or facets of the economy, that we don't know we need yet, because the need has not been created.

[00:24:54] Trent Cotton: So I do agree with that. From a talent perspective, it's gonna be really interesting to see. That sounds really, really exciting: new economic avenues, new job opportunities, new job growth. But I'm a little concerned that we can't fill the damn jobs that we have now. How are we gonna fill some of these new ones out there?

[00:25:13] Trent Cotton: So, yeah. Does it make it worse? Right. Right, right. So there’s like the personal side of me that gets really excited going, oh, I wonder what’s next. And then the talent part of me kicks in and goes, oh crap. You know, here comes. Mm-hmm. Here comes another level of, of complexity. But I, I do think that this is, this is an opportunity for a lot of organizations.

[00:25:32] Trent Cotton: We do this to a degree of going in and trying to get ahead of the curve. So looking at how do we train up and get high schoolers, we'll just start with high school. How do we get them involved in some of the tech jobs and the tech opportunities that are out there? Because a lot of kids, I know I did, think tech is fun.

[00:25:56] Trent Cotton: I like it as a consumer, but I don’t necessarily wanna sit there and code all day. Well, there’s other things in the technology sector besides just sitting down and coding. Uh, but there may be a kid out there that that’s how their brain works and they love that kinda stuff, but they don’t know that that’s actually an avenue.

[00:26:11] Trent Cotton: Mm-hmm. So we have a philanthropic arm called Hatch Futures, where we actually go in and do that. So anyone in the United States who's familiar with Junior Achievement, it's very similar to Junior Achievement, but we do it through STEM. So we teach them the basics of an algorithm using a

[00:26:27] Trent Cotton: Couple of pictures saying, Hey, put this in order. Guess what? You just wrote logic. That’s what an algorithm is, and it’s just an opportunity for us to be able to get them excited. So I think more companies that go in and start doing that, it’s going to help. Not in the immediate, but it’s gonna help us in the next five to 10 years as those I.

[00:26:44] Trent Cotton: High schoolers come out and, and they’re on the cutting edge of some of those technology programs. That’s one avenue. The other avenue, it gets back to how are we gonna reskill and redeploy our, our current workforce? And will we have the interest, will we have the, um, commitment to some of our current employees to make sure that they stay abreast of the new technology?

[00:27:08] Trent Cotton: So when those new opportunities do come up, we’re, we’re ready to meet them and we’re ready to push them into it. 

[00:27:14] Matt Paige: Yeah. It, I, you, you triggered one thought in my head too. That’s interesting with, we kind of hit on it earlier, you know, this, uh, evolution of generative ai, it’s democratizing AI in a lot of ways.

[00:27:24] Matt Paige: Mm-hmm. But a lot of folks, especially younger kids coming up, you know, they think of coding as like, I lemme see if I get the sides of the brain. Right. It’s more like right brained, like analytical thinking, all that. Mm-hmm. And it’s like, oh, I’m creative. That’s not for me. But what it does is like the actual physical coding.

[00:27:41] Matt Paige: Becomes maybe less important, too. There's other avenues you can leverage from a creative standpoint that I think is a huge unlock, whether it's, you know, with ChatGPT, or you have like Midjourney and people are creating whole movies with AI, right? Generative AI. And it's like this whole new world in a sense for the creative folks out there. It's gonna be really interesting to see how that evolves over time.

[00:28:08] Trent Cotton: It is. And with AI, I think it was probably a couple of months ago that one of the big articles on LinkedIn was, you know, a company was paying over $300,000 for a ChatGPT prompt engineer. Like, how do you structure the logic to get AI to do what you want it to do? It's crazy.

[00:28:28] Trent Cotton: So that’s not necessarily coding, but I mean, that is an avenue and you do have to understand the logic behind it. So, I think that there are going to be opportunities that open up that are not the more traditional, as we think of today, um, technical routes. Mm-hmm. And how are we going to educate the youth currently and how are we going to reeducate our workforce to be able to meet those, those demands.

[00:28:51] Trent Cotton: That that’s, to me, that is probably public enemy number one. 

[00:28:56] Matt Paige: Yeah. Do you think this whole evolution has an impact on the proportion of folks that prefer, like, gig-type work versus, you know, being gainfully employed by a single employer? Do you think it impacts that in any way, that kind of trend?

[00:29:10] Trent Cotton: It does. If you look at, it's called the workforce participation rate, so it measures, I think it's from 24 to, don't hold me to it, I think it's 62, or 48. Sorry. It's looking at what percentage of the population is actually working. Yeah, it has flatlined. In the eighties it was at 80%, you know, then it dropped down to 70%.

[00:29:37] Trent Cotton: We have been hovering in and around 60 to 62% over the last three or four years since Covid, because what happened with Covid is that it wasn’t just a recession where just the financial industry or the car industry was impacted. Covid was unilateral in its, um, decimation of jobs. And so a lot of people move to the gig workforce because they don’t, they don’t want to have to depend on their livelihood coming from someone else.

[00:30:02] Trent Cotton: Yeah. This is hyper intensive in the technology space. There are people that just enjoy working on a project. And they wanna do it for themselves. Uh, they’re, they’re a freelancer. They don’t wanna necessarily clock in or have to go to meetings or anything else like that. They enjoy that freedom of just doing the work that they’re passionate about and then clocking out and enjoying life.

[00:30:23] Trent Cotton: We’re starting to see a lot more of that behavior wise, mindset wise. You know, it’s something that we look at internally of, you know, we’ve got people that are highly engaged and really wanna be on that employee side and all the training. And then we’ve got others that just. All they wanna do is do their work and, and call it a day.

[00:30:39] Trent Cotton: Yeah. So, you know, we've had to learn to be very flexible and agile, to be able to accommodate both types of mindsets so that way we can retain the top talent in the market. If a company goes in and says it is this way or no way, you are probably going to have more of a talent gap, or a talent shortage,

[00:30:59] Trent Cotton: than your competitor who's willing to say, you know what, if you just wanna be a contractor, that's fantastic. Just get the work done, you know, and go and live your life the way that you want to. So it's another aspect that was intensified with COVID. 24% of the male market has left

[00:31:20] Trent Cotton: from 2020 to current, and economists cannot figure out where they went. Now, interestingly enough, if you look at the timeline and you look at the average number of hours of gameplay, it's almost proportional from when they left to the hours spent playing video games. I think that's linked. Yeah, they're playing video games, but I think it's more because they're doing gig work and they can go and, you know, enjoy games and work whenever they want to.

[00:31:45] Trent Cotton: So there's some benefits on both sides. But companies have got to learn to be a little less dictatorial and a lot more flexible and agile if they want to survive.

[00:31:53] Matt Paige: Yeah. The old don't cut off your nose to spite your face. Oh, yeah. So a couple rapid-fire questions for you. Okay. What AI technology is most exciting to you right now, whether it's a tool or anything in particular?

[00:32:07] Trent Cotton: For me, it is the impact on HR analytics. So sentiment analysis, forecasting, looking at... 'cause for the longest time you could look at what was going on internally, but then you would have to pull in data from external sources, and it was a very manual process. Yeah. Now you've got these tools that can just go and scrape the information.

[00:32:28] Trent Cotton: Say, here’s your retention, male, female age groups against what’s going on in the industry. Quicker than it would take for me to actually go and find the information two or three years ago. So the impacts on the HR analytics on talent analytics is, is probably one of the things that I am just, I’m like over there, like a, a kid at Christmas waiting to open up a gift.

[00:32:49] Trent Cotton: Yeah. I’m, I’m ready for it. 

[00:32:51] Matt Paige: Oh, there's so many tools coming out. It's so cool to watch. Who are you watching in the HR, you know, talent or tech talent industry? Who are you following that you really find insightful or interesting?

[00:33:03] Trent Cotton: You know, it, it’s, um, I have a love hate relationship with applicant tracking systems.

[00:33:08] Trent Cotton: Most of them are built for HR processes, not actually built for finding and engaging and nurturing talent. Um, there has been one that I, I, I’ve looked at not too long ago, Luxo, who has got all the AI and machine learning power for sourcing across multiple platforms. It’s got the nurturing and everything that, again, that’s driven by AI and nudges and all that from a.

[00:33:33] Trent Cotton: Candidate relationship management, and then it’s got all the cool little backend stuff with all the analytics. So to me, it’s just interesting to kinda watch some of these thought leaders take these thoughts and actually become advisors, uh, for some of the HR tech companies and, and having an immediate and direct influence on it.

[00:33:50] Trent Cotton: So I think some of the big boys that have enjoyed all the limelight in the talent market, like LinkedIn, that has gotten so much money invested in it and I don't really recognize a change over the last 10 years, you know, there's a lot of people that are coming for them. So I am here for the show.

[00:34:07] Trent Cotton: I’m kinda like that, that Michael Jackson just popping popcorn going, okay, which one’s gonna win? 

[00:34:12] Matt Paige: That’s right. Uh, all right, last one for you. Uh, what’s one thing you wish you could go back to your former self before you started your career to give yourself some advice? Any, any one piece of advice you would 

[00:34:23] Trent Cotton: give yourself?

[00:34:24] Trent Cotton: Uh, trust the journey. There are so many. So I, I started out as a banker. I put myself through college as a banker. Uh, got into HR because I hated working with HR professionals. And there are so many curves that I took, um, yeah, that did not make sense at the time, but now whenever I look at it, It makes complete sense.

[00:34:45] Trent Cotton: Makes sense. Yeah. The banker, the analytic, the minoring in statistics. That comes in handy in in, in HR now because I can look at data and see what is the story that’s being told. So it is just kind of trust the journey. 

[00:34:58] Matt Paige: Yeah. Trust the journey. I love that. Alright, Trent, thanks for joining us. Uh, before we go, where can people find you?

[00:35:05] Matt Paige: What’s the best spot to go find Trent and all the great insight you have? 

[00:35:09] Trent Cotton: LinkedIn.com, Trent Cotton. You can follow me on Twitter at Trent Cotton, all one word. You can follow me on sprintrecruiting.com, or the FutHRist, that's f-u-t-h-r-i-s-t dot com. And of course you can follow all the blogs and posts that we do on HatchWorks.

[00:35:28] Matt Paige: Yeah. And find the Sprint recruiting out on Amazon, I’m assuming, or I guess on your website directly so you don’t have to pay the piece to 

[00:35:34] Trent Cotton: Amazon. Yeah, it is on both, but, uh, yeah, you can go and get it on Amazon and it's on Kindle, and then the FutHRist comes out in the fall. 

[00:35:42] Matt Paige: Yep. Great. Trent, I appreciate you joining us.

[00:35:44] Trent Cotton: Thank you. I enjoyed it.

The post How Generative AI Will Impact the Developer Shortage with Trent Cotton appeared first on HatchWorks.

5 Ways Generative AI Will Change the Way You Think About UX and UI Design https://hatchworks.com/built-right/generative-ai-ux-and-ui-design/ Tue, 08 Aug 2023 10:00:03 +0000 https://hatchworks.com/?p=29698 Generative AI has taken the world by storm. In a short space of time, it has changed the way many people do their work. For UX and UI designers, it has the potential to change the entire design process – but how?  In this episode of Built Right, host Matt Paige sits down with HatchWorks’ […]

The post 5 Ways Generative AI Will Change the Way You Think About UX and UI Design appeared first on HatchWorks.


Generative AI has taken the world by storm. In a short space of time, it has changed the way many people do their work. For UX and UI designers, it has the potential to change the entire design process – but how? 

In this episode of Built Right, host Matt Paige sits down with HatchWorks’ Andy Silvestri, Director of Product Design, to break down the five main ways generative AI will change the way we think about UX and UI design. 

Keep reading for the top takeaways or tune in to the episode below.  

We’ve identified five main ways generative AI could impact the design process – all the way from those early stages of design to the final product.  

 

1. A shift from an imperative to a declarative design

One way that UX and UI design have changed is that there has been a shift from an imperative, or point-and-click style of design, to a more declarative approach.  

Now, you can declare what you want from a tool, and it will work toward a solution. This allows you to work with what Andy calls a “design copilot,” which could result in machines reading a design brief or refining a narrative. A declarative approach is essentially a dialogue you have with a machine, which has the potential to be a game-changer in the design space, especially for smaller teams with limited resources.
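To make the contrast concrete, here is a small, hypothetical TypeScript sketch. The imperative version spells out every step; the declarative version only describes the desired outcome and leaves the "how" to a tool, which is the role a generative design copilot could play. The ScreenSpec shape and renderFromSpec stub are invented for illustration; no current design tool exposes exactly this API.

```typescript
// Imperative: spell out every step yourself (the point-and-click mindset, in code form).
function imperativeCard(): string {
  let html = "<div class='card'>";
  html += "<h2>Sign up</h2>";
  html += "<input placeholder='Email' />";
  html += "<button>Create account</button>";
  html += "</div>";
  return html;
}

// Declarative: describe the outcome and let the tool (or an AI copilot) decide the steps.
interface ScreenSpec {
  intent: string;     // what the screen is for
  elements: string[]; // what must appear, not how to draw it
  tone?: "playful" | "minimal";
}

const signupSpec: ScreenSpec = {
  intent: "Collect an email address and create an account",
  elements: ["heading", "email input", "primary button"],
  tone: "minimal",
};

// A generative design tool would interpret the spec; here we just stub the idea.
function renderFromSpec(spec: ScreenSpec): string {
  return `/* generated from intent: "${spec.intent}" (${spec.elements.length} elements) */`;
}

console.log(imperativeCard());
console.log(renderFromSpec(signupSpec));
```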

 

2. Getting to proof of concept quicker 

Another way generative AI could change the way we design UX is by allowing design teams to reach a proof of concept much faster. In the ideation phase, design teams will typically grind out multiple iterations of flows, wireframes, and other elements to support a proof of concept.  

But AI tools could shave time off for designers. It could also allow them to explore different ideas in those early phases without needing to spend a large amount of time drawing them up.  

 

3. Exercise caution while using generative AI 

While generative AI can save time and refine the design process, it's still important to be cautious while using it. Designers will still need to check the quality of AI output and assess it for bias, inaccuracies, or even copyright infringement. 

That’s why human designers aren’t going to be fully replaced by AI anytime soon. We still need human eyes to check for these things. Common issues with generative AI in the design process include shadows, angles, depth of field, and proportions looking “off.” But it will take a human to spot them. 

 

4. Impact on creation and utilization of design systems 

Generative AI can also help in the later stages of the design process by speeding up workflows and helping to finalize designs. For example, you could take the design system and ask it to decrease all color gradations by 10% – which then takes some of the more manual work out of tweaking designs.  
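As a rough sketch of what that kind of bulk tweak could look like under the hood, here is a hypothetical example that darkens every color token by 10%. The token format and the interpretation of "decrease all color gradations by 10%" (scaling each RGB channel toward black) are assumptions for illustration, not a real design-system API.

```typescript
// Hypothetical bulk edit of design tokens: darken every hex color by a given amount.

type Tokens = Record<string, string>; // token name -> hex color

function darkenHex(hex: string, amount = 0.1): string {
  const channels = hex.replace("#", "").match(/.{2}/g) ?? [];
  const scaled = channels.map((c) => {
    const value = Math.round(parseInt(c, 16) * (1 - amount));
    return value.toString(16).padStart(2, "0");
  });
  return `#${scaled.join("")}`;
}

function adjustTokens(tokens: Tokens, amount = 0.1): Tokens {
  return Object.fromEntries(
    Object.entries(tokens).map(([name, hex]) => [name, darkenHex(hex, amount)])
  );
}

// Every color token in the system gets 10% darker in one pass.
console.log(
  adjustTokens({
    "color.primary.500": "#3366ff",
    "color.neutral.100": "#f5f5f5",
  })
); // { 'color.primary.500': '#2e5ce6', 'color.neutral.100': '#dddddd' }
```

A human designer would still review the result against contrast and accessibility requirements, which ties back to the caution in point 3.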

By making the design process more efficient from those early ideation stages to finalizing the designs, designers are freed up to spend more of their time on other elements of the process.

 

5. Keeping users engaged 

Andy believes generative AI will give practitioners a chance to be more diligent in the process. Designers can spend more time thinking about the thing that’s most important in UX and UI design – creating an all-round good experience for users.  

Getting feedback from users and acting on that feedback is still essential, which is why human designers are still so crucial to the process. As of yet, AI can’t really assess a design for value, efficiency and accessibility in the same way a real person could.  

Overall, generative AI has great potential to streamline elements of the design process to save time. However, designers will still need to oversee the work and adjust accordingly to ensure that the design is user-friendly and provides real value for people. 

For more information on how generative AI is changing the design process, check out the full episode.  

Excited about the possibilities of Generative AI in UX/UI design? Make sure to subscribe to the Built Right podcast for more insights and discussions like these. Share your thoughts and experiences with us. Let’s create, innovate, and elevate the world of design together! 

Experience the power of AI in driving software development with HatchWorks’ Generative-Driven Development™.

We know what AI can and can’t do. Our seasoned team members guide you through every aspect of the software development lifecycle and strategically introduce AI when and where it adds real value to your solution.

See how it can benefit your projects.

[00:00:00] Matt Paige: We got a spicy one for you, Built Right listeners. Today we're breaking down the five ways generative AI will change the way you think about UX and UI design. And by the way, number two, I believe, will actually create a complete shift in the way we think about designing and building software. So make sure to stick around for that one.

[00:00:28] Matt Paige: But to help me do it today, I'm joined by HatchWorks' own Andy Silvestri, who brings 20 years of experience in the digital design space, including graphic design, creative design, user experience, customer experience, product strategy, all the things. And he even ran his own experience strategy and design firm for 10 years prior to leading our product design practice here at HatchWorks.

[00:00:53] Matt Paige: Andy’s a returning guest, so check out episode two if you like this one with Andy, but welcome back to the show, Andy.

[00:01:00] Andy Silvestri: Hey, thanks Matt. It’s good to be back.

[00:01:03] Matt Paige: Yeah, good to have you, and I'm really excited about this topic and this format that we're gonna get into today. So we at HatchWorks have been digging into everything generative AI as of late as it relates to the world of UX and UI design specifically for this conversation.

[00:01:21] Matt Paige: From testing new tools, talking to folks in the industry. And what we've done is we've distilled all of these learnings down to the five key points so you don't have to. This includes how it's making things easier, better, faster, but also what we need to watch out for, plus things that may sound completely foreign.

[00:01:39] Matt Paige: Now we believe this is gonna be standard practice in the future, so we're trying to help you get ahead of that. Without further ado, Andy, let's get into number one. So I'll key it up for us and let you take it from there. But number one is taking this shift from an imperative to a declarative design approach.

[00:01:59] Matt Paige: Take us through that. What is, what does this mean and how is this going to evolve?

[00:02:04] Andy Silvestri: Yeah. So I think as practitioners, right? We’re coming from this time of you point and click in the imperative fashion of doing everything, right? And now we’re getting into this model of course, where like you can declare what you want from the tool, right?

[00:02:18] Andy Silvestri: I can say gimme this, gimme that, and the tool will use its AI powers to give you a result, right? I think that focus on moving more into this declarative kind of approach to design is really an interesting one, because that idea of working alongside perhaps a design co-pilot of sorts, where you have this running narrative with the generative AI tool, really has the potential to be a pretty big game changer. And not only from the standpoint of just a singular prompt and response where I'm like, hey, give me a design that has this, this, and this, and the thing gives it to you.

[00:02:52] Andy Silvestri: But maybe more so the ability to work through like a design brief and refining a narrative, tweaking things, adjusting things, that kind of stuff. So really it's this kind of dialogue you have with the machine, and yeah, seeing that as having a really big upside to streamlining the process.

[00:03:12] Andy Silvestri: Especially for smaller teams where it's maybe a design team of one. And maybe you're a startup, you have a smaller budget, right? So you're really leveraging the tool as almost, again, a co-pilot or another designer on your team.

[00:03:29] Andy Silvestri: There are some tools out there that are embracing this kind of dialogue approach. A lot of this stuff's in beta right now, so it will be interesting to see where this goes. We've seen it from the imagery perspective of gimme a still image or gimme an illustration. But in the sense of screen design, there are some big players who haven't really weighed in, specifically Figma and, by extension, Adobe. So it's gonna be really interesting to see what they bring to the table.

[00:04:01] Matt Paige: Yeah, it's really interesting. This one, as you start to think about it, blows my mind a bit because there's been so many best practices established with the way we've done things to date.

[00:04:13] Matt Paige: There are heuristics, and some of that will still stay around. But it's based on the human interacting with a machine in a particular way. And these generative AI tools and this way of interacting with a machine via natural language, it's still got room for improvement. But I always look back to like cell phones.

[00:04:35] Matt Paige: Back in the day we were playing Snake, or the internet back in the day, we were dialing up on AOL. Things have progressed so much. And you already see what's happening with generative AI, how fast it's progressing. This has the potential to really shift how we interact with technology, which really flips on its head, potentially, how we've been doing things to date.

[00:04:58] Andy Silvestri: Yeah, a hundred percent. There's always that kind of room for improvement, room for advancement. I think we're seeing that right now, in this moment, with all these tools that are coming out. And really the hard part, I think, right now is staying on top of it.

[00:05:10] Andy Silvestri: All right. Yeah. It’s been an interesting ride for sure.

[00:05:15] Matt Paige: I think it'll be interesting too. You think of this concept of the innovator's dilemma. Take a Salesforce or a HubSpot, that's how their whole solution's built, most solutions today are. How are they going to adapt to this new way of interacting

[00:05:31] Matt Paige: with technology? Do they adapt and make the shift, potentially upset some existing customers, right? Yeah. To try to stay ahead? Or is it gonna be what we've seen so many times, newer competitors coming into the space without the bloat? And that gets back to an interesting point you just made.

[00:05:48] Matt Paige: You may not need an army of people to do some of this stuff so that it changes the game a bit.

[00:05:55] Andy Silvestri: Yeah, it's an interesting subset of that example, what Adobe is currently doing with their AI tools, right? They're slow rolling things into their existing product base, right?

[00:06:05] Andy Silvestri: So you're seeing a feature here and a feature there come into Photoshop or come into Illustrator, right? I think just today they released a color modification tool within Illustrator. So that's one approach: just tease it out, get a little bit of proof of concept, get a little bit of traction around something before just throwing a huge new application right in front of people.

[00:06:28] Andy Silvestri: Like kind of meet them where they are in their current workflows. Yeah. Yeah, it’s an interesting aspect of how all this stuff is coming together.

[00:06:35] Matt Paige: So that's number one, the shift from imperative point and click to more of a declarative, chat-focused design, how that's gonna impact user experience, and something to stay on top of. Number two.

[00:06:48] Matt Paige: And, this is the one I called out earlier. I think this is one of the most interesting ones. I think it’s the ability to get to proof of concept quicker. And some of this is obvious, but we’ve been playing around with some tools that, you know, whether it’s that tool or that concept or idea and somebody else adopts it, but there’s big potential here to really accelerate this process.

[00:07:08] Matt Paige: So take us through this one.

[00:07:10] Andy Silvestri: Yeah, this is great. And this is probably, at least in my opinion, one of the biggest upsides to at least currently using generative AI in design. And from both the standpoint of like low fidelity, medium fidelity, high fidelity all those things. I think the, idea is why not use these tools when the stakes are low, right?

[00:07:29] Andy Silvestri: If you're in an ideation phase or concepting phase, grinding out multiple iterations of flows, wireframes, interface concepts, components, anything to support a proof of concept, it's not really gonna be a slog anymore. Using these tools to get directionally correct will take a lot less time.

[00:07:49] Andy Silvestri: And probably, in an interesting way, people are afraid of this idea of, oh, it's gonna replace designers completely, but I think it's actually going to open up the door for designers to become even more exploratory in those early ideation and concepting processes.

[00:08:06] Andy Silvestri: Because you're gonna have more time, 'cause you're not necessarily doing all the heavy lift on the backend. So if, for example, I can say to a tool, hey, I need three concepts of this, this, and this, give it some requirements and work through that prompting, and then take what it gives me and leverage it to refine further as needed. That's a big step up from just, okay, I gotta make three concepts and I gotta think through every single piece. Yeah, I think this is gonna be a really big lift in terms of workflow.

[00:08:42] Andy Silvestri: And again, like in that earlier phase, I think is when it’s gonna be most profound in terms of just kinda saving time.

[00:08:49] Matt Paige: Yeah. You think of how we interact with our clients today, that upfront piece, it gets shortened so much. You almost could think of this world in the future where you're in a workshop and

[00:09:00] Matt Paige: you're actually getting an idea as you're working through the workshop. And then there's the debate of, okay, is the whiteboard still superior, because if you start to get real-looking designs, the discussion immediately shifts to color and placement and not the core functional piece of things.

[00:09:23] Matt Paige: And I think that’s

[00:09:24] Andy Silvestri: where the tactful use of those tools in those scenarios, I think, is what's gonna be paramount, right? Because you can think about that of, yeah, we might be going through and doing a workshop, getting some stuff on the whiteboard with a client.

[00:09:37] Andy Silvestri: We come back and we say, okay, give us a week to turn around some rough proofs of concept; that might become, let's do it right now with what we just ideated through and see what we can get. But of course, to your point, leveraging it at the fidelity that makes the most sense, so that you don't jump too far ahead or get down a rabbit hole and get

[00:09:54] Andy Silvestri: kind of distracted from the discovery task at hand, right? But yeah, that's a really good way to think about it. Let's put the concept out there and create the artifact for it with a much lighter lift, and use that to further the conversation.

[00:10:11] Andy Silvestri: So that's what I mean by getting to the concept quicker and getting it out the door in an efficient manner.

[00:10:20] Matt Paige: Yeah. And you mentioned it too. So it's two things, in my view. I've heard some others express this too. The minimum bar is gonna be raised or lowered, in whichever way you look at it, in terms of it's gonna be so much easier to do stuff, right?

[00:10:39] Matt Paige: But I think, to your point, it elevates the true practitioners, the true designers, that skillset. It's like back to the Renaissance, it's the artists. It's those folks that start to become more empowered, and I think that's the future: those types of folks become even more important, critically important, as this new technology starts to take shape.

[00:11:05] Andy Silvestri: Yeah, for sure. A hundred percent.

[00:11:09] Matt Paige: So that's number two, getting to this proof of concept quicker. Multiple iterations of proof of concept testing. That whole idea of the agile mindset, it just accelerates things. But getting to number three, this is more of the cautionary tale.

[00:11:25] Matt Paige: So exercise caution when leveraging generative AI tools and solutions. Take us through the more doomsday one on the list here.

[00:11:36] Andy Silvestri: Yeah. And I think this is a good one to think about in that it highlights the need for curation, I think, now more than ever.

[00:11:46] Andy Silvestri: And I mean that in the sense that it's not just about proofing the quality of what comes back from these tools; it's proofing for bias, proofing for inaccuracies, copyright infringements, which has been in a lot of the conversation around using some of these tools, right?

[00:12:00] Andy Silvestri: There's still a very real need, I think, for a designer's eye, going back to what you were just saying about the Renaissance, right? There's still a need for this skillset. And we will still need to employ, I think, a very good bit of common sense when we're using these tools, right?

[00:12:17] Andy Silvestri: Like right now, whenever we're looking through things in the space that are generated via AI, there's a bit of a tell, a look and feel that I think this imagery is taking on, right? Maybe there's odd angles, the shadows are off, depth of field is not quite right.

[00:12:32] Andy Silvestri: Proportions are weird. The thing I think about is when Photoshop became more mainstream and Photoshopped turned into a verb. You could tell that something had been photoshopped, right? And I think people that have been doing this for a while are seeing that kind of tell of, oh, AI is doing this and something's not quite right.

[00:12:54] Andy Silvestri: That's one way to think about it. I guess the scary thing, with what we've seen in the last couple months of this stuff becoming mainstream, is how good will it get, right? If you see the increase in quality in just the few months since ChatGPT dropped on the market, right?

[00:13:11] Andy Silvestri: And there's this increased focus on more sophisticated prompting, and prompt engineering is becoming a real thing, and people are learning more about how to use these tools and interact with them. So very soon, that tell may be indiscernible from reality. So that's the one thing, okay, let's use curation.

[00:13:30] Andy Silvestri: Let's curate this. Let's make sure that we are using a bit of common sense. And maybe that goes into how we actually disseminate the work in an honest fashion, right? Somehow indicating that this was made using AI tools to help with the design, that kind of thing.

[00:13:48] Andy Silvestri: So it’s very interesting times. We’re still in that gray area, right? But I think that with, good practice, we’ll get there.

[00:13:57] Matt Paige: Yeah. And there's gonna be instances, we saw recently, this is more ChatGPT focused, but the lawyer that was creating a legal brief off of ChatGPT, and it just completely hallucinated false information.

[00:14:12] Matt Paige: And what does that begin to look like on more of the design side? I think that'll be interesting to see. To your point, does it look like it's been produced through generative AI? Is there a negative connotation there? But you mentioned it earlier, it's this concept of co-pilot that's the important piece.

[00:14:32] Matt Paige: It's your co-pilot. It's not set it and forget it. What was that, the George Foreman Grill back in the day? Or one of those? Yeah, something

[00:14:40] Andy Silvestri: like that. Yeah. But you said it, it's interesting, right? Just as Photoshop coming to the masses allowed a lot more people to

[00:14:52] Andy Silvestri: manipulate their imagery and, quite frankly, do better work with the types of photos they were taking. This is a similar kind of effect, where it's not all bad, right? Just because something has been AI'd doesn't necessarily mean that it's going to be a bad thing. It's, again, access,

[00:15:08] Andy Silvestri: raising that baseline, getting more people into design, getting familiar with the tools. I think that's a silver lining to this kind of cautionary doom-and-gloom outlook of AI replacing jobs and things like that. Again, very interesting, taking the good with the bad, but we'll see where it goes.

[00:15:25] Matt Paige: So, this generative AI kind of democratizing AI and potentially design-related things: good or bad, in your opinion?

[00:15:36] Andy Silvestri: I think it's a good thing, right? I think the more we as a society can get better with the tools, the better. Of course there's always a negative side to new technology, and there's people that are gonna

[00:15:49] Andy Silvestri: use it for negative things, bad actors. Yeah. All that stuff's gonna happen. But I think, generally speaking, the more empowered people can be with these tools, and, again, at the end of the day, if it's delivering value to folks, right, and they're seeing a real

[00:16:03] Andy Silvestri: benefit to using this technology, then I'm all for it. But again, yeah, there's two ways to look at it, of course. So

[00:16:12] Matt Paige: Yeah, you hit on the core thing. It's all about value at the end of the day, and we talk a lot about that at HatchWorks, delivering and owning the outcome. But it goes back to the dot-com era, you slap .com on anything.

[00:16:27] Matt Paige: The concept of value just completely got ignored. And there's some of that going on today, where it's the hype train of there's a new generative AI tool out every day, multiple ones. But do they provide value? Are they defensible? Do they have some kind of differentiated moat? When you can spin something up on a weekend, probably not.

[00:16:47] Matt Paige: So I think that back to first principles is still gonna hold true even with this. Yeah.

[00:16:55] Andy Silvestri: Right on.

[00:16:57] Matt Paige: Alright, so that is number three, the cautionary tale with generative AI tools. Number four, I like this one. This is the impact to creation and utilization of design systems.

[00:17:11] Matt Paige: For me, this gets to easier standardization, just the mundane tasks. Get 'em outta here. But take us through this one.

[00:17:19] Andy Silvestri: Yeah, sure. And this is a click deeper into what we were talking about earlier around developing whole concepts via prompting. This is more, I think, when you get into that stage of finalizing a design, there's a lot of work that now has to be done to prepare that to be taken further in the delivery lifecycle, right? So leveraging the automation aspects of what these tools have is, I think, potentially a big speed-up in terms of a designer's workflow, right?

[00:17:51] Andy Silvestri: If you could imagine just simply making a design and then prompting a tool to say, generate the components for color, typography, the focus states from what I just designed, and it's boom, there it is. Or take the design system, decrease all the color gradations we have by 10%, and boom, it's all done.

[00:18:10] Andy Silvestri: You don't have to go in and do all that individual legwork. So while it sounds like little stuff, all of that, from the standpoint of a design team's effort, can really add up and shave a lot of time. So I think this is really a powerful aspect of that latter stage of design finalization.

[00:18:29] Andy Silvestri: Not just pulling components apart for the sake of delivery, but also in refinement. It's very similar, I think, to the time when the concept of reusable symbols and components was introduced in interface design, in programs like Sketch, right? It really changed the way we as designers thought about our workflow.

[00:18:49] Andy Silvestri: It's, oh, I can create this one component or this one symbol that I can use an infinite number of times within my design system. Great. It was a huge efficiency gain for a lot of people's day to day. And again, I think we're in that moment with these tools coming to light of, how does it come into a workflow?

[00:19:08] Andy Silvestri: How does it shave off time or make things that much easier? And I think that's, again, where the real value and the real adoption's gonna take place. So whoever, in whatever fashion, comes out with the killer app when it comes to design, again, this is why I was talking about Figma earlier.

[00:19:23] Andy Silvestri: 'Cause a lot of the industry is waiting to see what their move is when it comes to generative AI and baking it into their whole software suite. These are interesting times, and it's the stuff that will impact a hundred percent of how we approach our workflows as designers here at HatchWorks too.

[00:19:45] Andy Silvestri: Really interesting stuff, really looking forward to learning more and kicking the tires on even more tools.

[00:19:52] Matt Paige: Yeah. And everybody that's listening, the whole summation there, it's speed to value, right? It's helping increase speed to value, efficiency, all of that.

[00:20:01] Matt Paige: But for the listeners that may not be acquainted with what a design system even is, what is a design system? Give us just some context for what it is, why you have design systems, the value of them. I think it's a piece that's overlooked sometimes if you're building product and you're not as acquainted with user experience and UI design, things like that.

[00:20:27] Andy Silvestri: Yeah. So at its core, a design system is really the foundations of what makes up the visualization or the interactive elements of your product, your solution, whatever it is that you're designing, right? You can think about the atomic method of breaking things down to the atomic level: these are our very small elements that build up into molecules, that build up into organisms, et cetera. Having a system of all those components within your design is really, again, the foundational part. And as a designer, or someone who owns that design system for a specific product or brand, I can shepherd that and work through, okay, do we need to make updates to colors?

[00:21:13] Andy Silvestri: Do we make updates to our typography suite? All of that kind of thing. Those are the nitty-gritty pieces that all, again, funnel back up to the final interaction, the final interface. So again, that's why there are a lot of components to design systems that need to be managed and overseen.

[00:21:30] Andy Silvestri: And again, with these AI tools coming to light, there's a lot of opportunity for a lot more efficiency to be interjected into that whole workflow. Yeah.

[00:21:40] Matt Paige: Yeah, and I think to your point, what's gonna be interesting, at least what I'm interested in, is it gonna be your Figma, obviously your Adobe, the large players?

[00:21:53] Matt Paige: Are they going to adapt to these things and win in the market, or are you gonna have smaller players? Like a Figma, they have a moat: established people are bought in and they use that solution. Are new players gonna be able to come in by advancing and kinda leapfrogging here, or by plugging into those types of solutions?

[00:22:15] Matt Paige: Do you think it's gonna be the big players out there that ultimately win, even if it may take a little more time to actually build this into their solutions, or do you think it's more of a newcomer's

[00:22:27] Andy Silvestri: space? Yeah. The established players definitely have a leg up, right? Especially Figma, who now has Adobe in their corner, right?

[00:22:35] Andy Silvestri: So they have the resources to push this stuff through more quickly. They got the money. But that's not to say that a new player couldn't come out of left field and really nail all of the core value prop that a Figma does, and add on top of it something related to generative AI, and beat them to the punch.

[00:22:52] Andy Silvestri: Yeah. It's completely feasible. And there are groups out there, I've seen Buzzy, I've seen Uizard, all these kinds of things that are their own ecosystems. But maybe this is the time that a new player comes out and takes the crown.

[00:23:09] Andy Silvestri: I don’t know.

[00:23:11] Matt Paige: Yeah. I guess we look back as far as Figma, and they're a little bit older, but they definitely came in and just stole the show. Alright, so that is number four, the impact on creating and utilizing design systems. It's gonna be big there. It's getting rid of all the mundane kinda tasks so designers can focus on more important stuff.

[00:23:34] Matt Paige: The last one, and I mentioned the POC one is the most impactful, but this one's critical, I love this one. It's the importance of keeping your users actively engaged when they're in copilot mode so they don't crash the plane. What does this mean? And this is thinking about the users of generative AI, or your products and services that leverage generative AI.

[00:23:59] Matt Paige: Take us through this one.

[00:24:01] Andy Silvestri: Yeah. And I love this notion. I think there's this idea of, oh, generative AI is just gonna allow us to prompt, do whatever, and then go about our day and do the really important things. But I think it's really more of an opportunity for us as practitioners to be diligent.

[00:24:20] Andy Silvestri: And quite frankly, this is why I think we'll always have jobs as designers, because even with the assistance of generative AI and all of these potential big wins we've talked about in terms of automation, a human is still responsible for creating a good experience for other humans, right?

[00:24:40] Andy Silvestri: You have to factor in what it is that other people want in a service or product, right? And make sure that you're delivering on that. So no matter how quickly or efficiently you get there, you still have to test concepts. You still have to gather feedback from real people.

[00:24:56] Andy Silvestri: You have to synthesize that feedback. You have to act upon the feedback, put it into a roadmap, work on it, right? I think generally one way to think about it is that just 'cause we have all these tools that are taking the place of other humans, you're not going to take the human out of human-centered design, right?

[00:25:14] Andy Silvestri: Yeah. So we’re still gonna need to be able to be diligent and, oversee and shepherd this work, as I was mentioning earlier.

[00:25:22] Matt Paige: Yeah. As long as the solutions we're building are for humans, I think that's 100% true. I wonder, though, if at some point you do have these AI-type agents and you're building experiences and solutions for them, what does that look like?

[00:25:37] Matt Paige: And maybe they're just completely headless. It's just the API layer making everything easy and accessible. That'll be an interesting evolution if things change in that way.

[00:25:48] Andy Silvestri: Yeah, with the speed we're going, again, going back to what we were talking about earlier, with how quickly it's evolving, how much things are adapting to people prompting better, doing better things.

[00:26:02] Andy Silvestri: Like, what is it gonna be in three months, six months, another year from now? We're looking back over the course of only a handful of months right now, and so much stuff has come to the market. So many ideas are out there. The opportunity to really embrace this area and lean into it as practitioners is really fascinating.

[00:26:23] Andy Silvestri: And so, again, may the best tool win, right? That's the way I see it: if it's gonna be improving our workflows as designers, if it's going to be increasing value to end users, if it's going to be making really any aspect of the design process just a little bit easier, even if it's that little bit of an efficiency gain, I think it's worth it, and it's something that we can continue to work through and learn from.

[00:26:48] Andy Silvestri: So yeah, really exciting times, and a lot more to look at. We have a list, as you mentioned, Matt, a list of tools that we're looking through that just seems like it gets longer and longer by the day. So

[00:27:00] Matt Paige: yeah, we got a Notion board of about a hundred tools, I feel like, that we're going in and testing.

[00:27:05] Matt Paige: And one thing we're doing too, off topic, but we're creating our guidelines for using these tools, which is important for organizations to think through. If we're just testing a tool internally for an internal project or solution we're building, great. That's one level.

[00:27:22] Matt Paige: If we're doing something on a client project that is potentially exposing client data or something like that, that's another level of consideration. So that's one thing we're defining at HatchWorks, and I encourage every other organization to do the same. It's about creating a standard set of practices and agreed-upon rules of how you engage with these tools.

[00:27:49] Andy Silvestri: Right, yeah, a hundred percent. I think it's super important, getting back to that kind of curation beyond just the quality of what you're getting back from these tools. It's about: is it ethical? Is it accurate? Is it not infringing on copyrights? All of that kind of stuff.

[00:28:05] Andy Silvestri: And then, yeah, is it not exposing you to risk? There's a very real factor there. There's just all this gray area that everybody's gonna have to take their own path through and decide what's right for their organization. But a lot of those kinds of considerations, the more you can define them, talk through them, make sure you have a plan, or at least an approach

[00:28:27] Andy Silvestri: formulated, the better off you'll be, because we're not gonna see the end of new tools coming out. And, yeah, new, weird deepfakes and stuff like that, and people, again, being bad actors and leveraging the tools in not-so-nice ways. So

[00:28:40] Matt Paige: yeah. Another offshoot too, we're playing around with Firefly, Adobe's version of Midjourney, and you can actually now prompt in there the type of camera lens you're using and things like that.

[00:28:52] Matt Paige: It's just taking it to a whole nother level, for better or for worse. But to your point earlier, what jobs are gonna exist that don't even exist today? You think back pre-internet, there are all kinds of jobs that exist now that never existed previously. Do you think we're gonna have that kind of stepwise change from generative AI?

[00:29:14] Matt Paige: Is it that level of impact, or do you think it's somewhere less, somewhere more? Where do you think? Play fortune teller for us.

[00:29:21] Andy Silvestri: Yeah. I think within a year or two you're gonna see prompt engineering on everybody's resume, right? I think that's the next step. It's maybe not wholly different job titles or roles, but it's more skillset building, right?

[00:29:38] Andy Silvestri: So I think in the near-term years, the more that becomes mainstream in terms of understanding how to interact with these tools, as we were talking through, the more valuable that's gonna be to an individual, if I have that skill and I've invested in it, and also to an organization who's leaning into these tools.

[00:29:56] Andy Silvestri: Understanding how to talk to the robots is gonna be a really big deal. So, yeah, that's all I can say right now. Though we might see, five years from now, that there is a prompt designer, something like that, or an AI designer, somebody who's got their little sidekick

[00:30:13] Andy Silvestri: that's the AI. I don't know.

[00:30:15] Matt Paige: And I guarantee you'll see required skills: 10 years of experience with a tool that's been out there one year. That's guaranteed to be out there. All right, so the five, just to recap. Number one, the shift from imperative point and click to a declarative, chat-focused design.

[00:30:31] Matt Paige: It's changing how we think about user experience. Number two, the ability to get to proof of concept quicker. Again, speed to value, speed to testing, agile taken to the next level. Number three, exercising caution when leveraging generative AI tools and solutions. Number four, the impact to creation and utilization of design systems.

[00:30:52] Matt Paige: Get the mundane stuff outta there. And then number five, don't crash the plane. Be considerate of users actively engaging with the solution. You see this with Tesla, they require you to, I think, touch the steering wheel every so often. Yep. But it's just to remind you that you are engaged with the thing.

[00:31:10] Matt Paige: But really enjoyed the conversation today, Andy. And for those that are looking to go a little bit deeper on generative AI, we got the blog out there on generative AI, we're doing one specific to software development, and we'll have another one coming out specific to UX and design, and we'll link to some of those in the show notes.

[00:31:27] Matt Paige: And then we also have, I believe it's episode eight, it was our Built Right Live podcast we did with Jason Slacker. Go check that one out. This guy is insanely smart. He leads an AI empowerment group right now and used to be leading AI products at Ance Health. Tons of experience with a lot of crazy stuff.

[00:31:49] Matt Paige: And there we get into the idea of how you validate and identify winning generative AI use cases for your business. So don't miss that. But thanks for joining us today, Andy.

[00:32:02] Andy Silvestri: My pleasure, Matt. Thanks as always. And yeah, let's talk more, because I think in another six months we're gonna have a whole nother little layer of topics to talk about, so it'll

[00:32:11] Matt Paige: keep evolving.

[00:32:12] Matt Paige: We'll look back at this and we can test the hypotheses we've had. Right on. All right. Thanks Andy. Thank you.

The post 5 Ways Generative AI Will Change the Way You Think About UX and UI Design appeared first on HatchWorks.

How AI Revolutionizes Recruiting https://hatchworks.com/podcast/ai-in-recruitment/ Wed, 12 Jul 2023 13:35:17 +0000 https://hatchworks.com/?p=29672 In the recent episode of The Elite Recruiter Podcast, recruitment and AI expert Trent Cotton joins host Benjamin Mena to explore the influential role of AI in recruitment. Trent begins by sharing his journey, emphasizing the power of human connections in successful recruitment and how AI can facilitate these. He inspires listeners with his personal […]

The post How AI Revolutionizes Recruiting appeared first on HatchWorks.


In the recent episode of The Elite Recruiter Podcast, recruitment and AI expert Trent Cotton joins host Benjamin Mena to explore the influential role of AI in recruitment. Trent begins by sharing his journey, emphasizing the power of human connections in successful recruitment and how AI can facilitate these. He inspires listeners with his personal transition from finance to HR, highlighting the importance of staying present and embracing unexpected career changes.

Listen to the podcast

Trent further delves into the practicalities of using AI in recruitment, focusing on the automation of mundane tasks to allow recruiters to concentrate on high-value activities. He urges HR leaders to carefully evaluate AI solutions before implementing them. Wrapping up, Trent shares invaluable lessons from historical military leaders applicable to the HR landscape, his favorite productivity tools, and strategies to remedy common recruitment process dysfunctions.

Key Takeaways:

  • AI tools can significantly enhance the recruitment process, enabling recruiters to focus on high-value tasks and connections.
  • Effective use of AI is not limited to developers but is a valuable asset for anyone involved in recruitment.
  • Embracing change and learning from historical lessons can significantly impact recruitment outcomes.
  • Careful evaluation and implementation of AI solutions are critical for successful adoption in HR.
  • Various productivity tools and principles can counteract common dysfunctions in the recruitment process.

HatchWorks is pioneering a new era in software development with our Generative-Driven Development™.

Our innovative approach can streamline and enhance your projects.

Learn how we can transform your project today.

The post How AI Revolutionizes Recruiting appeared first on HatchWorks.
