Generative AI Statistics: The 2024 Landscape – Emerging Trends, and Developer Insights
https://hatchworks.com/blog/software-development/generative-ai-statistics/ (Fri, 19 Jan 2024)


With 2023 dubbed the year of Generative AI, what advancements, trends, and use cases might we expect in 2024 and beyond?

To find out we need to look at recent research and AI stats.

In this article, we’re analyzing the statistics and trends informing the adoption and use of AI.

Infographic on "Generative AI Statistics: 2024 Trends and Developer Insights" with icons representing code, user login, and analytics on devices.

Throughout, we’ll comment on what those AI statistics mean as well as add insights from some of the developers on the HatchWorks team who are part of our Generative-Driven Development—a method that has led to a 30-50% productivity increase for our clients.

What you’ll be left with is a clear overview of the state of Generative AI and its future.

The Current State of the Global AI Market

🗝 Key takeaway: AI is a growing industry, with projections showing an annual growth rate of 37.3% between now and 2030. It’s largely fueled by advancements in, and the adoption of, Generative AI.

AI tech and its potential has mesmerized the world for decades.

Film and TV have long projected a world where artificial intelligence is a facet of everyday life—to both sinister ends (I, Robot) and peaceful coexistence (The Jetsons).

We’re closer than ever to finding out just how well humans and artificial intelligence can live side by side. And as a whole, we’re investing big in its development.

In 2022, Precedence Research found the global artificial intelligence market was valued at USD 454.12 billion. It showed North America holding the biggest piece of the pie, with its AI market valuation hitting USD 167.30 billion.

In the image below you can see how much money is being invested into AI by geographic area.

Bar chart of AI private investment in 2022 by country, with the US and China leading.

And the AI market is only set to grow. In fact, McKinsey projects an economic impact of $6.1-7.9T annually.

Behind much of this growth, high valuation, and investment is the development and increased use of Generative AI.

Gartner reports that in the last 10 months, half of the 1,400+ organizations they surveyed have increased investment in Generative AI.

They’ve also found that 44% of organizations are piloting generative AI and 10% have put it into production, up from 15% and 4%, respectively, in March and April of 2023.

The rapid adoption of generative AI demonstrates its potential to revolutionize how we work, the skills we need, and the type of work we will do.

What’s driving our need for AI? It’s a mix of:

  • Increased Demand in Various Sectors: AI solutions are increasingly sought after in healthcare, finance, and retail. Check out our guide on use cases across various industries.
  • Advancements in Generative AI: Innovations in neural networks are propelling AI capabilities forward.
  • Big Data Availability: The rise in big data availability aids in training more sophisticated AI systems.
  • Complex Data Analysis: AI’s ability to analyze complex datasets is invaluable in numerous applications.
  • Digital Transformation and Remote Work: The shift towards remote work and digital operations has accelerated the adoption of AI technologies in business.

What Tools Are We Using? Core AI Technologies and Generative AI Systems

🗝 Key takeaway: With systems like ChatGPT, AlphaCode, and DALL-E 2 leveraging vast datasets, industries are witnessing a shift towards more intuitive, creative, and efficient processes.

Generative AI relies on core technologies like deep learning and neural networks.

These technologies empower AI systems to learn from vast datasets and generate new, original content. This capability extends across domains, from language processing to visual art creation, and code development. It’s changing how tasks are approached and executed on a daily basis.

Generative AI: A Brief Definition 📖

Generative AI refers to artificial intelligence systems that can create new content or data, which they were not explicitly programmed to produce.

These systems use advanced machine learning techniques, such as deep learning, neural networks, and transformer technology to analyze and learn from large datasets, and then generate original outputs.

This can include tasks like writing text, composing music, creating images or videos, and even generating new ideas or solutions.
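To make that definition concrete, here’s a minimal, illustrative sketch of text generation using the open-source Hugging Face Transformers library. The model and prompt below are arbitrary placeholders chosen for the example, not a specific recommendation:

```python
# pip install transformers torch
from transformers import pipeline

# Load a small pretrained language model for text generation
generator = pipeline("text-generation", model="gpt2")  # placeholder model choice

# The model continues the prompt with new text it was never explicitly
# programmed to produce, which is the "generative" behavior described above
output = generator(
    "Generative AI is reshaping software development because",
    max_new_tokens=40,
    num_return_sequences=1,
)
print(output[0]["generated_text"])
```

Swap in a larger model and the same few lines produce far more coherent, context-aware output, which is part of why these systems moved from research demos to everyday tools so quickly.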

Among the most notable tools leveraging generative AI is OpenAI’s ChatGPT, known for its ability to engage in human-like conversations and provide informative responses. It exemplifies the advanced natural language processing capabilities of these systems.

Here’s a list and description of other core Generative AI tools people across industries are adopting:

  • AlphaCode: An advanced tool designed for programming challenges, utilizing AI to write and optimize code.
  • Midjourney: Specializes in generating detailed and imaginative images from text prompts.
  • Copilot: Developed by GitHub, this AI system transforms natural language prompts into coding suggestions in various programming languages. It’s complemented by similar systems like OpenAI’s Codex and Salesforce’s CodeGen.
  • Katalon: A comprehensive tool for automated testing, integrating AI to enhance the efficiency and accuracy of software testing processes.
  • Amazon Bedrock: A robust AI platform by Amazon, designed to provide deep insights and analytics, supporting various AI applications and data processing tasks.
  • CodeGPT: A specialized AI tool for coding assistance, offering features like code completion and debugging suggestions based on Generative AI models.
  • Hugging Face: Known for its vast repository of pre-trained models, Hugging Face is a platform that facilitates AI development, especially in natural language processing.
  • Llama by Meta: An AI system developed by Meta, aimed at pushing the boundaries in various aspects of AI, including language understanding and generative tasks.
  • Make-A-Video: A revolutionary system that enables the creation of videos from concise text descriptions, opening new possibilities in media production.
  • AI Query: A tool designed for streamlining data analysis and simplifying complex data interactions using AI.
  • Bard: Google’s conversational AI, focused on content generation and assisting with writing and creative tasks.
  • DALL-E 2: OpenAI’s image generation AI, known for creating detailed and artistic images from textual descriptions.
  • Copy.ai: Aims at automating content creation, particularly in marketing and advertising, using AI to generate high-quality written content.
  • Murf.ai: Specializes in voice synthesis, enabling the creation of realistic and customizable AI-generated voices for various applications.

This list is truly the tip of the iceberg. Every day new tools are launched into the AI ecosystem.

Time will tell which of them become indispensable to the modern work landscape or who may fall into the deep abyss of almost forgotten memory—anyone remember Ask Jeeves? Or AIM? We do…just barely.

Developer Insights on Generative AI: How Is It Impacting Software Development?

🗝 Key takeaway: Generative AI is already a fixture in the work processes of forward-thinking software developers, with productivity data proving it’s a worthwhile addition to the industry.

A recent McKinsey report claims Software Engineering will be one of the functions most impacted by AI.

The data and lived experiences of developers back that claim up.

ThoughtWorks reports software developers can experience 10-30% productivity gains when using Generative AI.

GitHub, meanwhile, has run its own studies on developers’ use of Copilot and seen positive results on productivity and speed of task completion.

Across two studies (1 and 2) they’ve found developers who use Copilot are:

  • 55% faster in general
  • 88% more productive
  • 96% faster with repetitive tasks
  • 85% more confident in code quality
  • 15% faster at code reviews

At HatchWorks, our integration of AI has revolutionized our Generative-Driven Development™ process, resulting in a 30-50% productivity boost for our clients.

By utilizing these tools, our engineers have streamlined coding and minimized errors, fundamentally transforming our project delivery methods.

These advancements highlight the significant role of AI in enhancing efficiency and spurring innovation in our field.

To delve deeper into this transformative journey, HatchWorks’ engineers have shared with us their perception of Generative AI tools and how they’re using them to enhance their work.

Key Statistics and Trends in Generative AI

🗝 Key takeaway: The world is divided in its trust of AI but businesses are using it to fill shortages and increase productivity in the workplace.

We’ve covered the state of AI, highlighted some core tools and technologies, and talked specifically about how Generative AI is impacting Software Development.

Now we’re covering other key artificial intelligence statistics and trends that are defining the opinions, use, and impact of Generative AI.

Trend: Programming/Software Development is Seeing the Most Impact on Productivity

Stat: AI improves employee productivity by up to 66%.

Across 3 case studies by the Nielsen Norman Group:

  • Support agents who used AI could handle 13.8% more customer inquiries per hour.
  • Business professionals who used AI could write 59% more business documents per hour.
  • Programmers who used AI could code 126% more projects per week.

What it means: It’s not just one industry or function that stands to benefit from AI. It’s all of them.

AI tools likely assist in faster query resolution, provide automated responses for common questions, and offer real-time assistance to agents, thus reducing response times and increasing the number of inquiries handled.

They also can assist in tasks like data analysis, content generation, and automated formatting, enabling professionals to produce higher volumes of quality documents in less time.

In the case of programming, this leap in productivity could be attributed to AI’s ability to automate routine coding tasks, suggest code improvements, and provide debugging assistance, allowing programmers to focus on more complex and creative aspects of coding.
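As a hedged illustration of what “debugging assistance” can look like in practice, here’s a sketch that asks a general-purpose model to review a buggy snippet via OpenAI’s Python client. The model name, snippet, and prompt are placeholders; comparable assistants or IDE plugins work along the same lines:

```python
# pip install openai  (assumes the OPENAI_API_KEY environment variable is set)
from openai import OpenAI

client = OpenAI()

buggy_snippet = """
def average(numbers):
    return sum(numbers) / len(numbers)  # crashes on an empty list
"""

# Ask the model to act as a reviewer: spot the bug and suggest a fix
response = client.chat.completions.create(
    model="gpt-4",  # placeholder; use whichever model your team has access to
    messages=[
        {"role": "system", "content": "You are a careful code reviewer."},
        {"role": "user", "content": f"Find the bug and suggest a fix:\n{buggy_snippet}"},
    ],
)

print(response.choices[0].message.content)
```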

Trend: Adoption of Generative AI is Explosive

Stat: ChatGPT reached 100 million monthly active users within 2 months of launch, making it the fastest-growing consumer application in history.

What it means: Word of mouth marketing and an impressive display of the capabilities of Generative AI likely fueled such fast and widespread adoption.

It suggests we’re hungry for tools that optimize our work while reducing time and money spent elsewhere. It wasn’t a case of if we’d be adopting AI but rather a case of when and for what.

Even Bill Gates has been impressed by the capabilities of Generative AI. He recently wrote a piece titled The Age of AI has begun. In it he claims to have seen only two truly revolutionary advancements in his lifetime: the graphical user interface and ChatGPT.

He even wrote upon witnessing the capabilities of ChatGPT, ‘I knew I had just seen the most important advance in technology since the graphical user interface.’

So not only is the adoption of generative AI explosive in its numbers, it’s explosive in what it can do.

Trend: The East is Generally More Accepting of AI as a Benefit

Stat: In a 2022 IPSOS survey, 78% of Chinese respondents agreed with the statement that products and services using AI have more benefits than drawbacks.

Those from Saudi Arabia (76%) and India (71%) also felt the most positive about AI products. Only 35% of surveyed Americans agreed that products and services using AI had more benefits than drawbacks.

What it means: Notably, the US exhibits more skepticism towards Generative AI than other leading nations.

Earlier there was a stat showing Americans are privately investing the most in AI, followed by China. It’s interesting to see the countries that most trust and least trust AI are the ones investing the most heavily in it.

What comes of this could be reminiscent of the US’s space race with the former USSR. The biggest difference is that Generative AI is accessible to the world’s population in a way space technology never was (or likely will be).

And it prompts questions about whether AI technology is more or less dangerous in the hands of everyday people compared to governments. And whether American skepticism of the AI space is rooted in the potential for government overreach, foreign interference, job security, or how autonomous AI thought can become.

Trend: Trust in AI is Divided Among Those with Geographic and Demographic Differences

Stat: Another survey shows that 3 out of 5 people (61%) are wary about trusting AI systems, reporting either ambivalence or an unwillingness to trust. The survey looked at geographical location as well as generational divides; this time India came out on top and Finland at the bottom.

When we break it down by generation and education we see the young are more trusting of AI as are the higher educated. Those in manager roles are also more trusting.

Bar chart comparing trust and acceptance of AI by age group and education level.

What it means: Younger people are typically more accepting of advancement in technology than their older counterparts. This stat is thus unsurprising. It’s also unsurprising that managers see the value of AI given their job is to make their teams and departments more efficient and productive. AI is a proven way of doing so.

Trend: Generative AI is Being Used to Fix Labor Shortages

Stat: 25% of surveyed companies are turning to AI adoption to address labor shortage issues, according to a 2022 IBM report.

What it means: The fact that companies are turning to AI in response to labor shortages suggests that AI is increasingly seen as capable of filling gaps left by human workers. This could be in areas like data processing, customer service (through chatbots), and even more complex tasks that require learning and adaptation.

To learn more, watch or listen to Episode 11 of the Built Right Podcast, How Generative AI Will Impact the Developer Shortage.

Trend: Businesses Believe in Generative AI’s Ability to Boost Productivity

Stat: A significant 64% of businesses believe that artificial intelligence will help increase their overall productivity, as revealed in a Forbes Advisor survey.

What it means: The belief in AI’s role in increasing productivity suggests businesses see AI as a tool for driving growth. This may involve automating routine tasks, optimizing workflows, or providing insights that lead to more informed decision-making.

This statistic also reflects a response to the rapidly changing market demands and the need for businesses to remain competitive. AI adoption can be a key differentiator in responding quickly to market changes and customer needs.

Still, businesses should watch that human contribution and value aren’t overlooked to their own detriment. Sometimes it’s our humanity that is our best differentiator, and businesses should be wary of passing on too much, too quickly, to our AI sidekicks.

The Impact of Generative AI on Employment and Skill Development

🗝 Key takeaway: AI isn’t replacing our need for human intelligence; it’s freeing human intelligence up for other work, which puts the onus on us to upskill in the use of AI.

The emergence and growth of generative AI are shaping job markets and skill requirements and will have significant implications for employment and workforce development over the coming years.

Job Market Dynamics:

Increase in Gen. AI-Related Job Postings: A notable trend is the increase in Generative AI-related job postings. LinkedIn reports that job postings on the platform mentioning GPT or ChatGPT have increased 21x since November 2022, when OpenAI first released its AI chatbot into the world.

Job Creation vs. Displacement: A McKinsey report forecasts that AI advancements could affect around 15% of the global workforce between 2016 and 2030. This statistic encompasses both job displacement due to automation and the creation of new jobs requiring AI expertise.

Skill Development and Educational Trends:

Evolving Skill Requirements: With AI’s growing integration across industries, the skill requirements for many jobs are evolving. There’s an increasing need for AI literacy and the ability to work alongside AI systems, as evidenced by the earlier stat showing a rise in AI-related job postings.

Educational Response: In response, educational institutions are adjusting curricula and offering specialized training in AI and related fields. They’re also finding ways to introduce AI as a tool the teachers and students use. This shift aims to prepare the upcoming workforce for a future where AI plays a central role in many professions.

Ethical Considerations and Regulatory Landscape

🗝 Key takeaway: The recent advancements in AI have made us all question its use and regulation. Governments are finding ways to control it while encouraging its use to advance the world.

The use of AI raises a range of ethical considerations, including concerns about its accuracy, the extent of its capabilities, potential misuse for nefarious purposes, and environmental impacts. And with ethical considerations come questions over how we’ll regulate its use.

Let’s look at how public opinion and emerging research highlight the complexities and challenges in this rapidly evolving field.

Incidents and Controversies:

The number of AI-related incidents and controversies has surged, increasing 26x since 2012.

Additionally, the number of accepted submissions to FAccT, a leading AI ethics conference, has more than doubled since 2021 and increased by a factor of 10 since 2018.

Notable incidents in 2022 included a deep fake video of Ukrainian President Volodymyr Zelenskyy and the use of call-monitoring technology in U.S. prisons. This trend highlights both the expanding use of AI and the awareness of its potential for misuse.

Interestingly, it’s those who use tools like ChatGPT most often who lose their sense of skepticism about its accuracy. Ben Evans gave a talk on Generative AI and showed the following slide:

Chart on misconceptions about AI accuracy based on user awareness and experience.

The data from Deloitte shows that use correlates to trust.

Challenges in Reliability and Bias:

Generative AI systems are prone to errors, such as producing incoherent or untrue responses, which raises concerns about their reliability in critical applications.

Issues like gender bias in text-to-image generators and the manipulation of chatbots like ChatGPT for harmful purposes underscore the need for cautious and responsible AI development.

Environmental Impact:

AI’s environmental impact is a growing concern. For instance, the training run of the BLOOM AI model emitted 25 times more carbon than a single air traveler on a one-way trip from New York to San Francisco.

However, AI also offers environmental solutions, such as new reinforcement learning models like BCOOLER, which optimize energy usage.

Public Expectation for Regulation:

A substantial 71% of people expect AI to be regulated.

This sentiment is widespread, with the majority in almost every country, except India, viewing regulation as necessary. This reflects growing concerns about the impact and potential misuse of AI technologies.

In fact, President Biden has already “signed an ambitious executive order on artificial intelligence that seeks to balance the needs of cutting-edge technology companies with national security and consumer rights, creating an early set of guardrails that could be fortified by legislation and global agreements.”

Source: AP News

Looking Forward: Where Is Generative AI Going Next?

Despite the ethical and regulatory considerations outlined earlier, the future of Generative AI appears promising from a growth perspective:

  • Goldman Sachs predicts Generative AI will raise global GDP by 7% ($7T).
  • McKinsey projects an economic impact of $6.1 – $7.9T annually
  • Precedence Research believes the AI Market size will hit around USD 2,575.16 billion by 2032.
Bar graph of AI market growth projection from 2022 to 2032 in billions USD.

At HatchWorks we’re most focused on the future of AI as it relates to software development.

We expect the use of AI will only advance over time with further improvements to developer productivity, new use cases for how developers use AI to assist software development, and an evolution in the skills and capabilities businesses hire for (internally and through outsourcing).

And we expect that because we’ve already witnessed it firsthand among our own developers:

Further reading: Generative AI Use Case Trends Across Industries: A Strategic Report

We’ll continue to optimize our approach and inclusion of these AI tools in our processes and equip our Nearshore developers with the education and resources they need to be efficient with them.

If you want to learn more about how our Generative-Driven Development™ services have led to a 30-50% productivity increase for our clients, get in touch here.

Built Right, Delivered Fast

Start your project in as little as two weeks and cut your software development costs in half.

24 Nearshore Software Development Statistics to Know in 2024
https://hatchworks.com/blog/nearshore-development/nearshore-software-development-statistics/ (Tue, 02 Jan 2024)


Nearshore software development has become a sought-after solution for companies looking to optimize their outsourcing strategy.

In this blog, we delve into 24 key statistics about the nearshore software outsourcing market, the cost savings and performance benefits, the quality of the workforce, and the management processes in place.

An infographic titled "24 Nearshore Software Development Statistics to Know in 2024" by Hatchworks.

Whether you’re already using nearshore or considering it as an option, these statistics will give you a deeper understanding of the nearshore landscape and how it can benefit your business.

These statistics are organized into four categories:

Market Growth and Trends

Two professionals analyzing financial data with documents and a laptop displaying graphs.

🔑 Key Takeaway: The demand for software development services is increasing, with a notable shift towards nearshore software development in North America and Latin America emerging as a key destination. Growth forecasts for the global outsourcing market emphasize the rising importance of regional stability and cultural alignment in nearshore software development.

📈 1. Software development accounts for 64% of outsourced services worldwide.

Source: Statista

There continues to be significant demand for software development services in the global market. With the growing reliance on technology in various industries, companies are increasingly outsourcing as a cost-effective and efficient solution for their various software development services needs. This trend shows no signs of slowing down.

📈 2. 80% of companies in North America are actively considering nearshore.

Source: Bloomberg

There’s been a shift in outsourcing strategies. Businesses want more efficient and cost-effective solutions. A competitive global marketplace will make using the right nearshore partner an increasingly important consideration for companies in North America. Here are some simple reasons to consider Nearshore for Software Development.

📈 3. 21% of small businesses outsourcing intend to hire a nearshore company, up from 15% the previous year.

Source: Clutch

Businesses are embracing nearshore development teams, a testament to their potential to boost the competitiveness and success of small businesses. With the right partner, nearshore outsourcing can help small businesses achieve their goals and compete with larger companies.

📈 4. US-based companies seek nearshore talent because of a close workday overlap, strong cultural fit, and high English language proficiency.

Source: Accelerance

Our guide to Nearshore covers even more benefits of nearshore and how they compare to onshore and offshore. Give it a read to learn which specific projects are best suited for nearshore development.

📈 5. Nearshore outsourcing will add $78B to the export sector in Latin America after 2023.

Source: IADB

It’s time to think outside the (geographical) box for your next project and tap into Latin America’s thriving nearshore outsourcing scene while you still can! The region is poised to become a hot spot for software development outsourcing.

Businesses are eager to:

  • Access top-notch talent

  • Benefit from cost-effective solutions

  • Support growth with flexible scaling options

  • Streamline processes and speed up time-to-market

📈 6. Some of the top ten destinations for nearshore software development companies are Peru, Colombia, Brazil, and Costa Rica.

Source: The Wall Street Journal

For a comprehensive overview of the key factors you should consider when choosing a nearshore outsourcing destination, including the availability of talent, language proficiency, time zones, cultural compatibility, and cost, check out our guide to the top nearshore destinations in Latin America.

📈 7. Experts project the global outsourcing market to grow by $40.16 billion by 2025.

Source: ReportLinker

Regional stability and security are top concerns in business partnerships. As developers remain in high demand, company culture and attrition rates will be increasingly important for nearshore software development providers. For more information on how to select the right nearshore team outsourcing partner, check out 5 Tips to Help You Select the Right Nearshore Development Partner.

📈 8. In 2022, North American companies hired 70% more remote workers from South America.

Source: Lightcast

In 2022, there was a notable 70% jump in the number of South American remote workers hired by North American companies. This uptick is part of a broader movement towards global remote employment, spurred by Latin America’s rich pool of tech talent. The tech industry, in particular, is seeing a significant increase in contributions from Latin American professionals to North American ventures. This shift is reshaping the way the global workforce operates, enhancing international cooperation and driving economic growth.

Cost and Savings

A conceptual image showing piggy banks surrounding a calculator with the display reading "SAVING".

🔑 Key Takeaway: Cost reduction is a leading driver of nearshore adoption. Most IT businesses cite cutting costs as a reason to consider nearshoring, and average nearshore rates run well below onshore rates, without sacrificing quality or time zone alignment.

💰 9. 87% of IT businesses considered nearshore outsourcing to cut costs.

Source: Deloitte

During a recession, there is increased pressure to reduce costs and maintain competitiveness. This makes nearshoring a more appealing option. By outsourcing software development to nearby countries with lower labor costs, you can lower your expenses without sacrificing efficiency or quality (location doesn’t matter, talent does!). Nearshoring is a smart choice for companies that want to weather the economic storm and emerge stronger on the other side.

💰 10. Rates surged by 24% in Latin America, due to escalating labor costs and an influx of clients exiting partnerships in Ukraine.

Source: Accelerance

It’s clear that businesses are seeking alternative solutions for their software development needs as a result of the conflict in Ukraine. Nearshoring is a hot commodity and a more competitive marketplace than a few years ago.

💰 11. 59% of companies choose nearshore software development as a cost-cutting tool.

Source: Deloitte

A majority of companies are turning to nearshore software development as a way to reduce costs, including travel and communication expenses. Nearshoring helps free up internal resources and allows the company to concentrate on its core competencies.

Additionally, nearshoring can solve capacity issues, allowing companies to quickly ramp up their software development projects or efforts without having to make significant investments in infrastructure or staffing. This can be especially useful for companies facing rapid growth or changing market conditions.

💰 12. Average nearshore software development rates are 46% lower than onshore rates.

Source: HatchWorks

You read that right – nearshore offers a staggering 46% lower hourly rate compared to onshore!

This kind of cost savings is simply too good to ignore. And while offshore development may be more competitive on the lower end of the spectrum, there is almost no difference on the high end.

So, why settle for anything less when you can get the best of both worlds with nearshore?

Did I mention nearshore developers work in your time zone?

PS. Curious about how much you could save with nearshore development for your specific project? Use our Nearshore Budget Calculator to build your own team and discover the cost benefits for yourself. Get a tailored estimate that aligns with your project’s unique requirements and see the savings firsthand. Try it now!

Project Performance and Quality

Two women collaborating and writing on a glass wall with markers, with a focus on the woman in the foreground smiling.

🔑 Key Takeaway: Nearshore outsourcing is chosen for efficiency and expert access, with businesses reporting satisfaction with the financial and quality outcomes. Positive relationships with nearshore developers contribute to enhanced project performance and quality.

🌟 13. 24% of small businesses outsource to increase efficiency and 18% outsource to work with experts.

Source: Clutch

Project performance and quality are critical factors for success, and nearshoring can provide a boost in both areas. When businesses incorporate nearshore, they gain access to a highly skilled workforce that can help them achieve their goals faster and more efficiently. This highlights the trend of nearshoring as a strategy to achieve growth and compete.

🌟 14. Nearly 59% of businesses are pleased with the financial benefits provided by outsourcing.

Source: Zippia

Before outsourcing, it’s important to weigh the pros and cons. Outsourcing saves costs and improves efficiency, but also comes with project management and coordination challenges.

To avoid these risks, companies must have clear communication and collaboration plans. Nearshoring can minimize these challenges and improve the process by taking advantage of cultural similarities and a compatible time zone.

This leads to a more seamless integration of outsourcing into their operations and a more efficient and effective overall development process. Learn more in our brief guide, How to Select the Right Outsourced Development Team.

🌟 15. Over 75% of businesses report a positive attitude towards their outsourcing partners.

Source: Zippia

Having the right nearshore development partner can lead to improved project performance, a higher level of quality in the work produced, and a more stable and long-term working relationship. 

By following the tips outlined in our blog post, 5 Tips to Help You Select the Right Nearshore Development Partner, companies can increase their chances of finding the right nearshore development partner and establishing a positive and productive outsourcing relationship.

Workforce and Skills

A person working on a computer with code and data on the screen, possibly engaging in software development.

🔑 Key Takeaway: The challenge of talent acquisition and the doubled demand for software engineers highlight the critical need for skilled nearshore software developers. Latin American developers are highly rated for their skills and English proficiency, with an increasing focus on roles like Software Developer, Graphic Designer, and Product Manager.

👩🏻‍💻 16. 50% of executives identify talent acquisition as a top internal challenge in meeting their organization’s strategic priorities.

Source: Deloitte

Having a strong and dedicated workforce is essential for the success of any organization. A shortage of qualified employees can lead to delays in projects, reduced productivity, and lower levels of innovation. Furthermore, a lack of talented employees can make it more difficult for companies to adapt to changing market conditions and meet customer demands. Companies need to have effective strategies in place to attract and retain top talent.

👩🏻‍💻 17. The demand for software engineers has doubled since 2020.

Source: Hired

The exponential increase in demand for software engineers shows that technology is an essential driver of business growth and innovation. Investing in software development and modernization is required to stay ahead of the competition and meet demand. Our blog offers more insights into the ongoing Software Developer Shortage.

👩🏻‍💻 18. Software developers from Latin America are among the top-rated globally.

Source: HackerRank

This is no surprise! Software developers from Latin America possess strong technical expertise and dedication to delivering quality results. They’re educated at well-respected universities, producing a large pool of truly talented and skilled individuals. A strong work ethic and commitment to meeting the needs of clients have made LatAm one of the top software development regions globally.

👩🏻‍💻 19. Latin American countries consistently rank moderate to high on the English Proficiency Index.

Source: Statista

High levels of English proficiency ensure effective communication, cultural alignment, and aligned workplace expectations. This also helps facilitate better collaboration between development teams and more efficient delivery of software solutions.

👩🏻‍💻 20. English speakers in Latin American countries have the fastest rate of improvement in the world.

Source: Education First

Central and South America are on a roll in the tech world! With an increase in English proficiency, the region is keeping pace with the latest advancements in software development. Government investments in STEM education and technology startups are paying off.

👩🏻‍💻 21. Demand for tech talent for product and design roles is shifting from the US to countries like Argentina.

Source: Deel

Organizations are looking for more than just engineering by moving more product design and UX / UI capabilities abroad too. Working in a distributed environment is the new normal. Aligning by time zone is now the most important factor in having a productive Agile team that can share the day’s work.

👩🏻‍💻 22. The top three most popular roles in Latin America are Software Developer, Graphic Designer, and Product Manager.

Source: Deel

The Latin American job market is booming for tech and creative talent. The increasing importance of technology and visual communication is driving the demand for software development and graphic design expertise. Meanwhile, the growing focus on product strategy and customer experience is reflected in the popularity of Product Manager roles.

👩🏻‍💻 23. Full stack engineers saw the highest increase in interview requests.

Source: Hired

This talent shortage is no surprise! Companies value problem solvers. Full stack engineers can tackle a wide range of software development challenges. Their versatility and understanding of front-end and back-end development are an excellent addition to any well-rounded software development team.

👩🏻‍💻 24. The most used programming languages among developers worldwide are JavaScript, HTML/CSS, SQL, and Python.

Source: Statista

Knowing these top programming languages gives developers a competitive edge. These languages allow you to work on a wide range of projects and stay relevant in the constantly changing tech industry.

Nearshore Software Development Destinations: Exploring Latin America

An aerial view of Bogotá, Colombia showcasing dense urban architecture with numerous high-rise buildings.
When considering nearshore software development destinations, Latin America stands out as a region of burgeoning talent and opportunity.

The blend of cultural alignment, similar time zones, and a highly skilled pool of software engineers makes it an attractive option for businesses looking to outsource their development needs.

To dive deeper into why Latin America is becoming a hotspot for nearshore development, and to explore the specific benefits and countries leading this trend, check out our detailed analysis in our blog post: Top Latin American Countries for Nearshore Software Development [2024].

Final Thoughts

🔑 Key Takeaway: With a growing market, cost savings, high-quality projects, and top talent in your time zone, nearshore is a compelling option for companies outsourcing their software development.

These 24 statistics provide a snapshot of Nearshore and the benefits companies can expect. Whether you’re a startup looking to build your first product or an established company looking to reduce costs and improve time-to-market, Nearshore software development can help you reach your goals.

Hatchers work all across Latin America.

Hatchworks: Your US-Based Nearshore Software Development Partner

HatchWorks is a US-based Nearshore software development partner that combines Generative-Driven Development with the affordability and scale of Nearshore outsourcing, all in your time zone.

Our teams are fluent in English and have a 98.5% retention rate—meaning your project won’t be interrupted or delayed.

After a 5-step screening process, we fit you with the right talent for the job and build your digital products one of three ways: through Staff Augmentation, Dedicated Agile Teams, or Outcome-Based Projects (a full service software development solution).

Getting Started with HatchWorks Is Easy

Start your project in as little as two weeks and cut your software development costs in half.

Top Latin American Countries for Nearshore Software Development [2024]
https://hatchworks.com/blog/nearshore-development/nearshore-latin-america/ (Tue, 19 Dec 2023)


Latin American countries are popular for software development outsourcing due to their proximity to the United States, their strong pool of talented engineering professionals, and their favorable time zones.

The software outsourcing services industry in LatAm has been growing significantly in recent years, and the region is home to many globally recognized tech companies and startups.

Infographic titled "Top Latin American Countries for Nearshore Software Development in 2024.

In addition to its technical capabilities, Latin America has a rich cultural heritage and a strong tradition of entrepreneurship. Many Latin American countries have business-friendly policies and favorable climates for investment.

The cost of living and doing business in many Latin American countries is also lower than in the United States and Europe, making it an attractive option to save on development costs without compromising on quality and time to value.

Why do companies outsource to Latin America?

Think of nearshoring as the perfect date and offshoring as a long-distance relationship.

Aerial view of downtown Bogotá, Colombia, with high-rise buildings.

Imagine finding the perfect date – the right partner who shares your interests and goals, has a strong understanding of your needs, and is always there to support you, in your time zone.

This is what nearshore is like for companies. It provides a level of proximity and cultural affinity that allows for efficient and effective collaboration. Plus, you’ll work with talented developers who have a strong understanding of the local job market.

On the other hand, offshore software development feels like a long-distance relationship – it may seem attractive due to lower labor costs, but it often comes with its own set of challenges such as language barriers and time differences. While it may work for some, maintaining a strong working relationship requires a lot of effort, patience, and some sleepless nights.

Want to dive deeper into the differences between these two outsourcing market options? Read our comprehensive comparison of Nearshore vs. Offshore software outsourcing.

In this blog, we’ll explore specific outsourcing destinations and why they could be the perfect match for the needs of North American companies.

Popular destinations for Nearshore: Latin America

Some popular Latin American countries for Nearshore include:

The flag of Costa Rica.

Costa Rica

Costa Rica is a small but developed country in Central America with a growing software development industry, especially strong in custom software development. Its favorable time zone (same as Eastern US) and cultural compatibility with the US enable easy communication and collaboration between teams and client companies.

The cost of living and doing business is generally lower than in the United States, which is advantageous to save on development costs. The country’s strong infrastructure and stable political environment make it attractive for foreign investment. Despite recent challenges posed by rising energy and food costs and tight financing options, the country’s GDP (Gross Domestic Product) is projected to reach a growth rate of 3.2% in 2024.

The IT industry is supported by several universities and research institutions, as well as many business incubators and accelerators. The workforce is highly educated with a high percentage of degree holders. The country ranks fifth in Latin America on Coursera’s 2022 Global Skills Report, which measures proficiency in business, technology, and data science.

Its strong tradition of entrepreneurship and innovation culture has helped create many successful tech startups. The country has a business-friendly environment and is home to a number of multinational corporations that have established software development centers, including Intel, HP, and IBM. These companies helped Costa Rica become a hub for software development in LatAm, with cities like San José and Heredia setting the standard for supportive business environments.

The flag of Brazil.

Brazil

Brazil, the largest and most populous country in South America, is making its mark as a significant player in the global arena. It boasts a robust software industry with over 6,000+ software development companies and a software market recently valued at $11.3 billion. Brazilian software developers are known for their expertise in Java and .Net, making them sought after by foreign companies in need of specialized skills.

Thanks to favorable time zones that align with the US, companies can enjoy seamless communication and collaboration with Brazilian teams during regular business hours. This, along with its cultural compatibility, makes it a top destination, though Brazil stands apart from its neighbors in terms of language and cultural similarities. Portuguese is the official language, while Spanish is dominant throughout the rest of Latin America.

Brazil has a robust supply of skilled technology professionals, with over 200,000 STEM students graduating annually from its institutes. Sao Paulo, Rio de Janeiro, and Belo Horizonte are among the most prominent tech hubs in Brazil, known for their thriving startup ecosystems and supportive business environments. Some of the biggest names in the global tech industry, including IBM, HP, and Capgemini, have established a strong presence in Brazil.

The flag of Colombia.

Colombia

Colombia has a strong ecosystem of universities, research institutions, and a government that supports Latin America software outsourcing to businesses worldwide. The Colombian Ministry of Information and Communication Technologies (MinTiC), is actively working to curb an impending talent crisis in the outsourcing software development industry by implementing various initiatives to encourage the growth and development of the country’s tech workforce and attract more talent to the industry. These efforts could prevent a projected shortage of over 100,000 developers by 2025.

While nearly all Colombians speak Spanish, English is an official language in parts of the country like the San Andrés, Providencia, and Santa Catalina Islands.

Colombia is a rising major player in the global outsourcing market and the Latin American IT market, ranking fourth in size after countries like Brazil. Colombia has a number of successful tech startups and globally recognized tech companies, including MercadoLibre and Acelero, which have established software development centers and business operations in the country.

Bogotá, Medellín, and Cali are rapidly emerging as key tech hubs in Colombia and South America at large, with significant growth in the technology sector. These cities are attracting a growing number of tech startups, multinational companies, and skilled software engineers and developers, due to their favorable business climates, supportive government policies, and thriving tech communities.

The flag of Argentina.

Argentina

Argentina is known for its strong culture of innovation and entrepreneurship, and there is a vibrant startup ecosystem in the country. With a GDP of nearly $500 billion, Argentina is one of the largest economies in Latin America and South America.

The country has a highly educated workforce. Argentina ranked 34th in a list of global education rankings.

Argentina has a well-developed software development industry and a strong reputation for producing high-quality software professionals. Their long tradition of excellence in computer science and software engineering has paid off as revenues are expected to rise to over $2.7 billion by 2026.

The country is home to a number of successful tech startups and globally recognized tech companies, including MercadoLibre and Despegar. There are also a number of business incubators and accelerators that provide resources and support to entrepreneurs. The Aceleradoras BA Emprende, a Buenos Aires initiative that co-finances high-impact ventures to encourage growth, has invested over $3.5 million in local entrepreneurs.

The flag of Uruguay.

Uruguay

This nation of 3.4 million has made a name for itself in the software outsourcing industry, with over 1,000 software development companies driving nearly $1 billion in software exports annually, mostly to the US. Its high per capita software export figures have cemented its position as one of the world’s premier software exporting nations.

Uruguay has a number of universities focused on software development and related technologies. The country has a highly educated workforce and a business-friendly environment. On a global ranking assessing a country’s technology infrastructure, technology adoption, and investment from both businesses and governments to determine its digital readiness, Uruguay ranked second among all Latin American countries.

It is also home to a number of successful tech startups, including MercadoLibre, dLocal, and Ona. According to the European Center for Digital Competitiveness report from 2021, Montevideo, the capital of Uruguay, has been named the world’s second-fastest-growing city in the Fintech sector.

The flag of Peru.

Peru

Peru is a top-notch software outsourcing destination, ranking among the best in Latin America. Its economy is surging with a 3.5% year-over-year growth in the first half of 2022, a testament to its resilience and post-pandemic recovery.

Peru also has a couple of top-ranked universities and a thriving startup culture. Peruvian developers are highly skilled and educated, with over 25% holding STEM degrees, the highest in Latin America. When it comes to ReactJS and UX/UI design, Peru is home to some of the most talented developers and designers in the world.

The Peruvian government is investing in technology development through initiatives such as ProInnóvate and Startup Peru. The country is home to many multinational tech companies like IBM, Amazon, and Microsoft, which have established offices due to the top talent available.

The flag of Mexico.

Mexico

Mexico boasts a vast developer talent pool, with over 225,000 developers making their mark in the tech industry. This impressive number is a testament to the country’s commitment to fostering tech talent. In 2023, projections indicate that the revenue for the software industry in Mexico will reach a staggering $3.9 billion, showcasing its rapid growth in the tech sector.

The country has firmly established itself as a leading tech innovation hub. With over 20 tech parks, including renowned ones like the Creative Digital City and Guadalajara Software Center, Mexico is at the forefront of technological innovation. By the end of 2022, the startup scene in Mexico witnessed the inception of 453 new startups, and this number is expected to rise further in 2023.

Recent initiatives have seen the Mexican government opening 120 tuition-free tech universities. This move further bolsters its tech talent pool, ensuring a steady influx of skilled professionals. Every year, the country celebrates the graduation of over 130,000 engineers, further solidifying its position as a tech powerhouse.

The flag of Chile.

Chile

Chile, especially its capital Santiago, is emerging as a significant technology hub in Latin America. Often referred to as “Chilecon Valley”, the country is a hotspot for elite developer talent. With a robust developer community, Chile is home to over 61,000 software developers and more than 4,000 software development companies.

In 2022, the software industry in Chile made headlines by generating a revenue of $1.4 billion. This achievement reflects the country’s dedication to technological advancement and innovation. Chile’s high ranking on the Global Innovation Index, securing the 53rd position out of 132 economies in 2021, is a testament to its innovative spirit.

The country is also renowned for its quality tech education. Several Chilean universities rank among the top 500 globally, emphasizing its focus on science and engineering. With nearly 25% of Chilean graduates earning degrees in these fields, the future of tech in Chile looks promising. Additionally, Chile’s proficiency in the English language ensures smooth communication, making it a preferred destination for nearshore outsourcing and software development.

Closing Thoughts

As the global demand for software development continues to grow, nearshore has emerged as a popular option for US businesses. By choosing the right nearshore software development partner with a strong pool of talented developers in these countries, companies can take advantage of the many benefits of nearshore development while minimizing the challenges and mitigating risks. To learn more, read our essential guide.

Hatchers work all across Latin America.

HatchWorks is the Right Nearshore Partner

Partner with HatchWorks for top-tier outsourcing services, where you’re not just getting a provider but a true partner. Experience the best of both worlds: a US-based solutions practice combined with the affordability and scale of Nearshore.

Our team not only understands your needs and speaks your language but also operates in a similar time zone for seamless collaboration.

Our rigorous 5-step screening process ensures only the finest talent for your project, contributing to our impressive 98.5% retention rate.

By choosing to outsource to Latin America, you benefit from lower labor costs compared to local rates, without compromising on quality or efficiency.

As one of the leading outsourcing destinations, Latin America offers a wealth of opportunities for your business. Whether it’s software development, customer service, or any other outsourcing need, HatchWorks stands ready.

Ready to harness the transformative power of Nearshore outsourcing? Contact HatchWorks today and elevate your project from concept to reality.

Hatchworks: Your US-Based Nearshore Software Development Partner

Start your project in as little as two weeks and cut your software development costs in half.

2024’s Comprehensive Guide to Generative AI: Techniques, Tools & Trends
https://hatchworks.com/blog/software-development/generative-ai/ (Tue, 19 Dec 2023)


Major tech companies like Microsoft, Google, Coca-Cola, and Spotify are championing AI, integrating it into various aspects of their businesses, from content generation to product innovation.

This groundbreaking technology is reshaping traditional workflows, enabling unprecedented levels of innovation and efficiency across a diverse range of sectors.

In this guide, we’ll introduce you to the burgeoning world of generative AI. We’ll explore its capabilities, dive into its many applications and use cases, and share tips on making it a seamless part of your projects. Plus, we’ll tackle the ethical and security challenges that come with this groundbreaking technology and provide insights on responsible AI deployment.

A cover for Hatchworks' guide on "2024 Generative AI Techniques, Tools, and Trends".

Generative AI is transforming industries and redefining how we create and build products, as evidenced by the projected growth of the AI market to an astounding $110.8 billion by 2030. 

At HatchWorks, we embrace new technologies to deliver top-notch custom software development services. That’s why we’re harnessing generative AI to build digital products that surpass customer expectations and redefine the future of digital product development.

Are you ready to unlock the potential of generative AI? Let’s dive in!

Exploring generative AI algorithms

Artificial intelligence has come a long way in recent years, with advances in deep learning propelling generative AI adoption at unprecedented rates. For example, ChatGPT, an OpenAI language marvel, impressively hit 1 million users in just 5 days, while its sibling, DALL-E, which generates images, reached the same milestone in a mere 2.5 months.

In comparison, other innovative products outside the AI category took significantly longer to gain traction. Facebook, for instance, reached 1 million users in 10 months, and it took Netflix 3.5 years to achieve the same milestone.

A chart showing the adoption rate of three AI tools, ChatGPT, DALL-E, and GitHub CoPilot, over time. The chart displays the percentage of users adopting each tool, with ChatGPT having the fastest adoption rate.

At its core, generative AI is powered by deep learning algorithms that analyze vast amounts of data to make predictions, generate content, and even create new data.

Let’s dive into some of the most influential algorithms and see how they’re shaping the future of digital innovation.

Deep learning

One of the most striking examples of deep learning’s influence on generative AI is natural language text generation. By processing and understanding the structure, syntax, and semantics of human language, these advanced algorithms generate coherent, contextually appropriate, and sometimes creative text that seems to have been written by a human.

This ChatGPT meme, featuring Will Smith from the movie I, Robot, humorously pokes fun at the challenge of creating truly original content.

Take ChatGPT, for instance. This large language model is a prime illustration of deep learning’s potential in crafting human-like text. Its rapid adoption showcases the incredible demand for AI tools that can seamlessly interact, communicate, and generate content with an increasingly human-like touch, revolutionizing the way we work, learn, and connect with one another.

Moreover, ChatGPT is transforming our relationship with search engines, as it fosters more declarative and conversational interactions, making the process of seeking information more intuitive, efficient, and engaging.

OpenAI’s GPT-4 has made remarkable improvements over its predecessor, GPT-3.5, scoring higher on nearly every academic and professional exam and reaching roughly the 90th percentile on the bar exam. Additionally, GPT-4 can now accept images as inputs, expanding its potential applications.

Another example is the recent formation of Google DeepMind, which brings the Google Brain and DeepMind teams together to responsibly accelerate AI development. The combined unit is set to take on the toughest scientific and engineering obstacles while paving the way for AI to revolutionize industries and propel science forward.

Reinforcement learning

Taking a step further, reinforcement learning brings another dimension to generative AI. This approach involves training algorithms through trial and error, allowing them to learn from their mistakes and improve their performance over time.

Reinforcement learning has found numerous applications in generative AI across various industries, unlocking innovative possibilities and transforming how we approach problems.

"These models have seen so much data… that by the time that they're applied to small tasks, they can drastically outperform a model that was only trained on just a few data points."

The AI toolbox

When it comes to selecting the right algorithm for a specific use case, it’s essential to consider the strengths and weaknesses of various AI tools.

Some popular generative AI algorithms include Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and Transformer models like GPT-4.

  • GANs excel at generating realistic images and can be used for tasks like image-to-image translation and generating artwork.
  • VAEs, on the other hand, are particularly well-suited for data compression and can be applied in areas like anomaly detection and image denoising.
  • Transformer models have been a game-changer for natural language processing, powering state-of-the-art text generation, translation, and summarization systems.
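
To make those trade-offs concrete, here is a minimal, illustrative GAN training loop in PyTorch. It is a sketch under simplifying assumptions, not a production recipe: instead of images, the generator learns to imitate points drawn from a simple 2-D distribution, and the layer sizes, learning rates, and step count are arbitrary placeholders.

import torch
import torch.nn as nn

# Toy "real" data: points from a 2-D Gaussian the generator must learn to imitate.
def real_batch(n):
    return torch.randn(n, 2) * 0.5 + torch.tensor([2.0, 2.0])

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    # Train the discriminator to tell real samples from generated ones.
    noise = torch.randn(64, 8)
    fake = generator(noise).detach()  # detach so this pass does not update the generator
    real = real_batch(64)
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake), torch.zeros(64, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Train the generator to fool the discriminator.
    noise = torch.randn(64, 8)
    g_loss = loss_fn(discriminator(generator(noise)), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

print(generator(torch.randn(5, 8)))  # samples should cluster near (2, 2) after training

The important part is the back-and-forth: the discriminator learns to separate real samples from generated ones, and the generator learns to fool it. That adversarial loop is what makes GANs so effective at producing realistic outputs.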

Armed with the knowledge of these algorithms, you’re ready to explore their creative applications and unleash their potential.

Unleashing creativity with generative AI

All across various domains, generative AI is sparking a creative revolution. 

Music generation

While it’s unlikely to replace human creativity entirely, generative AI is making waves in the music composition world. It serves as a powerful tool for enhancing the creative process. By generating unique melodies, harmonies, and rhythms that adhere to given text descriptions, AI models like MusicLM inspire musicians to explore new ideas and push the boundaries of their art.

Take, for example, the recent news of a trending song called “Heart on My Sleeve,” written and produced by TikTok user ghostwriter977. The vocals for the song were generated by artificial intelligence and made to sound like Canadian musicians Drake and The Weeknd.

Despite its growing popularity, Universal Music Group (UMG) requested the removal of the song from various music platforms and called for a block on AI using copyrighted songs for training purposes. This incident highlights the ongoing debate surrounding the ethical and legal implications of AI-generated content in creative industries.

Text generation

Language models like GPT and BERT are revolutionizing content creation and automation. With the power of Natural Language Processing (NLP) techniques, AI models can generate coherent and contextually relevant text for a wide range of applications.

Text prompts can be used as inputs to guide AI-generated text, ensuring the output aligns with desired context and themes. This technology is not only automating content creation but also helping writers overcome writer’s block and enrich their writing.

These models can even be prompted to generate code. AI-generated code snippets and templates are streamlining the development process for companies, allowing them to more rapidly prototype and build high-quality software solutions for their clients.
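
As a rough illustration of that prompt-to-code workflow, the sketch below asks a hosted language model to draft a small function. It assumes the openai Python package (version 1.x) and an API key in your environment; the exact client interface varies between SDK versions and providers, so treat this as an outline rather than a drop-in integration.

from openai import OpenAI  # assumes openai>=1.0; older SDK versions expose a different interface

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Write a Python function that parses an ISO 8601 date string "
    "and returns the day of the week as text. Include a docstring."
)

response = client.chat.completions.create(
    model="gpt-4",  # any chat-capable model your account can access
    messages=[{"role": "user", "content": prompt}],
    temperature=0.2,  # a low temperature keeps generated code more deterministic
)

generated_code = response.choices[0].message.content
print(generated_code)  # review before running: AI-generated code is a draft, not a guarantee

However the snippet is produced, the output still needs the same review, testing, and security checks as code written by hand.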

A screenshot of GitHub's CoPilot AI assistance, showing a code editor with a suggestion for a code snippet.
Introducing CoPilot, GitHub’s AI-powered code assistant! CoPilot helps developers write better code faster by suggesting relevant code snippets based on the context of their code.

One notable example is GitHub Copilot, an AI-powered code assistant developed by GitHub and OpenAI. It integrates with popular integrated development environments (IDEs) like Visual Studio Code, Neovim, and JetBrains, offering auto-completion of code in languages such as Python, JavaScript, TypeScript, Ruby, and Go.

By leveraging the capabilities of OpenAI Codex, GitHub Copilot makes it easier for developers to navigate unfamiliar coding frameworks and languages while reducing the time spent reading documentation. Furthermore, a research study conducted by the GitHub Next team revealed that GitHub Copilot significantly impacts developers’ productivity and happiness. Surveying over 2,000 developers, the study found that 60-75% of users feel more fulfilled, less frustrated, and able to focus on more satisfying work.

Image generation

A campaign image for our podcast featuring two muscular men in a gym joking about the podcast's name. The image was generated using Midjourney AI.
Thanks to Midjourney AI, we were able to create this hilarious campaign image featuring two muscle-bound guys promoting our podcast, Built Right.

AI-generated art is transforming the creative and design industry by enabling artists and designers to create unique visuals using image generators. From photorealistic images generated using GANs to medical images for research and diagnostic purposes, generative AI is revolutionizing the world of visual content.

According to Everypixel, “More than 15 billion images were created using text-to-image algorithms since last year. To put this in perspective, it took photographers 150 years, from the first photograph taken in 1826 until 1975, to reach the 15 billion mark.” This staggering statistic underscores the transformative power and rapid evolution of AI in the realm of image generation.

At HatchWorks, we’re all about diving into the exciting world of Generative AI, and we wanted our blog to really capture that energy. So our fantastic marketing designer, Luis Leiva, opted for generative design to whip up a unique banner image for our blog post.

We fed the Midjourney AI model this prompt: “A Brave New World of Deep Learning, Reinforcement Learning, and Algorithmic Innovation, vector, illustration, happy, vibrant, teal, orange.”

Generative AI isn’t just about number-crunching and problem-solving; it’s also about unleashing creative flair. We hope to inspire you to ponder the broader applications of generative AI and explore the endless possibilities it offers in both practical and artistic realms.

Some more groundbreaking applications of image generation include:

Personalized marketing

Generative AI can create tailored visuals for marketing campaigns. Platforms such as Jasper enable teams to generate personalized and brand-specific content at a much faster pace, leading to a tenfold increase in productivity. By leveraging AI-powered tools, businesses can craft captivating social media posts, advertisements, and marketing copy, considerably boosting the efficacy of their marketing strategies while maintaining a more targeted approach.

Icon and Logo Design

Having unique and tailored branding elements, such as icons and logos, is essential for products to stand out. AI-generated icons and logos offer an innovative solution to this challenge.

Transforming the world of icon and logo design, numerous new tools utilize AI-driven innovation to elevate the creative process. Magician for Figma uses AI to generate unique icons from text inputs, streamlining the icon creation process. Adobe Firefly focuses on providing creators with an infinite range of generative AI models for content creation.

By utilizing these cutting-edge tools, designers can effortlessly generate custom vectors, brushes, textures, and branding elements, leading to more distinctive and memorable designs.

Data Visualization and Analysis

AI-generated charts, graphs, and other visual representations of complex data sets enable companies to present information in a clear, engaging, and insightful manner, enhancing their product’s user experience.

Tools like Ask Viable could play a crucial role in this process, offering AI-powered analysis that turns unstructured qualitative data and feedback into actionable insights, allowing businesses to make data-driven decisions and optimize their performance.

User Interface Design

AI-generated interface mockups and dynamic design elements are revolutionizing the way companies create intuitive and visually appealing user experiences for their applications.

Tools like Genius are at the cutting edge of this transformation, offering an AI design companion in Figma that understands what you’re designing and makes suggestions using components from your design system. These AI-driven solutions allow designers to explore a multitude of ideas, iterate more efficiently, and ultimately deliver more engaging user interfaces.

Tips for integrating generative AI into your projects

To make the most of generative AI in your projects, it’s crucial to understand the best practices for selecting, training, and implementing AI algorithms. Here are some valuable tips to help you navigate the integration process and maximize the benefits of generative AI.

Selecting the Right Algorithm

  • Identify your project goals: Clearly outline the objectives of your project and the desired outcomes before choosing a generative AI algorithm. This will help you determine which algorithm best aligns with your goals.
  • Consider your data: Assess the type and amount of data you have available. Certain algorithms may require large datasets, while others can work effectively with smaller amounts of data.
  • Evaluate algorithm performance: Research the performance of various generative AI algorithms and compare their success in generating high-quality, relevant content. Select the one that best meets your quality and creativity requirements.

Incorporating generative AI into your workflows

  • Prepare your data: Ensure that your data is clean, well-structured, and diverse to provide a solid foundation for training your generative AI model.
  • Seamless integration: Design your workflows to accommodate generative AI output, making it easy to incorporate generated content into your projects.
  • Human-AI collaboration: Emphasize the importance of human-AI collaboration, using AI as a tool to enhance creativity and productivity rather than replace human input.
  • Iterate and refine: Continuously test and refine your generative AI implementations, gathering feedback from users and stakeholders to improve the overall quality and effectiveness of AI-generated content.
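
For the "prepare your data" step above, a minimal sketch of the kind of cleanup pass we mean, written in plain Python with made-up thresholds, might look like this: trim whitespace, drop fragments too short to be useful, and deduplicate while preserving order.

def prepare_training_texts(raw_texts: list[str], min_words: int = 5) -> list[str]:
    """Toy cleanup pass: trim, drop near-empty samples, and deduplicate while keeping order."""
    seen = set()
    cleaned = []
    for text in raw_texts:
        text = " ".join(text.split())      # collapse stray whitespace and newlines
        if len(text.split()) < min_words:  # drop fragments unlikely to help training
            continue
        if text.lower() in seen:           # skip exact duplicates (case-insensitive)
            continue
        seen.add(text.lower())
        cleaned.append(text)
    return cleaned

samples = [
    "  Generative AI can draft product copy quickly.  ",
    "Generative AI can draft product copy quickly.",
    "ok",
]
print(prepare_training_texts(samples))  # -> one cleaned copy of the first sentence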

Assessing AI output quality and effectiveness

  • Establish quality metrics: Define clear metrics to measure the quality and effectiveness of your generative AI output. This can include factors such as coherence, relevance, and creativity.
  • Regular evaluation: Periodically evaluate the performance of your generative AI models against your established quality metrics and make improvements as needed.
  • Seek user feedback: Gather feedback from end-users and other stakeholders to understand how well your generative AI output meets their needs and expectations. Use this feedback to refine your AI models and workflows further.
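
As one hedged example of what "establish quality metrics" can look like in practice, the sketch below scores a piece of generated text against a few simple, automatable checks: length, keyword relevance, and repetition. The thresholds and field names are illustrative placeholders, not an established standard, and they complement rather than replace human review.

import re

def score_generated_text(text: str, required_terms: list[str]) -> dict:
    """Crude, illustrative quality checks for a piece of AI-generated copy."""
    words = re.findall(r"\w+", text.lower())
    unique_ratio = len(set(words)) / max(len(words), 1)  # a low ratio hints at repetition
    coverage = sum(term.lower() in text.lower() for term in required_terms) / max(len(required_terms), 1)
    return {
        "length_ok": 50 <= len(words) <= 400,             # arbitrary bounds for a short marketing blurb
        "relevance": round(coverage, 2),                   # share of required terms mentioned
        "repetition_penalty": round(1 - unique_ratio, 2),  # closer to 0 is better
    }

draft = "Our nearshore team ships software faster. Nearshore teams collaborate in your time zone."
print(score_generated_text(draft, ["nearshore", "software", "time zone"]))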

By following these tips, you can successfully integrate generative AI into your projects and make the most of this powerful technology.

📌 For an in-depth exploration of how generative AI is revolutionizing various sectors, read our comprehensive report on Generative AI Use Cases Across Industries.

To see how HatchWorks is leading the way in AI-powered software development – visit our Generative-Driven Development™ page now.

Navigating the ethical and security challenges of generative AI

Generative AI, like any powerful technology, brings a set of ethical and security challenges that must be addressed proactively to ensure responsible deployment. Here, we’ll provide guidance on how to navigate these challenges effectively and maximize the positive impact of generative AI.

First, address the potential misuse of generative AI by developing and enforcing strict guidelines for its ethical use within your organization. Encourage a culture of accountability and monitor generative AI usage in your projects to prevent misuse.

Secondly, mitigate the risks of biased or uncontrolled AI-generated content by training AI models on diverse and representative datasets. Be aware that earlier models like GPT-3 have demonstrated biases related to gender, race, and religion, which can influence the output. Implement mechanisms to detect and mitigate harmful or offensive content and educate your team and end-users about potential biases and limitations, promoting responsible usage and critical evaluation.
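
One way to act on "detect and mitigate harmful or offensive content" is to run generated text through a moderation check before it is published. The sketch below uses the hosted moderation endpoint from the openai package as one example; a keyword blocklist, a dedicated classifier, or a human review queue are equally valid mechanisms, and the exact SDK call shown is an assumption about your setup rather than a requirement.

from openai import OpenAI  # assumes openai>=1.0 and an OPENAI_API_KEY in the environment

client = OpenAI()

def is_safe_to_publish(generated_text: str) -> bool:
    """Return False if the moderation model flags the text for any policy category."""
    result = client.moderations.create(input=generated_text)
    return not result.results[0].flagged

candidate = "Draft social post produced by our content model."
if is_safe_to_publish(candidate):
    print("Queue for human review and publishing.")
else:
    print("Blocked: route to a reviewer instead of publishing automatically.")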

Protection against the malicious use of generative AI is essential. Implement robust security measures, monitor AI-generated content for signs of malicious activity, and collaborate with industry partners and stakeholders to develop and promote best practices for mitigating malicious use.

In addition to security measures, prioritize transparency in your generative AI deployments. Openly communicate the use of AI-generated content and the methodologies behind it. Stay informed about the latest ethical and security developments in the generative AI field and adapt your strategies and practices accordingly. Foster a strong culture of responsibility and ethical awareness within your organization.

Lastly, invest in education and training. Provide your team members with education on generative AI technology, its potential risks, and ethical considerations, fostering a culture of informed responsibility. Encourage continuous learning to stay updated on the latest advances in generative AI and its ethical and security implications. Contribute to public awareness and understanding of generative AI, promoting informed decision-making and responsible use.

It’s predicted that AI could impact 300 million full-time jobs worldwide, so it is crucial to emphasize responsible and ethical use. By proactively addressing these challenges, you can ensure the responsible and beneficial use of generative AI in your projects, leading to a more innovative, efficient, and ethical digital product development process.

Frequently Asked Questions about generative AI

What is generative AI, and how does it differ from discriminative AI?

Generative AI is a form of artificial intelligence that uses algorithms to create new data, content, or predictions based on existing data. Unlike discriminative AI, which focuses on classifying and predicting outcomes, generative AI generates new instances, such as images, text, or music, based on learned patterns and structures.

How does generative AI relate to machine learning?

Generative AI is a subfield of machine learning, which is an overarching discipline that deals with teaching computers to learn and make decisions based on data. Generative AI specifically focuses on the creation of new content by learning from existing data.

What is a Generative Adversarial Network (GAN)?

A Generative Adversarial Network (GAN) is a type of generative AI model that consists of two neural networks, a generator and a discriminator, that work together in a competitive manner. The generator creates new content, while the discriminator evaluates the content’s quality and authenticity.

How can generative AI support product design?

Generative AI can explore a vast range of design possibilities, optimize solutions, and help designers create innovative, functional, and aesthetically appealing products.

Discover how our Generative-Driven Development services can transform your business by visiting https://hatchworks.com/generative-driven-development/.

How can businesses benefit from generative AI?

Businesses can use generative AI to automate content generation, optimize decision-making, and create personalized experiences for customers, ultimately improving efficiency and reducing costs.

What are the limitations of generative AI?

Some limitations of generative AI include the need for large amounts of training data, high computational resources, potential bias in generated content, and difficulty in controlling the generated output. Additionally, generative AI models may struggle to understand and generate content that falls outside the scope of their training data.

Can generative AI replace human creativity?

No. While generative AI can produce impressive results, it is not a replacement for human creativity. AI-generated content is based on patterns learned from existing data, meaning it cannot replicate the full range of human emotions, experiences, or intuition that drive creativity.

Summary

Generative AI has immense potential to revolutionize how we create, design, and innovate in the digital realm. By harnessing the power of AI tools and technologies, we can unlock new creative possibilities and enhance the quality and efficiency of our projects.

Balancing ethical concerns with responsible use, we can ensure that generative AI continues to have a positive impact on the industry and contributes to a more vibrant and creative digital landscape while mitigating its potential negative effect on the job market.

Hatchworks: Your US-Based Nearshore Software Development Partner

At HatchWorks, we understand the importance of leveraging generative AI responsibly and ethically.

As a software development partner, we utilize the power of generative AI to build innovative digital products that meet the unique needs and expectations of our clients, tailored to each industry.

Reach out to us to learn more about how we can help you harness the potential of generative AI for your projects.

The post 2024’s Comprehensive Guide to Generative AI: Techniques, Tools & Trends appeared first on HatchWorks.

]]>
The Best of Built Right: A Season 1 Lookback https://hatchworks.com/built-right/the-best-of-built-right-season-1/ Mon, 04 Dec 2023 16:25:44 +0000 https://hatchworks.com/?p=30435 As the year draws to a close, so does season one of the Built Right podcast. In this podcast, we’ve covered a lot of ground – from the rise of generative AI to the importance of good user experience design. We wanted to round off season one with a special episode that celebrates all the […]

The post The Best of Built Right: A Season 1 Lookback appeared first on HatchWorks.

]]>

As the year draws to a close, so does season one of the Built Right podcast. In this podcast, we’ve covered a lot of ground – from the rise of generative AI to the importance of good user experience design.

We wanted to round off season one with a special episode that celebrates all the brilliant insights, breakthrough ideas, and shared wisdom from our guests. In this episode, we look back at our top ten moments from the podcast. While it certainly wasn’t easy to pick just ten, these are some of our standout insights from our guests.

We’ll be back next year with a brand new season, so stay tuned for updates. In the meantime, keep reading to see which moments were our favorite or listen to the episode in full below.

10. The creative element of generative AI

We had a great conversation with Jason Schlachter, Founder of AI Empowerment Group and Host of the We Wonder podcast, in episode 8 about the creative element of AI. Creativity has always been hard to define, and with the abilities of generative AI, it leads to questions like, “what is art?”

Jason explores how generative AI can be used in different ways in the product development world and how to vet winning use cases.

Check out episode 8 with Jason: Generative AI Playbook: How to Identify and Vet Winning Use Cases

9. Why you need a new approach to modernization

For our fourth episode of the podcast, we sat down with HatchWorks’ own Joseph Misemer, Director of Solutions Consulting, to discuss why the MVP approach doesn’t always work. When modernizing a solution rather than creating a new one, there’s no need to start with the MVP approach.

In this episode clip, Joseph gets into why starting from scratch to modernize a solution might upset users who already love your product.

Watch episode 4: The MVP Trap: Why You Need a New Approach to Modernization with Joseph Misemer

8. Evaluating the value of generative AI

With so many new AI tools on the market, it’s important to be picky when choosing what to use. So, remember to ask yourself, does this provide true value?

In episode 10 with HatchWorks’ Andy Silvestri, he and host Matt Paige discuss different ways generative AI could change how we think about UX and UI design. For this clip, Matt likens the AI wave to the dot-com boom, where the concept of value was sometimes ignored in favor of following the trend.

Listen to episode 10 in full: 5 Ways Generative AI Will Change the Way You Think About UX and UI Design

7. Carrying the weight of product development (and sparing your customers)

For our seventh pick, we revisited episode 9 with Arda Bulut, CTO and Co-Founder of HockeyStack. Arda shares his thoughts on how to build and scale a product while keeping the customer experience front of mind.

In this clip, he explains why it’s often either the developers or the users who shoulder the difficulties when building and using a product. But in Arda’s case, he always prioritizes the user’s experience, even if it makes his work harder. And we think that’s a great way to think about product development!

Listen to episode 9: Listen, Prioritize, and Scale to Build a Winning Product

6. Testing quality products

If you’ve ever heard the phrase “shift left” when it comes to testing digital products, you may find episode 7 an interesting listen. Erika Chestnut, Head of Quality at Realtor.com, explored what it takes to build a high-quality product, and why testing is such a crucial point in development.

In the clip we picked, she explains that when a product is deemed low quality, that usually reflects poor-quality testing. But she believes phrases like “shift left” are often buzzwords, when instead we really need to dig into what that shift actually means for testing.

Learn more about testing for quality in episode 7: Quality-Driven Product Development with Realtor.com’s Erika Chestnut.

5. Generative AI and its impact on UX design

Andy Silvestri, Director of Product Design at HatchWorks, explored how generative AI is impacting the world of UX design in episode 10.

We picked a clip with Andy explaining how generative AI is influencing design practices and why it could open more doors for developers to explore new concepts in the early ideation stages.

Check out episode 10: 5 Ways Generative AI Will Change the Way You Think About UX and UI Design

4. Software is a must, not a nice-to-have

In our very first Built Right episode, we welcomed HatchWorks’ own CEO to explore what a “built right mindset” is and how it should influence every stage of development. Brandon Powell breaks down the top three questions everyone should ask before building a product: is it valuable? Is it viable? And is it feasible?

In the clip we selected, Brandon explains how software has already shaped every industry and why digital tools aren’t just a nice-to-have. They’re a must in today’s world.

Look back at episode 1: The Built Right Mindset

3. Developers want to take ownership of the product they’re working on

A good leader needs to be able to manage change effectively and solve the adaptive challenges that come with it. To talk more about that, Ebenezer Ikonne, AVP Product & Engineering at Cox Automotive, joined the podcast for episode 14 to break down six adaptive leader behaviors to adopt and why.

For the clip we picked, Ebenezer says that we need to “give the work back to the people.” Leaders need to let those who are working on the product have greater ownership, and sometimes that means stepping back.

Catch episode 14: The 6 Adaptive Leader Behaviors with Ebenezer Ikonne

2. The human brain vs. AI: Which is more efficient?

With everyone sharing their thoughts on generative AI, we wanted to dive more into the science behind it in episode 17. We invited Nikolaos Vasiloglou, Vice President of Research ML at RelationalAI, to give us the PhD data scientist perspective.

Nikolaos explained why he disagrees with comparisons between the human brain and AI systems – and why the human brain is ultimately more efficient and effective in many ways.

Learn more from Nikolaos in episode 17: How Generative AI Works, as Told by a PhD Data Scientist

1. Could AI help us become “more human”?

For our top pick, we look back at episode 15 with Brennan McEachran, CEO and Co-Founder of Hypercontext. In this episode, Brennan spoke about the AI-EQ connection and how emotionally intelligent AI could help teams boost performance and create faster, more streamlined processes.

In our top clip, he explains why, despite the fears of AI, it could help us refocus on more human-centered tasks.

You can listen to episode 15 here: The AI-EQ Connection: How Emotionally Intelligent AI is Reshaping Management

After so many fantastic episodes, it was tough to pick just ten clips! If any of the above piqued your interest, you can revisit any of the episodes from season one on our website.

For our listeners, we want to share a big thanks from the HatchWorks team. We’ll be back after a short winter break for season two, with more great episodes and guests to talk about building products the right way.

Explore the future of software creation with HatchWorks’ Generative-Driven Development™.

Leveraging advanced AI technologies, we’re setting new standards in the industry.

See how our approach can revolutionize your development process.

The post The Best of Built Right: A Season 1 Lookback appeared first on HatchWorks.

]]>
Listen, Prioritize, and Scale to Build a Winning Product https://hatchworks.com/built-right/build-a-winning-product/ Tue, 25 Jul 2023 12:00:36 +0000 https://hatchworks.com/?p=29676 From the idea stage to product-market fit, building a winning product is no easy feat. That’s why we asked Arda Bulut, Co-Founder and Chief Technology Officer at HockeyStack, to share his tips, tricks and insights. Arda explains how responding to customer feedback, prioritizing the right things and keeping customer ease of use front of mind […]

The post Listen, Prioritize, and Scale to Build a Winning Product appeared first on HatchWorks.

]]>

From the idea stage to product-market fit, building a winning product is no easy feat. That’s why we asked Arda Bulut, Co-Founder and Chief Technology Officer at HockeyStack, to share his tips, tricks and insights.  

Arda explains how responding to customer feedback, prioritizing the right things and keeping customer ease of use front of mind allowed him to build a successful SaaS analytics and attribution platform.  

Plus, he highlights how he dealt with early setbacks, details their journey to product-market fit and tells us the piece of advice he’d give his former self. 

Listen to the full podcast below or read on for the top takeaways. 

HockeyStack’s initial vision 

HockeyStack’s journey began during the pandemic. Arda and his colleagues wanted to build “a product analytics tool that focused on ease of use,” providing easy-to-understand analytics with the aid of generative AI. 

Their focus on ease of use has remained to this day, but feedback from customers and figures in the SaaS community led HockeyStack to change direction and optimize their product. 

Listening to the customer 

Arda explains that initial customer feedback was strong, but the comments weren’t backed by the sales and growth required to meet their product-market fit. 

He explains that “blind faith” inspired him and his colleagues to continue their journey. They weren’t sure they were tackling the right problems and meeting the right audiences, but they were passionate about building something. 

He drew up a pros and cons list to find out what was working and what wasn’t, quickly realizing that value was what customers cared about most. 

The journey to product-market fit 

This led to the second iteration of the company which involved showing customers the journeys of their users and providing them with simple-to-use dashboards. 

This began gaining traction, but HockeyStack still felt work was needed and optimized their product once again. Upon speaking to SaaS and eCommerce leaders, Arda realized they were often only interested in specific components of their service, namely attributing revenue back to blog posts.  

The feedback provided extra clarity on their target audience and the pain points they should target, setting them on the path to achieving their product-market fit. 

Iterating, building and delivering new features fast 

HockeyStack are able to strategize, build and provide new features and functionalities regularly. But what’s their secret sauce? 

Arda says, rather than aiming to produce a complex feature, they prioritize their customers’ needs, using simple tech stacks to develop their product. This gives them something to show off to customers before they develop it into the finished version.  

Why less is more 

When your product is succeeding, it’s understandable to want to produce brand-new features all the time. But sometimes less is more. 

Arda says there are plenty of benefits to investing less in new features up front. If they don’t work or don’t appeal to your customers, the more you have built, the more time and effort it takes to remove them. 

Simplicity is key when creating a product that won’t use up your customers’ brain calories! As long as you are providing a service that addresses their pain points, they will be happy. 

Identifying priorities 

Even when simplicity is your watchword, you still need to introduce new features. But how do you prioritize what to add? 

Arda says to ask yourself: 

  • Does the customer need this? 
  • Have they asked for it? 
  • How many people/groups have expressed interest in it? 

Prioritization is crucial in the way Arda works. He advises you should optimize the 8-10 hours of work you do in a day. Getting your priorities straight makes your work better and faster. 

You’ll leave your customers asking: “How are they doing this so fast?”  
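
As a purely illustrative aside, and not HockeyStack’s actual system, that request-count approach can be as simple as tallying who has asked for each feature and sorting the backlog by that tally. The feature names and customers below are made up.

from collections import Counter

# Hypothetical feature requests gathered from customer conversations.
requests = [
    ("HubSpot integration", "Customer A"),
    ("HubSpot integration", "Customer B"),
    ("Dark mode", "Customer C"),
    ("HubSpot integration", "Customer D"),
]

votes = Counter(feature for feature, _ in requests)
backlog = sorted(votes.items(), key=lambda item: item[1], reverse=True)

for feature, count in backlog:
    print(f"{feature}: requested by {count} customer(s)")
# The most-requested items move into this week's cycle; single requests stay lower on the roadmap.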

Developers should do the heavy lifting 

When building a scalable product, hard work is always involved! But Arda says it’s important most of that hard work lies with the developers, not the customers. 

HockeyStack makes sure ease of use is at the forefront of everything they do. They make it simple to set everything up, allowing customers to avoid configurations and excessive form-filling. 

For more expert advice from Arda on prioritizing, listening and scaling to reach your product-market fit, tune into the full episode. Subscribe to Built Right for more interesting conversations on how you can build your products in the right way! 

[00:00:00] Matt Paige: Today we’re chatting with Arda Bulut, co-founder and CTO of HockeyStack. HockeyStack is a SaaS analytics and attribution platform that unifies website, CRM, and ad data so that marketing and growth teams can actually measure marketing’s ROI, know where to invest more, and see account-based intent signals. And y’all have been experiencing substantial growth as of late, attracting notable customers like Airme, Lavender, and Cosm, to name a few, and I’m pumped to get into this story.

[00:00:38] Of HockeyStack today. So it’s a story of multiple pivots on their journey to product-market fit, the holy grail of product-market fit. So many great learnings for product and engineering leaders in this episode, including some insights towards the end you’re not gonna wanna miss, with Arda and what he’s learned on his journey of building HockeyStack.

[00:00:59] But welcome to the show, Arda. 

[00:01:02] Arda Bulut: How are you? I’m excited Doing good 

[00:01:04] Matt Paige: here as well. Yeah. Excited to get into it. Hockey stack’s doing some awesome stuff right now. And hockey stack, I’ve been following y’all as of late and it’s such a great example of a product that’s built right and the way we think about that, you gotta.

[00:01:20] You gotta build the right thing, right? That’s valuable for your end user, viable for the business, feasible from a technological perspective. And then you gotta build it the right way, which is a lot into your wheelhouse on the CTO side, in terms of being maintainable, scalable, secure, and usable. And the problem that you’re solving is a big one, especially now in, recession.

[00:01:40] Hyper attention on budget. I know, I’m feeling that. And you’re going after a problem in the market. But to start though, I want you to take us back to the beginning. When you started, it was the height of the pandemic. You had an initial vision of what you wanted to build, which is actually different than where HockeyStack is today.

[00:02:00] But take us through that first part of your journey. 

[00:02:03] Arda Bulut: You had a great description there, but you know what I say, it hasn’t always been like this the first year, especially like it wasn’t easy as you said there, there has been a, and the first product, even though it was like always an tics product, wasn’t anything like this.

[00:02:19] When we started it was like at the height of the pandemic and like we were trying to do other projects and one of the key things that we noticed there, we couldn’t really like measure product usage and like we tried Mixpanel, Amplitude, those kind of classic product analytics tools and maybe it was our fault, but we couldn’t really get them to work.

[00:02:40] We couldn’t really set them up easily. So the initial, like the very, very first idea that we had was actually building a product analytics tool that focused on ease of use, that focused on actually like giving insights automatically so that you won’t have to look at anything yourself. We wanted to use artificial intelligence, which was like, it’s weird.

[00:03:01] It was always like at the height of AI as well. Then it’s also a trend right now as 

[00:03:06] Matt Paige: well. That’s the weird journey, a new height that we’re going into with generative AI, right? Yeah. But it’s interesting. Going back to that point, you mentioned you built an analytics tool with the focus on ease of use.

[00:03:18] Yeah. I think this gets into part of the learning it like nowhere in that statement did I hear like the target customer or the problem you were going after? Maybe go deeper there on that initial kind of thing you were building and where you hit some roadblocks. I. 

[00:03:32] Arda Bulut: Yeah, I guess like you also had a great primer from the beginning.

[00:03:36] One of the key things that we want to do was build a product that was easy, that was like, from the setup perspective, from the usability, it had to be like intuitive for whoever we were selling to. The AI was just a way to for us to just say that there’s gonna be some magic there that’s gonna give you the numbers easily so that you won’t even have to like, analyze the data yourself.

[00:03:57] But like the. The actual first product that I mentioned now, we tried working on it for about five to six months. We were talking with people like the usual talk with your customers, talk with people, potential buyers, et cetera. We thought we were talking with them. We were getting all these like great feedback or that’s a cool product, that’s a cool idea, you should do that or something.

[00:04:21] But as we’re. One thing we noticed was no one really wanted to put the script on their website to actually track the data. No one wanted to share their like current data stack with us. So even though they were saying like cool product, et cetera, it didn’t really mean much. Then you had to talk business with them.

[00:04:38] No one gave any money to this product. Yeah, 

[00:04:41] Matt Paige: that, that’s a key piece too, right? Is that this concept of, you can get customer feedback and they may say how awesome it. But when push comes to shove, when it comes, like you mentioned, putting the, with your tool, it’s putting a script on their website, we’re actually paying for the solution.

[00:04:58] If you’re not getting those positive signals it may not be actually good enough to replace status quo of how they do it today. Yeah, 

[00:05:05] Arda Bulut: exactly. The actual validation comes when people use the product. Not when they say they can use it or that it’s so interesting or something. That was the first key learning.

[00:05:17] We tried to get that to work. As I said, we were like five to six months or something, but at the end, like we realized it, it wasn’t going anywhere. Plus, us three, like we didn’t have any LinkedIn presence or something done, so it was like three unknown people coming from Turkey. How are you gonna trust that, basically?

[00:05:35] So after that, like we realized we had to change something about the product, like we. At the same time 

[00:05:42] Matt Paige: let me pause there actually. So you’re in Turkey. It’s you and your other two founders. And you’re at this inflection point, right? And so many folks, when they get to this point, they scrap it and go find a day job.

[00:05:53] You know what, where you’re what, maybe early 2021 at this point, and you’re at this inflection point of, do we keep going? Yeah. Yeah. And what was the. What was the trigger for y’all to keep going? Was there was it somebody in the founding team that’s like, all right, we’re gonna keep doing this.

[00:06:11] Did you have an insight that kind of led you to go down another angle? What? What pushed you to keep building? 

[00:06:18] Arda Bulut: Yeah. I think it was just blind faith, yeah, 

[00:06:23] Matt Paige: sometimes you need that, right? Like 

[00:06:25] Arda Bulut: sometimes, yeah. We weren’t sure if it was gonna work. We weren’t sure, if we were actually tackling the right problem, the right audience, whatever.

[00:06:32] But we just wanted to build something and we like working together. So it was just like a matter of, okay, what are we gonna do? What are we gonna build, actually? So it never even crossed our minds at that stage, especially, to find another job. It was more about what are we gonna do? I remember we had some Notion docs where we were doing like pros and cons lists of each idea that we have, like what’s working here, what doesn’t work there.

[00:06:58] And we had some very terrible arguments around that time, on everyone wants to go in some different direction. But during that stage, one of the ideas that we had was a web analytics tool. Like instead of focusing on product analytics and saying that we use AI or something, we realized that no one really cared about like the technology that you’re using, as long as we are providing some value to them.

[00:07:21] So around that time we tried like focusing on a web analytics tool that’s like a competitor to Google Analytics. You can think of this as the second version of the product. Okay. The idea there was basically like tracking the same way that we were tracking like the product analytics part, but for web analytics, and actually like showing people the journeys of all the visitors that they had, giving them like easier to understand dashboards rather than going to like Google Analytics and like going through all their like complex data visualization methods.

[00:07:54] That was the second idea. Around that time, there were a lot of simple web analytics tools, like privacy friendly tools, that were coming out as well. So like we rode that wave along with them at that point. Yeah. And one of the key things that we did around that time was actually applying to a website called AppSumo.

[00:08:14] It’s like a lifetime deal platform. Have you heard. Yeah, I’ve heard of it. Yep. Yeah. Yeah. We applied there and it was, I think er actually applied there, but he didn’t really think much of it. He just filled out an application and forget about it. And he just left the product there to chill on its own for a while.

[00:08:32] And then after a month or so, as we were still like deciding on what we were gonna do next, we realized that there were like a little traction there. That may be like a means something. Oh wow. Yeah. Like we came back. So 

[00:08:45] Matt Paige: You didn’t even realize it was getting traction. It just, it was something you had did.

[00:08:50] That’s what I love about so many journeys and stories. It’s these random serendipitous moments that happened. So you started to get traction which, gave you another kind of nugget of insight of, okay, there may be something here. 

[00:09:05] Arda Bulut: Yeah. Yeah, exactly. I think around that time, like it made about one K, or $1,000.

[00:09:12] That taught us, again, the same thing there. We saw that and, like we said, okay, maybe there’s something here. So like we decided to invest more in that channel. There are like some Facebook groups or other like communities that they have for the buyers there. So basically we just tried to be more active, talk with the customers around there.

[00:09:31] And as people trusted us more, like we tried to actually gain some traction from the product, which is like an easy to use website analytics tool. This is before like any attribution, before B2B SaaS, any of the current things that we are working 

[00:09:44] Matt Paige: on right now. So who is your target customer at this point? Yeah.

[00:09:48] Or did you really have a target you were going after? 

[00:09:51] Arda Bulut: At that point, we didn’t even choose a target audience. It was whatever customer basically showed us, mostly agencies and e-commerce people though their audiences Usually those people, like the Facebook groups are full of them.

[00:10:04] But yeah, like we started gaining some traction there. Like people really like the product. I think one of the key things there was playing the underdog against a big tool like Google Analytics, because like when you become that big, there are gonna be a lot of people that don’t like it. There are gonna be a lot of people that like really hate it.

[00:10:24] That’s also like one of the things all of those simple analytics tools used. And we tried to use it as, like, the alternative analytics, like the analytics that you’ll actually want to use. That was the messaging around that time. People got behind that, like they were sick of Google Analytics.

[00:10:41] We also tried to fight with session recording and heat map tools a little bit as well. That was like a time where we were trying to position us based on like other tools. That kind of works for a while, like if you’re going for that kind of a why, but I think in the long run it wasn’t gonna really work out because.

[00:11:01] At some point you have to change your messaging so that your product is at the focus of it instead of some other product. Yeah. If you have, that’s interesting. Yeah. I think if you have like another tool in your header in your website Yeah. Like in your main page. Yeah. I think that’s gonna be a problem later on that you should probably think about.

[00:11:20] Yeah. But it worked for a while. The money we made from AppSumo was probably like the pre-seed round that we did there, like just from the buyers, just from the deals that they bought there. That really helped us keep going for at least like another year or so off 

[00:11:35] Matt Paige: that money. So at this point you’re you’re at this next inflection point, you’re starting to get some positive signals.

[00:11:40] You actually have got some kind of revenue coming in. You mentioned like these agencies are interested, but you still haven’t gotten to, click, this is it, we’ve got product-market fit. What’s that next inflection point that got you? I think this is where you actually start to get to what HockeyStack is today.

[00:11:56] Yeah. What was that next inflection point? 

[00:11:59] Arda Bulut: Yeah. Basically like we got the money from AppSumo. We were doing good, but the problem this time was like the customers and the features that they were requesting weren’t really aligning with the vision that we had for the website analytics tool. They were asking for like white labeling features.

[00:12:18] They want to basically show use our product, show it as their own to their own customers, especially like the agencies. And we didn’t wanna go down that road, you, your product then wouldn’t have any, Brand or something we, yeah, as a SaaS. Our SaaS, like we felt closer to other SaaS businesses, but we didn’t really have a way to validate the idea to actually focus on that.

[00:12:40] So around that time, like the second pivot that we were about to make was more about an audience problem. Rather than like the actual product, because we were like happy with the product. It was usable, like people were getting value out of it. So we didn’t really think about the product aspect that much around the pivot.

[00:12:57] So what we did there to actually decide on what we were gonna do next, how we were gonna execute that, is talk again with a lot of people. But it isn’t just like talking about some abstract concept or like a problem that they might be having that they just state to you, like, in a call. We actually had something to show to them, like the actual product.

[00:13:17] And we could ask them like, this is the product, this is how can use it. Would you use this? Like, how does, how do you think this works? Fits in your workflow? That was like the big question that we were asking around that time. Unlike. We tried that with e-commerce people. We tried that with agencies and we also tried that with SaaS people.

[00:13:36] And what we realized was that SaaS people were generally a lot more responsive to our messages. Yeah. They were like, they really wanted to like help us out as well. And they were also interested in the product. But the key thing is most of the people that we talked to weren’t interested in 90% of the product.

[00:13:56] Wow. I remember one person just said I’m not gonna use this. I’m not gonna use that feature. I’m not gonna use this page. All I want is this specific thing. And that specific thing that they wanted to see was actually attributing revenue back to blog posts. It was like the insight that we had. Yeah.

[00:14:15] I love 

[00:14:15] Matt Paige: that too. Cuz some people may hear that, that, oh, I’m not gonna use 90% of your product. And they walk away with their tail between their legs. But what that customer just gave you is like the biggest insight of all. Yeah. Like here’s the gold, this 10% right here is what I care about. And not only what I care about, it’s what I would pay for.

[00:14:35] Yeah. And shout out, like props to y’all for actually now focusing in on that area. So now you’ve started to understand who’s that core customer, that kind of B2B SaaS marketing person looking for attribution, and you’re getting to what is their job to be done, which is beautiful. And now you’re starting to, you’ve gotten to, what’s that core problem that needs to be solved, right?

[00:14:59] Yeah. At this point, 

[00:15:00] Arda Bulut: yeah. At that point, like we started the messaging with the same thing that they told us, like attributing revenue back to blog posts. And the fun thing is around that time we didn’t even know that much about attribution, like it was just a, like a funny word that we heard about. We didn’t even have that functionality in the product, it was just like a precursor to that.

[00:15:22] So just that weekend, like Bora just hacked away, built like that, the very, very first attribution feature onto the product, and tried showing it to B2B SaaS businesses, and like with that, with actually like being able to show that, they were a lot more like open about their problems. We could really talk about like the core problems that you mentioned that they were having, and we realized that it isn’t just about blog posts or like revenue there, it’s about actually unifying the data.

[00:15:52] That they were getting with the rest of the tech stack that the company is using. Because without that, like, I think someone else just said that like they were, every month they were praying that the blog posts that they were like publishing will have some kind of traffic, some kind of visitors, because otherwise, like they had nothing to show to, like execs, to the C-suite, whatever.

[00:16:13] They were just like praying to get that success and they had no way to measure it. They had no way to optimize it. 

[00:16:20] Matt Paige: Yeah. When the alternative is praying for success, then yeah. Yeah. You got a good if you can solve against that, then you got an opportunity space there.

[00:16:28] That’s such an awesome story. I’m curious though you’re the CTO of this product. Yeah. I’m assuming it’s built on a really modern stack, being a new solution. But I’m seeing like every week, like new features, new functionality. Being built. What do you attribute that to? Y’all’s ability to quickly iterate and build and deliver.

[00:16:48] Not only build the new features, but actually deliver and put them into production in, a safe and secure way. 

[00:16:54] Arda Bulut: Yeah, I think it’s about like the mindset, because even two years ago when he first started, deciding on tech, like the tech, actual tech stack was just about, okay, which technologies do we know?

[00:17:08] Yeah. Which are technologies that we can actually like push some code to some production server, and what’s the fastest way to build the MVP? Basically, that’s what, like, Bora and I first thought. And that’s how we like built the first version of the product, and like some parts of that code are still being used in production right now.

[00:17:26] But like from then on it was about always choosing the simplest tech that you can have for that stage of the company so that you can quickly find something to show off to people. And like even now, while we are like pushing features, it’s about making, like finding the simplest way to actually build that.

[00:17:45] And then push that and iterate over it over time to actually make it like the complex thing that it is now. I think like Basecamp had a great example about this, like while building their calendar feature, they didn’t just go out and build like this complex calendar, but instead they tried to understand the core problem that people are having.

[00:18:05] Yeah. And then build feature around that core problem instead of just saying okay, we should build a calendar or something. In our case as a we don’t just go out. Build the most complex thing and see if that works for the people. We try to like iterate over the process to make sure, like the first version works, second version, not that well, maybe we improve it at the third version.

[00:18:26] So that way like we’ll have something to show the people every week. Every week there’s something like new happening in the platform. Yeah. And right now especially, 

[00:18:36] Matt Paige: No, I was just gonna say, you’re speaking my language. Basecamp is such a good example and use case Yeah. Of how to do this. And like you said it’s quickly iterating and not being afraid to put something out there so people can react to and you can continue to iterate on, right?

[00:18:50] Arda Bulut: Yeah, exactly. Like we have some features that not a lot of people use. Sometimes we remove features from the product that we know that no one’s using. So that’s, that also happens, but you should, like from the start, try to invest less than you would normally do. So that when you have to actually remove it from the product,

[00:19:10] it won’t be such a big cost at the end there. Yeah. That’s like a big thing there. 

[00:19:15] Matt Paige: And that’s such a good nugget too. It’s the everybody’s always in the mindset of build more features, put ’em out there. But what you just mentioned was critical. It was. If a feature’s not being used, if it’s not adding value, remove it.

[00:19:29] Because at the end of the day, like you’re only creating more complexity in the solution for your user. I like to think of it as you’re forcing your users to burn more like brain calories, right? With the more stuff you have out there. I love that approach. Even early on, y’all are taking stuff out.

[00:19:45] If it’s not adding value to keep it lean and very focused on the problem it solves. 

[00:19:51] Arda Bulut: Exactly. The simplest example for that is like in the sidebar, for example. You think about how many things you have in the sidebar and like how much page that you have. Yeah. Just the other day, like we had to remove one feature, like the complete feature from the sidebar because like we knew no one’s using it right now.

[00:20:09] It isn’t like the key thing in the product right now. So you have to like sometimes do those kind of sacrifices to actually make the product. More intuitive, like it comes back to the ease of use as well. If it’s less complex, then people are like more likely to use it more. 

[00:20:24] Matt Paige: Yeah. That’s a great segue.

[00:20:25] And that’s a big piece of what we think about as built right? Is the product usable? Yeah. And it’s more than just the ui, it’s the actual user experience. And one thing I love about Hockey Stack is y’all don’t just think about it in the span. I’m a customer. I’m in the solution. You take it further than that, and I see this with the interactive demo that people can use Yeah.

[00:20:46] Online. And I know that’s an engineering effort to do that. There’s all kinds of, how quickly you can get it set up. I think y’all, you’ll talk about, you can get set up in two minutes. Talk about that and how you think about the importance of ease of use Yeah. In the product. 

[00:21:01] Arda Bulut: The thing there is, there's always gonna be some effort to actually set up and use these tools, but it depends on whether you're putting the effort on the customer side or the developer side.

[00:21:11] And as much as possible, we try to put it on our side, put the weight on our shoulders, so that for the customer everything looks automated, everything looks like a very easy setup. That means that we have to do the configuration and generalization on our side, because we integrate a lot of tools.

[00:21:28] We get a lot of data from these customers, and they have different configurations of these tools. We don't ask them to provide us all this information about their configuration; they don't have to fill out all these forms to integrate at all. For them, it's just one click.

[00:21:45] But for us, it's actually making sure in the background that everything works according to the generalized model that we have for our data. So it's about who it's gonna be hard for, either you or the customer. And I would always prefer for it to be hard for myself rather than the customer.

[00:22:01] Matt Paige: Yeah. I'm stealing that. I love that concept of putting the weight on your shoulders and not your customers'. That's such a great way to think about it, because you really have that trade-off, right? It can be on your customer's shoulders or it can be on yours. And one thing I heard you mention as a big part of the

[00:22:18] product or solution is the integrations and making those easy. Yeah. How do you go about prioritizing and determining which integrations to add to the platform? Do you have any kind of criteria you go through when you're saying, let's prioritize this integration first over that integration within the solution?

[00:22:37] Arda Bulut: Yeah, that's a good question. And it has a very simple answer. It's things that people like, it's things that our customers ask us for. Yeah. So the simple metric that we use is, okay, who is asking for this? How many people are asking for this feature?

[00:22:56] If we have just one person asking for an integration, we still put that in the roadmap, but a little lower than the other things. And if we get a lot more people saying we use this tool as well, if they mention that thing, that task gets prioritized more and more until it's in the cycle for this week, in the cycle for next week.

[00:23:17] So it's very simple, but it works. For the last couple months at least, we aren't building anything that our customers aren't asking for. Yeah, it's just logical.

[00:23:29] Matt Paige: So many good insights here. And that's the thing. I think so many people overcomplicate this. What you just said is: does the customer need it? Have we heard them ask for it? And the big piece there is you're actually continuing to talk to customers, listening to customers, which a lot of people overlook. A lot of the time people will talk to users and customers at the beginning, but don't do it throughout. Yeah. And you make it easy in that way, cuz it's not some complex formula or prioritization framework.

[00:23:56] It's, no, customers have told us they want this, they need this. So we prioritize it in that way. Exactly. Yeah. To wrap it up, I got one more question for you. CTO of HockeyStack, y'all are growing. We were just chatting before this. You're living it up in San Francisco now, mingling with all the folks there, but what's

[00:24:15] the biggest thing you've learned? If you could go back to your former self at the beginning of this, what's that one piece of advice that you would give your former self, or another young CTO or engineering leader, about building a solution that can scale and grow? 

[00:24:33] Arda Bulut: Good question.

[00:24:34] I would say the biggest thing while building a product with limited resources is the last thing we talked about, actually: prioritization. It's a very simple problem. You have about, I don't know, eight to 10 hours that you can potentially work in a day, and most of the day, even though you think that you are prioritizing the right things, if you actually

[00:25:01] drill down and actually see what you're doing each hour, usually there are gonna be things that don't really matter that much, but you still do them because you think that they matter, because you didn't critically think about that stuff. But if you can actually prioritize your day as well as prioritize the features, you can actually have a lot more impact.

[00:25:20] I'm a big believer in the 80/20 rule. Yeah. Yep. Most of the value is gonna come from that 20% of the work that you're doing. The other 80% is just manual things that we can probably automate or just not do at all. So if you can actually correct the course, if you can always try to optimize the process that way, you'll be able to move a lot faster. A lot of people think, how are they actually building all this so fast?

[00:25:47] But in fact you are just focusing on that key part, that key 20%, which is creating the delusion that you're doing everything at the same 

[00:25:56] Matt Paige: time. Yeah. Such a foundational lesson there. And it's broader than just engineering a solution; that applies to life and everything.

[00:26:06] And going back to that customer you talked to, sometimes it's that 10% of the solution they care about. Yeah. But that's awesome. But Arda, I appreciate the chat. It's been great having you on Built Right. Thanks for joining us today. Thank you. All right, so let me stop.

The post Listen, Prioritize, and Scale to Build a Winning Product appeared first on HatchWorks.

]]>
Generative AI Playbook: How to Identify and Vet Winning Use Cases https://hatchworks.com/built-right/generative-ai-playbook/ Tue, 11 Jul 2023 12:00:46 +0000 https://hatchworks.com/?p=29662 AI has been around for a while, simmering in the background on our devices and in wider society. But generative AI has become a hot topic of conversation following ChatGPT’s launch.  Naturally, you may be asking, how can I use generative AI in my business?  But a word of caution. We believe if you want […]

The post Generative AI Playbook: How to Identify and Vet Winning Use Cases appeared first on HatchWorks.

]]>

AI has been around for a while, simmering in the background on our devices and in wider society. But generative AI has become a hot topic of conversation following ChatGPT’s launch. 

Naturally, you may be asking, how can I use generative AI in my business? 

But a word of caution. We believe if you want to build something the right way, every decision and every tool you use needs to be carefully considered.  

In our first Built Right live webinar, we welcomed Jason Schlachter, Founder of AI Empowerment Group and Host of the We Wonder podcast, to share his methods for identifying and assessing generative AI use cases. 

Keep reading for some takeaway points from the episode or tune in below.  

What is generative AI? 

Generative AI is artificial intelligence that generates content such as documents, words, code, images, videos, and more. It’s the type of AI that everyone’s talking about right now.  

On the surface, it’s incredible technology, but Jason is quick to say that AI shouldn’t be regarded as the solution. It’s a tool, not a solution. Instead of trying to make AI work in your organization, you need to see if you can find any genuine use cases for it.  

 

Questions to ask yourself before using generative AI  

Jason suggests asking yourself a couple of questions to help frame your perspective. One is, if you had an unlimited number of interns, how would you deploy them to maximize business value? 

This will help you zero in on which areas of the business require the most help for low-skill tasks – which are prime candidates for automation.  

Another question Jason suggests is asking what you would do in the same scenario with an unlimited number of staff or an unlimited number of experts. What would you have them do to help?  

Jason says this last one takes things up a level because one of the things generative AI can do is empower people to do things that they’re not experts in. With this exercise, you can start to uncover which areas of the business need the most help and what type of help they need.  

 

How to assess use cases  

It may mean that you come up with several different use cases – all of which could benefit from generative AI. The next step, in this case, is to figure out a way to prioritize and assess them.  

Jason shared a real use case in our discussion about his upcoming trip to Japan. He’s visiting Japan with his family and wants to find activities that are off the beaten track. It’s a complicated vacation to plan when it consists of booking hotels, navigating public transport, buying tickets, working out travel times, and everything in between.  

He could go with a travel agent but prefers to be in control of the planning. Expedia and TripAdvisor are great, but you still have to break down an itinerary and research everything yourself.  

Instead, Jason could ask generative AI tools such as ChatGPT to build itineraries, plan trips, break down costs, and explain which options are best and why. It would be like having your “own executive team working on this.” 

The downside is that Jason would have to put a lot of trust in the AI that everything was 100% accurate. The last thing he wants is to be stranded with kids in the middle of Japan because ChatGPT got some travel times wrong. 

However, if it worked, and you could query it and change things, it could potentially up-end the travel market. It’s something that Jason believes will be dominated by generative AI in the future.  

So, once you build an idea of different use cases and prioritize which ones are most needed or important, you can move on to the next step – figuring out if they are viable. 

 

How to determine viability 

1. Assess business value 

You need to be able to assess the business value of implementing generative AI. It may be that you want to rapidly prototype something or build a customer chatbot that not only shares technical information but can also adapt to questions from customers.  

Assess how valuable the input from the AI will be – will it reduce costs or speed up processes? Will it improve and speed up customer service?  

 

2. Fluency vs. accuracy 

Another way of looking at viability is to determine whether fluency or accuracy is more important in your use case. Fluency just means the ability to generate content well. Accuracy is about generating information that's factual.

If you want AI to write a short story, it’ll probably turn out something that reads well and can help creators with structuring their content. However, if you’re looking for generative AI to contribute to a new chatbot that gives out medical advice, you need an AI model that prioritizes accuracy. 

Getting AI that can produce accurate results every time is more difficult, but one way around it is to train models with your own data. That way, you can control everything the model learns and produces as an answer. 
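
As a rough illustration of that idea, fine-tuning usually starts with assembling your own vetted question-and-answer pairs in a structured training file. The sketch below is a minimal, hypothetical example rather than HatchWorks' or Jason's method: the support_faq.csv file, its column names, and the system prompt are illustrative placeholders, and the JSONL chat format shown is one commonly used for fine-tuning chat models, so check your provider's documentation for the exact schema it expects.

```python
import csv
import json

# Minimal sketch: convert vetted Q&A pairs from your own data into JSONL
# training examples, a format commonly used when fine-tuning chat models.
# "support_faq.csv", its column names, and the system prompt are illustrative.
SYSTEM_PROMPT = (
    "Answer only from the company knowledge base. If you are unsure, say you don't know."
)

with open("support_faq.csv", newline="", encoding="utf-8") as src, \
        open("training_data.jsonl", "w", encoding="utf-8") as dst:
    for row in csv.DictReader(src):  # expects "question" and "answer" columns
        example = {
            "messages": [
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": row["question"]},
                {"role": "assistant", "content": row["answer"]},
            ]
        }
        dst.write(json.dumps(example) + "\n")
```

Because every assistant turn in that file is an answer you wrote and approved, the tuned model only ever learns from content you control, which is the point about keeping accuracy in your hands.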

 

3. Low vs. high-risk 

An important thing to always consider when using AI is the risk potential. Some use cases may be fairly low risk, for example, AI helping you write a blog post. Others can be high-risk – such as using an AI travel plan that leaves you stranded in a foreign country. 

There are ways to reduce risk, however. The example Jason uses is if T-Mobile used AI in a chatbot, you could reduce the risk of it giving a false answer by only training it to give answers it can back up with a document. This also means weaving your own data into the model and making it truly unique to your organization.  
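
Here's a minimal sketch of that document-backed approach, assuming a toy in-memory knowledge base: retrieve the best-matching passage, answer only when the match clears a threshold, and otherwise decline. The word-overlap scoring, the 0.3 cutoff, and the sample passages are illustrative stand-ins; a production system would typically use embeddings and a vector store, with the retrieved document passed to the model as context.

```python
# Minimal sketch of a document-backed answer: only respond when a passage
# from your own knowledge base supports the question, otherwise decline.
# The knowledge base contents, overlap scoring, and 0.3 threshold are
# illustrative stand-ins for a real embedding-based retriever.
KNOWLEDGE_BASE = {
    "activating an eSIM": "To activate an eSIM, scan the QR code shown in your account settings.",
    "international roaming charges": "Roaming is billed per day in supported countries.",
}

def overlap_score(question: str, title: str) -> float:
    """Crude relevance score: fraction of title words found in the question."""
    q_words = set(question.lower().split())
    t_words = set(title.lower().split())
    return len(q_words & t_words) / max(len(t_words), 1)

def grounded_answer(question: str, threshold: float = 0.3) -> str:
    title, score = max(
        ((t, overlap_score(question, t)) for t in KNOWLEDGE_BASE),
        key=lambda pair: pair[1],
    )
    if score < threshold:
        # No supporting document: decline instead of guessing.
        return "I don't have a documented answer for that; let me connect you with an agent."
    # In a real system the retrieved passage would be handed to the model as
    # context, and the reply would cite the source document it came from.
    return f"{KNOWLEDGE_BASE[title]} (source: '{title}')"

print(grounded_answer("How do I activate an eSIM on my phone?"))
```

The refusal branch is what keeps the risk low: if nothing in your own data backs the answer, the bot says so rather than guessing.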

Tuning your own AI models can be difficult, but it can help to improve accuracy and performance. It also doesn’t need to be a huge model with billions of pieces of data. It could be so small that it can run locally on a small device.  

 

4. Defensible vs. non-defensible  

Jason says that it’s important that the model you’re using and the use case you’re building is defensible from a business perspective.  

So, you would need to take into account profit, turnover, the entire cost of implementing AI and changing processes, training time, getting support, maintaining the model, and so on. It may be that AI in your particular use case isn't defensible now, but it might be in the future if your major competitors go down that route and you're forced to adapt. It may be something you want to revisit in the future to see if things have changed.

After this big-picture view, you can decide whether it’s truly defensible from a business perspective – and ultimately worth it.  

Deciding to implement generative AI in your business isn’t a decision you want to make lightly. There are so many cost and value factors, accuracy issues, risks, and then the impact on the business’s bottom line to think about.  

 

For more insights into identifying viable use cases for generative AI, tune in to the full episode with Jason.  

Embrace the future of software development with HatchWorks’ Generative-Driven Development™.

We leverage AI, automation, and established accelerators to streamline your processes and deliver your project outcomes with efficiency and flexibility.

Instantly access the power of AI and our team of AI-enabled practitioners. We are ready to support you on your project!

[00:03:02] Matt Paige: Jason, let’s kick it off. So welcome everyone to our first edition of Built Right Live. If you’re not familiar with the Built Right podcast, we focus on helping you build the right digital solution the right way. Check it out on all the major podcast platforms. We drop a new episode every other week. We got a good one to drop today too with Erica Chestnut, Head of Quality at Realtor.com.

[00:03:24] Matt Paige: So go check that one out. And like we said, please drop in comments as we go along. We’ll be checking the comments and that may tailor our conversation a bit as we go. But today we got a really good one for y’all. We got special guest, Jason Schlachter, founder of AI Empowerment Group and host of the We Wonder Podcast.

[00:03:44] Matt Paige: So he's got some podcast chops as well. But Jason, give us an introduction so folks have a little context of your background, your history, and what AI Empowerment Group exists to do. 

[00:03:57] Jason Schlachter: Awesome. Thank you, Matt, for the introduction, and I’m glad to be here. This is exciting. I see a lot of comments coming through chat which is making it also great to see the, participation already.

[00:04:07] Jason Schlachter: Yeah, so my background is in AI primarily. I spent about the last 22, 23 years in the AI industry. I went to school for a master's in AI in 2001, back when there were basically no jobs in AI. And that led me down a path where I started off as a researcher doing a lot of work for DARPA, the Defense Advanced Research Projects Agency, Army Research Lab, Naval Research Lab, NASA,

[00:04:34] Jason Schlachter: intelligence organizations, all kinds of stuff that you could imagine would use AI before mainstream businesses were going crazy for it. And at some point I left that world, moved into a strategy role, led AI strategy at Stanley Black & Decker for their digital accelerator. And then from there went over to Elevance Health, which owns Anthem Blue Cross/Blue Shield.

[00:04:56] Jason Schlachter: And there I focused on leading the R&D portfolio and strategy, mostly around AI, and then served as a product lead for their clinical AI work. And so, since leaving Elevance, at AI Empowerment Group our focus is really on solving the people part of AI. That's the way I like to sum it up really nicely, cuz what I've seen, and I think a lot of research supports this, is most efforts to deploy AI

[00:05:19] Jason Schlachter: do not return the business value that people expect them to return. About 90% of AI initiatives fail to deliver on the business value that's promised. I've seen many organizations where it's a hundred percent. It's almost never a technical reason. It's almost always something at the organizational level.

[00:05:39] Jason Schlachter: So there was maybe a misunderstanding of what was expected for the project. There wasn't a deep enough vetting of the use case. There were maybe misunderstandings by the sales and marketing team, so they weren't able to sell it. The project was canceled at the last minute because of legal concerns, data concerns, contract concerns.

[00:06:00] Jason Schlachter: So AI Empowerment Group really addresses all those non-technical challenges by upskilling the workforce, getting them AI ready so they can make the right decisions, by holding workshops to help figure out which use cases are worth pursuing, building out the strategies to support that, and much more.

[00:06:18] Jason Schlachter: But that’s, a highlight. 

[00:06:19] Matt Paige: Nice. Yeah. Yeah. Awesome, Jason. So everybody listening, I wasn't lying when we said we had an AI expert. He's been in this game for a while; the hype around generative AI, he's been at it much longer than that. For those who don't know HatchWorks, we're your trusted digital acceleration partner delivering unique solutions to achieve your desired outcomes faster, really on a mission to leverage AI and automation,

[00:06:45] Matt Paige: paired with the affordability and scale of nearshore, to accelerate your outcomes. But Jason, I'm pumped about this conversation today. We're giving people a sneak peek into our generative AI playbook, but hitting on one of the most foundational concepts, which is how you actually identify and then vet some of these use cases.

[00:07:05] Matt Paige: But let’s, start at the foundation in order to start defining use cases. Let’s ground people in what generative AI is and what it isn’t to set the stage there. Awesome. Thank you, Matt. 

[00:07:19] Jason Schlachter: Yeah. So let’s talk a little bit about generative ai. Generative AI is a subset of the field of ai.

[00:07:25] Jason Schlachter: And the field of AI has been around for a long time, like thousands of years. And I know this sounds crazy when I say it like that, but I'm gonna back it up for a minute. So even going back to the biblical texts of the Old Testament, there are parts that talk about AI; they talk about people creating autonomous machines and systems that can do tasks, that can operate autonomously to take away the menial work that people don't want to do.

[00:07:53] Jason Schlachter: And they talk about these systems as created things that just don't have souls, don't have consciousness. And I think philosophically they were already addressing a lot of the use cases that we could even think about today. So thinking about the use cases for AI, for automation, for robotics, it's been happening for thousands of years, which I felt was shocking when I figured that out.

[00:08:17] Jason Schlachter: And so moving forward to today, the modern field of AI emerged in the 1950s. In the sixties and seventies it was research; in the eighties and nineties it was commercialized. It was already a multi-billion dollar industry in the eighties and nineties. I think a lot of people don't fully realize that.

[00:08:34] Jason Schlachter: And then of course, in the last 10 years or so, it’s really gone completely exponential. There’s been big data, deep learning, generative ai, adversarial networks. It’s just a full breadth of everything. And I think most recently we like to see things through our human eye like lens.

[00:08:50] Jason Schlachter: We anthropomorphize everything. So for the first time, like in a long time, it’s not some system in some enterprise that’s making some pricing decision. It’s this thing you can talk to and it talks back to you. And that’s scary and exciting and interesting. And I think that’s what’s driving a lot of the hype.

[00:09:08] Jason Schlachter: And it’s generating things. So for a long time, we’ve often said that creativity when, and creativity is hard to define, but like creating things is the human quality that machines will never have. And now they’re doing it. And so there’s questions like, what is art? What does it mean to compose something?

[00:09:24] Jason Schlachter: Who can win an Emmy? Who can win a Grammy? And so this is like really what’s, causing the hype? So, generative AI is artificial intelligence that generates content. And the kind of content it can generate in today’s world is text like, documents, words, phrases code, because code is text.

[00:09:46] Jason Schlachter: So it’s just a, certain type of text. It can generate images videos, 3D content, like for games it can generate music. You guys might have seen there was a Drake song that came out that was supposedly like, pretty popular, actually. Sounded good. 

[00:10:02] Matt Paige: Matt, did you, I’ve not seen that yet. Was it produced?

[00:10:05] Matt Paige: They did some generative AI to produce it. 

[00:10:08] Jason Schlachter: Drake didn’t produce it. Somebody else produced it, but it was Drake singing it. Oh yeah, Drake, he found out about it after it started becoming popular and it was like his voice and his style to his music and, somebody just basically trained a model on his voice, his style, and dumped it out there.

[00:10:26] Jason Schlachter: And there's just these questions of what does it all mean? It can generate speech and audio in that same use case. The other side of it is the very hard sciences. Generative AI can generate biochemical sequences like protein molecules. So it's very open in terms of what's possible. It is probability based.

[00:10:48] Jason Schlachter: It is based on deep learning architectures, which means that it's probabilistic. And I won't go into the technical side of exactly how it works, but it's not thinking and reasoning in a symbolic, causal way. It doesn't understand that if it rains today, the ground will be wet, in a very expressive way.

[00:11:11] Jason Schlachter: The way we understand that; it just has some numerical representations that are able to connect those concepts together. And so it might respond intelligently, but it doesn't actually think and understand in the way that we typically would expect. It also will reflect any kind of bias or flaws that are in the training data.

[00:11:29] Jason Schlachter: So if you had healthcare training data, and in that healthcare training data certain members of the population are not getting the care they need for societal reasons, not clinical reasons, and then you trained an AI system to make decisions about what care they should get and when they should get that care for the best outcome, that bias would pull forward into the model.

[00:11:52] Jason Schlachter: There are ways to mitigate the bias, but generally this is a challenge. If you have bias in the data, you have to account for it the best you can, and the bias will show up in the end. And so with generative models, it's the same. If we write with prejudice or bias or hate speech, it shows up in the generative models as well.

[00:12:12] Jason Schlachter: It also pulls us into the post content scarcity world. Like up until this moment we basically lived in a world where there was a limited amount of content. At some point it was hundreds of books in the world and millions of books in the world. Now there’s no number of books in the world.

[00:12:30] Jason Schlachter: There’s an infinite number of books in the world that can be generated on demand. And so that really changes the whole world in which we operate. 

[00:12:40] Matt Paige: Yeah. No, that, that’s awesome context setting there. But what was really cool was the, history dating back to biblical times. I was not aware of that.

[00:12:49] Matt Paige: That, that’s super interesting. But you, like the Drake example. You mentioned you can think of whole business models changing here. That’s a big piece of this. You also think of the accuracy of the data, and we’re gonna get into that in a minute when you’re talking about vetting some of the, viability of these use cases.

[00:13:08] Matt Paige: But I think one big piece of it is, with a hype cycle, you saw this in the dotcom boom, there's a lot of people with a hammer in search of a nail, right? Yeah. The hammer being generative AI. Yeah. Let me go find a nail, lemme go find something I can do with this. Yep. And back to basics. It's important to flip that and focus on the outcomes and relative use cases first, but maybe take us through how to think through some of the higher level business outcomes to start to bucketize where you can focus some of these generative AI use cases.

[00:13:42] Jason Schlachter: Yeah, absolutely. And, maybe I can start to Matt, with a, bit of the, why we’re going through this and what it means to find these use cases and, I’ll segue into, some of those. Okay. In this talk finding the use cases, validating the use cases I wanna talk about a couple like preamble type things.

[00:14:04] Jason Schlachter: So first, if you're out there with customers, if you're out there trying to solve problems, trying to figure out how to make your product better, trying to reduce your claims processing costs, you are the expert and you are the person that knows the opportunities and the needs that you could address.

[00:14:23] Jason Schlachter: And so in that sense, like you’re the perfect person to find the use cases for AI and generative ai, and it really is on, on, on your shoulders to elevate those opportunities and bring in the rest of the stakeholders. And so I think to do that, it’s really critical that you understand at a high level, at a non-technical level, like what is ai, what can it do?

[00:14:44] Jason Schlachter: What’s hype, what’s not hype? What are the opportunities and risks in, in pursuing this approach? How would I frame out and scope and describe this use case in a way that I could bring in the other partners? To be a part of it. And so there is this, ability for you to do that with a fairly basic understanding of how to think about these things.

[00:15:05] Jason Schlachter: And that's our goal here today, to get that basic understanding. And then if you think about finding the use cases, making the plans, there's a need to make a plan, there's a need to find the use cases. We don't plan to have a plan. We plan to get good at planning.

[00:15:22] Jason Schlachter: And the reason why is because your plan doesn't survive first contact with the customer, or, because of where I spent most of my career, first contact with the enemy. And so understand, right, I had to adapt as I shifted from the defense world to the consumer world. I had to change a lot of my phrases and sayings.

[00:15:42] Jason Schlachter: And this is one of them. First contact with the enemy to first contact with the, customer market. And I, and we live in this dynamic world. So in finding these use cases, like it’s not that there’s gonna be the perfect use case. Like the goal here is to get good at finding use cases, to get fast at validating them.

[00:16:00] Jason Schlachter: And trying them and learning. Because the faster you can do that, the better you'll be able to keep up with this exponential curve that's ahead of us. And then the last thing I wanna say is we are here to talk about generative AI because it is exciting and there's lots of things you can do, but for most businesses, most of the use cases for AI are not gonna be generative AI.

[00:16:22] Jason Schlachter: Like most of the business value is gonna come from the stuff that is not taking up all the headlines right now in the media. It’s gonna be pricing your products dynamically or better. It’s gonna be automating some of your internal customer service or claims processing. It’s gonna be facial recognition on your I don’t know, like your product, that makes something a little bit easier for your, consumer to, to log in.

[00:16:48] Jason Schlachter: So even though we're here talking about generative AI and it's very exciting, I just wanna put that in perspective, because you don't wanna be looking with this hammer for all the nails in your organization. This is just one tool, and it's a very powerful tool, and that's why we're talking about it.

[00:17:04] Matt Paige: Yeah. And I like the way, I like the way you framed it. It’s like building the muscle. That’s the essence. Building the muscle of how do you go through this process. To get to the end outcome that you want to get to. So that’s a, foundational piece of what we’re trying to do. Think of this as like a workout.

[00:17:20] Matt Paige: Y’all this is the, intro. We’re, the trainer. This is the beginning of the workout. Yeah. So and I think there’s different areas you can find opportunity, right? There’s internal areas, there’s external areas. It can be revenue generating co so there’s different focuses where you can start to think through where do you wanna focus some of these efforts.

[00:17:40] Matt Paige: But any thoughts on that? 

[00:17:42] Jason Schlachter: Yeah, exactly. There’s a great quote by Douglas Adams, which says that technology is oh God, I’m forgetting the exact verbatim, oh, technology is a word for something that doesn’t work yet. And I think it’s a great phrase because if we’re talking about ai, it means we’re not talking about a solution.

[00:18:05] Jason Schlachter: It's a technology, it's not a solution. And so we want to pivot to what solutions could be, right? So it could be optimizing your internal company operations, it could be improving a product or service for a customer, it could be optimizing your defenses, your cybersecurity, it could be improving your documentation.

[00:18:27] Jason Schlachter: So there’s all these different kind of use cases that are either optimizing your business or innovating your business, helping your customer in some specific way. And I think if you look at it at like the industry level we can dive deep into some more like industry level type stuff. There’s a lot of specific use cases at the industry level.

[00:18:45] Jason Schlachter: So like on the financial side these kind of models can be used for customer segmentation. You could custom, you could segment out customers by needs and interests. Targeted market campaigns. You can do risk assessment, fraud detection in healthcare. You can do drug discovery, personalized medicine, medical imaging.

[00:19:05] Jason Schlachter: On the manufacturing side, there's product design, there's manufacturing planning and quality control. On the technology side, there's more efficient coding, software development and processes, cybersecurity, automating data science. I'm just running through these; you don't, do you guys want to remember all these?

[00:19:22] Jason Schlachter: I’m just trying to give you like the shotgun view of oh my God, this is a lot because this is only a small bit of it.

[00:19:26] Matt Paige: There’s something you said just leading up to this, we chatted about, and there’s this sense where people can stay at the surface level of what AI, generative AI can do, but where you get the gold is where you focus into a specific domain discipline, where your area of expertise is.

[00:19:46] Matt Paige: That's where you find something unique. So it is important to think about within your industry, within your business, within the problems that your customers have. Yeah. That's a key element to where you're thinking how you can apply these things. And another thing, I heard someone talking the other day about when you're thinking about what you wanna roll out in a use case and all that.

[00:20:07] Matt Paige: Take the word AI out of it, and does it still have value? Yeah. Does it pass that smell test? Like you referenced the Google and Apple events recently, Apple didn't mention AI really at all, but it was foundationally in just about everything. 

[00:20:22] Jason Schlachter: Yeah. That’s a really stark, that’s a stark example of that.

[00:20:25] Jason Schlachter: Google talked about AI a lot. Apple didn’t talk about AI at all. And I think Google positions themselves to be a company that delivers AI as a tool, right? Like they’re selling ai as a solution. Apple doesn’t really try to sell you ai. Apple tries to sell you a good experience, a seamless experience.

[00:20:46] Jason Schlachter: So there’s not a strong need to talk about AI specifically. They might talk about like intelligent typing or smart notifications or something like that. And that makes a lot more sense. Matt, I think maybe if you want we could jump into some of these sort of questions that help. Yeah.

[00:21:03] Matt Paige: So, just to set this up, this is one of my favorite areas. So many folks I think get stuck early on thinking in an incremental nature versus kind of a stepwise, transformational nature. So Jason, take us through these questions. Great place to start if you're talking with folks in your business trying to facilitate an exercise around this. Take us through some of these questions and how to think through 'em.

[00:21:29] Jason Schlachter: Okay. Awesome. Yeah, so these questions are, very simple. They don’t even say anything about AI specifically, but they’re gonna help you get to the core of the use cases where you could deploy generative ai. And in a bit we’ll talk about and how you validate and assess those opportunities.

[00:21:44] Jason Schlachter: All right, so this is a question that I heard from some buddies of mine at Prolego, an AI consulting company, when we were talking about use cases: if I had an unlimited number of interns, how would I deploy them to maximize business value? So that's a question to ask yourself.

[00:22:01] Jason Schlachter: You have an unlimited set of interns, you’re in charge of them all. Where do you put them? I think like some people might be like, I don’t really know. Other people might be like, oh my God, yes. Like they need to go do this one thing for me. Because that will save my life, right? They need to go and sit in our call center because that’s where our customers suffer the worst.

[00:22:23] Jason Schlachter: Or you need to go and review all these claims because we’re six months behind on processing claims. If you can do that, then you can talk, you can find friction point or an opportunity that would benefit your company or yourself. And there’s some different variations of this question that I would ask too.

[00:22:44] Jason Schlachter: Matt and I were going over these earlier and just spinning up different versions that hit at like different slices. So another one would be like, in addition to unlimited interns, what if you had unlimited staff? So you manage a team of infinite you go from team of however many you have now, 5, 10, 50 people to unlimited people.

[00:23:04] Jason Schlachter: What would you have ’em do for you? 

[00:23:07] Matt Paige: Yep. Another way you framed it too, I think you said, what if I had a small country working towards a problem I had, just to put it in context. But what that's doing is it starts to sound kinda ominous. Yeah, it does, you're right. There's all kinds of dystopian stuff we could get into as well.

[00:23:25] Matt Paige: Yeah. But it’s it’s that reframing though, cuz it’s not so much about the people element of the resources. And that’s the beauty of starting to trigger some of these questions when you are de dealing with technology like ai, it takes some of those constraints out of the equation or it flips the script a bit.

[00:23:44] Matt Paige: So that’s the idea behind some of these. 

[00:23:46] Jason Schlachter: Yeah. And so I'm gonna continue this, Matt, with a few additional questions. Yeah. Keep going. We've gone from interns, so not super skilled but maybe very eager and capable, to staff who know what they're doing. The next one I wanna ask is, what if you had unlimited experts? You could bring experts from all fields to your team to help you. What would you have them do?

[00:24:09] Jason Schlachter: That takes it up a level now, because one of the things that generative AI can do is it can empower people to do things that they’re not experts in, but they can do with generative ai. So I’m not an expert painter, but I love art. I have a lot of ideas. I’ve seen art. If I can describe verbally my perfect vision for a painting, then I can use generative AI to create that painting.

[00:24:31] Jason Schlachter: And, it’s gonna look really good. It’s gonna look like a professional work of art if I do it right. So I’ve become like an expert in the sense that I’m now an artist. There’s probably a lot of philosophical arguments about did I create it really? And can I view myself as an artist?

[00:24:47] Jason Schlachter: But, practically speaking it will be difficult for people to differentiate between that AI created painting and someone creating a painting. So if you had experts, how would you use them? Okay, we’re gonna keep going. So 

[00:25:02] Matt Paige: we got some good questions popping in the chat. Not that we have to hit ’em right now, but there’s some Keep, keep ’em coming, y’all.

[00:25:07] Matt Paige: We’ll, try to weave some of these in a minute. Yeah. Keep hit, keep hitting the questions. 

[00:25:11] Jason Schlachter: Okay. Okay, here we go. All right, so this is one of my favorite ones. Up until now we’ve been focused a little bit on that internal optimization of my business, right? So how can you optimize your business internally?

[00:25:23] Jason Schlachter: Yeah, you could have used those interns to follow your customers around and give them an amazing experience, but it’s been a lot of like internal locus of, you. Yeah. Now we’re gonna shift it to external. So if you could give every one of your customers a personalized team of as many people as needed, five people, 10 people, a hundred people, and their sole job is to give your customer an amazing experience, what would that team be doing for your customer?

[00:25:49] Jason Schlachter: I think that is one of the most powerful things to think about.

[00:25:53] Matt Paige: That's powerful. Yeah. And that is taking it from incremental to potentially business-model-changing, disruptive use cases. And that's the idea of this exercise, right? It's starting to get into more of that blue ocean, starting to just generate the ideas, get them out there with a reframing. And heck, throw some of them into ChatGPT, give it some context and ask it; you can have it play a role in your facilitated exercise as well.

[00:26:23] Jason Schlachter: And so this is not to imply that generative AI currently can fill those needs. We're not meaning to imply that if you've created this imaginary team that you've given to your customer, and it's doing everything to make your customer have an amazing experience, that generative AI can then meet those needs.

[00:26:43] Jason Schlachter: Most likely it can't. The point, though, is that you're starting to get to the core of thinking with a different framing: I could write unlimited articles on behalf of my customer. I could book everything they need for the entire week on their behalf. I could go clothes shopping for them.

[00:27:02] Jason Schlachter: There’s a lot of things that you could do with an AI model that can generate things and also summarize and explain things and, represent design and stuff like that. So that’s the gist of this. And then there’s one more question, the kind of fear mongering question here.

[00:27:20] Jason Schlachter: And this is if your customer if your, sorry, your competitors, if your competitors could do the same, your competitors had unlimited staff, they probably would use that staff to make great customer experiences as well. But I’m gonna frame it in an adversarial way. If they use that staff to put you outta business, what are they gonna do?

[00:27:42] Jason Schlachter: So now you have to think about this because. Most of your competitors won’t do that, but the best of your competitors will be doing that. They’ll be thinking through these potential use cases and when the technology is ready or when it makes sense from a value perspective to apply the technology in that way, they’ll be ready and waiting to do that.

[00:28:02] Matt Paige: Yeah, and this is the one, it when we were talking about these, it hits a different area of your brain when you frame it from oh shit, the competitor’s trying to put me out of business. What are they gonna do? And it, does, it gets you to think about it from a different lens in a lot of ways.

[00:28:19] Matt Paige: So those are the framing questions in essence. So this is all about idea generation, reframing how you think about things. The last point you made was interesting too. Even if it’s not perfect right now, you can still begin testing and playing around with this cuz things are progressing at a rather Alarming, crazy, whatever adjective you want to add rate right now.

[00:28:42] Matt Paige: So what may not be possible today could be possible in three months, six months, a year, five years. So it’s core that you start thinking about this now and how it’ll impact your business model, how you operate, 

[00:28:57] Jason Schlachter: right? Yeah. Cuz it’s not the technology that fails to deliver value in almost all cases.

[00:29:01] Jason Schlachter: It’s, yeah it’s the, system point of view. It’s the organizational failure. You, your organization and your team should be able to, frame these opportunities in the right way and, be data and AI driven in their thinking process so they can act fast.

[00:29:20] Jason Schlachter: Because when that new capability emerges, and we saw it when GPT-4 hit the market, there were some companies that overnight had applications. Some of them were bogus and borderline fraud, but those have fallen away. And now we see Adobe deploying image creation models inside of their Adobe platform so that you can completely generate a new background for your foreground.

[00:29:46] Jason Schlachter: Or you can erase an object and then ask it to generate a new object. And it will do that in, the application. So those are starting to become more more, mainstream for sure. 

[00:29:59] Matt Paige: Yeah. That's one we're playing around with at HatchWorks right now. I think Firefly is the name of the Adobe product, similar to a Midjourney or something like that, but it's within the Adobe ecosystem.

[00:30:10] Matt Paige: I think this is where we'll start to transition. Some of you have ideas, you have a list of ideas you've generated, but how do you begin to test and vet the viability of, we should do these over those? That's one of the most important things: how do you start to prioritize some of these use cases? And there's a bit of a, call it a rubric or

[00:30:35] Matt Paige: analysis you take it through. So Jason, start to take us down this path of how you begin to weigh and prioritize some of these ideas. 

[00:30:44] Jason Schlachter: Absolutely. And Matt, let’s stick, let’s throw up our our, use case that we’re gonna use to, to illustrate some. Yeah, let’s do it. Go through it. 

[00:30:52] Matt Paige: Okay. So yeah you, set it up you got the, real story.

[00:30:56] Matt Paige: And I’d say too, we got a couple, there’s one related to the stock market. There’s one related to chat with a customer, interactions. We may play around with a couple of those later, but yeah, let’s hit the, main one. Jason’s taking a big trip in about a week or so. So Jason set up the use case for us.

[00:31:14] Jason Schlachter: Okay. Awesome. Yeah, and Matt let’s, make sure we get those questions in too. I, so the use case I am, I’m most focused on right now is travel. So I’m, heading out to Japan in a bit with the family and trying to book our, travels and I, want to be on the edge of the like touristy kind of stuff.

[00:31:34] Jason Schlachter: I don't want to be deep in it, and so that means I'm looking for experiences that are just a little bit off the beaten track. And so booking hotels, looking for national parks, trains, buses; it says that the hotel room sleeps four, but I only see two beds.

[00:31:53] Jason Schlachter: Are they charging us extra for kids? Like all this kinda stuff. And it’s a huge amount of time to really dig into it if you wanna make it right. And I don’t really want to hand it off to a travel agent because. I, like the idea of being in the details. I like the idea of having the controls.

[00:32:08] Jason Schlachter: But with Expedia or Priceline or TripAdvisor, what I'm having to do is break down the larger itinerary in my own mind and research all these different places, most of which I'm not gonna go to, some of which I don't really understand, and then look at all the individual things.

[00:32:25] Jason Schlachter: Can I find a train from point A to point B, and what does that mean and how much does it cost? And where do we put our luggage, and can I find a hotel in this city? And I don't really know which district to stay in, all this kind of stuff. So, if I had the ability to give myself a team of staff that were gonna work on my behalf, as a generative AI might, I would wanna say to the generative AI, I'd like to take a trip to Japan with the family.

[00:32:51] Jason Schlachter: We want to be outdoors hiking. We want to get our hands dirty doing archeological digs. We want to take lots of photos. We wanna be at local cultural events. We want to be at the Gion Festival in Kyoto on these dates, and Super Mario World is super important to my kids and to me, so we wanna go to that as well.

[00:33:12] Jason Schlachter: Give me some itineraries, figure out all the connection points, show me cost structures, and explain to me which ones are better than the others and why. From that, it's as if I had my own executive team working on this for me, and then I could look at it and I could say, this looks cool, but I don't want to go there.

[00:33:33] Jason Schlachter: Or I could even query it like, hey, why are you putting me in this city? I didn't ask for that. And it could even respond with, we found that people like you who have gone to Japan and visited this city really enjoyed it for these reasons, and it fits comfortably with your schedule here and there.

[00:33:49] Jason Schlachter: It would just be a very easy conversation. And from, let's say, a TripAdvisor perspective, it's all AI driven. There are no customer service agents, it scales, there's no wait time. So that's, to me, a use case that very clearly is gonna become dominated by generative AI.

[00:34:09] Jason Schlachter: Yeah, there's one catch, and we'll get into this in a moment when we weigh things: it needs to be right. I do not wanna be stranded with two kids at a bus station, with a hotel that only sleeps two people even though it booked it as four. And that's where generative AI is not so great.

[00:34:29] Jason Schlachter: And so we’ll talk about that. 

[00:34:30] Matt Paige: Yeah. What are the stakes? And first of all, I’m jealous of the trip. That’s awesome. You’re getting to do this. But we can put ourselves in like the seat of Expedia or a company looking to disrupt Expedia. How should they be thinking about this? And, frankly yeah, Expedia should be very wary cuz this, is the type of emerging technology that literally could upend an entire business model.

[00:34:53] Matt Paige: And just as an aside, we got an episode coming out later with Andy Sylvestri, who leads up our design practice. There's potential for this shift from an imperative to a declarative approach, a point-and-click approach to a declarative one where I'm talking and interacting

[00:35:12] Matt Paige: with the interface, so it changes how user interfaces are designed. So be on the lookout for that. It'll be coming out in a few weeks. But Jason, start to take us through. You just set up the context. What are the different dimensions that you can start to weigh a use case on

[00:35:32] Matt Paige: to determine how viable it is. 

[00:35:34] Jason Schlachter: That's right. Okay. So we'll start with business value, but we'll keep it really short, because business value is something that is well studied. So you want to be able to assess the business value. To assess the business value with generative AI,

[00:35:49] Jason Schlachter: you may wanna rapidly prototype, you may wanna do Wizard of Oz kind of things where maybe you give a customer a chatbot and you label it, so that it's very ethical and transparent that you're talking to a generative AI bot. And it's very expressive.

[00:36:05] Jason Schlachter: It can look through all the documentation, all the manuals. It's not just dumping technical information to you; it can reformat it and answer your questions. But at the same time, while you have this whole thing labeled as AI bot driven, what you could really be doing on the backend is having some of your expert customer service people quickly typing stuff out.

[00:36:26] Jason Schlachter: And so you haven’t really implemented anything technologically, but you’ve started to assess the viability of a customer accepting that they’re gonna engage with an AI and understanding how they engage with an ai if they structure their queries differently if they scope their, requests differently.

[00:36:44] Jason Schlachter: So that’s an example of the business value where you could start to get to it. Next, and this is a really big one, Andy. This is, or sorry, Matt. We do have Andy on the call. He’s MCing it all in the background. Matt, a really big one is fluency versus accuracy. For these generative models, fluency means generating content and, that’s what they do.

[00:37:05] Jason Schlachter: They generate content really, well. Accuracy means that the information is factual. And so if you asked a generative AI model, text-based generative AI model to help me write a short story about, and you explained what you wanted to write, it could dump a story to you. And it’s probably gonna read really well.

[00:37:23] Jason Schlachter: It’s probably gonna be great for creators that need help structuring their content or want to add some details to their content. It just really speeds up that kind of workflow. In that case things like hallucinations, which is a term for when generative AI models say things that aren’t true.

[00:37:43] Jason Schlachter: There's a lot of technical reasons why that happens, but they do that. In that case, it's okay, because fantasy, creativity, abstract thoughts, those are all interesting aspects of a short story. But if you have an agent that's meant to give you medical advice and you're asking it, do I go to the hospital, what's going on with me? You really want it to be accurate, and it's not as important that it generates creative content.

[00:38:13] Matt Paige: And this is a new kind of dimension I feel like with AI and generative ai, the importance of this one moves very high up the list of considerations where it wasn’t as nascent as a concept.

[00:38:27] Matt Paige: I think in the past you mentioned business value, that's still critical, always gonna be there. Yeah. This one's interesting cuz it can go rogue, it can hallucinate like you mentioned. And what is the risk or the outcome if something goes wrong? Yeah, 

[00:38:45] Jason Schlachter: So you have to, think about your use case.

[00:38:48] Jason Schlachter: Is it a use case that demands fluency? In which case it's something that you can address more easily with the models. And if it's accuracy, there are ways to mitigate this. If you do demand accuracy, you're able to train models on your own, you're able to tune some of the existing models.

[00:39:09] Jason Schlachter: So there are foundational models emerging for generative AI. These are like OpenAI's GPT-4, but also Google has Bard, Meta has, I think, Llama, so a lot of these companies are building their own models. These are foundational models.

[00:39:29] Jason Schlachter: They have very large representations of language and semantics. And then they layer on top of that with ability to be prompted and respond appropriately. So these are models that you could use off the shelf for some of your business use cases. And if fluency is your goal, those are probably great fits.

[00:39:48] Jason Schlachter: But if you have a, need for accuracy, you may need to tune them on your own data. And so this is where you start to, to ask yourself, do I have enough data to do that? So it wouldn’t be impossible to generate a model that answers medical questions. It’s a great use case for generative ai if it is highly accurate and probably highly regulated.

[00:40:13] Jason Schlachter: Yeah. Maybe even reviewed by a clinician in certain, in, in certain or many use cases. 

[00:40:19] Matt Paige: Or if it reaches a state of getting into unknown territory, can the model be geared, in a sense, to where it's not spitting out a random response, but it is saying, I don't know? There's that element of it as well, which, how do you start to actually monitor that, may be a bigger, totally different problem.

[00:40:44] Jason Schlachter: Yeah. There's not a lot of, well, self-reflection is a challenge right now for these models. They know everything, even when they don't, because of what's in their mind. Yeah, exactly. They've been trained on a certain set of world data and they have a partial understanding of that data.

[00:41:01] Jason Schlachter: Yeah. And they look pretty convincing when they talk about what they know. But when they're asked to talk about something they don't know, they don't necessarily say, I can't talk about that. They try to answer it in the context of what they do know, because they have partial understandings of what they do know.

[00:41:18] Jason Schlachter: There’s not like a, an explicit like expressive representation of these concepts and some kind of logical reasoning and causal kind of way. It’s all very probabilistic. You get very weird emergent phenomenon because you can find weird edge case paths through the, probabilities of these models.

[00:41:35] Jason Schlachter: So fluency and accuracy is, a cornerstone of how you should think about your use cases. The other really big one is low risk and high risk. We talked about this just a moment ago, but what’s a low and high risk? Like Expedia sending me to a foreign country with my family and telling me to go stand somewhere on a corner because there’s gonna be a bus and there isn’t, is high risk, right?

[00:42:00] Jason Schlachter: But me jumping onto T-Mobile's website and asking a question in natural language and getting back a personalized explanation? That's pretty low risk. Especially, and this is interesting, you can do retrieval-augmented training on these models where, in order to suppress errors and to build confidence for the user, you can force it to only say things that it can back up with a document that's retrieved.

[00:42:28] Jason Schlachter: So in that case, okay, it could pull up some kind of knowledge base article that exists in T-Mobile's data set. And it could say, this is the thing I found, but I'm not gonna make you read it. Here are the two sentences that directly answer your question. But if you need to dig deeper, this is the document that I used to generate this answer. 

[00:42:52] Matt Paige: And this is taking it a step further than just, let's get the OpenAI ChatGPT API and integrate it. Now you're starting to weave in some of your own company's data and information to enhance

[00:43:07] Matt Paige: the experience, the model, all of that. So that's upleveling it a bit versus just slapping AI on your product, service, or process. 

[00:43:19] Jason Schlachter: Yeah, exactly. And then that’s a fundamental question too there’s a lot of use cases you can unlock with off the shelf stuff, but there’s a lot you can do to tune these models.

[00:43:27] Jason Schlachter: And so when you tune these models, do you have the data? Let's talk about if you're tuning them. So if you're tuning a model, why would you do it? You might do it because you need more accuracy, in the kind of case we explained. And in that case, you need to ask yourself, do I have the data to tune it?

[00:43:43] Jason Schlachter: And so what do you need to tune it? You need your own documents that represent the knowledge sets and the way of speaking about the things you care about. So in T-Mobile's case, it could be their knowledge bases, their technical documentation. You may also need prompts and answers.

[00:44:05] Jason Schlachter: So one of the ways these models get built is a very labor-intensive step where people literally write out a prompt and then write out an answer, and then they show the model both. And they use those to train the model as to, this is what a good answer to this prompt should be. And some of these bigger companies like Google and Microsoft have thousands, if not tens of thousands, of people employed full-time writing prompts and answers.
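For illustration, this is roughly what a small batch of human-written prompt/response pairs might look like when saved for supervised fine-tuning. The JSONL field names are one common convention, not a requirement of any specific tuning service, and the example content is invented; check your framework's expected schema before reusing this shape.

```python
# Sketch of prompt/response pairs for supervised fine-tuning, written as JSONL.
import json

examples = [
    {
        "prompt": "A customer asks: 'Why is my bill higher this month?'",
        "response": "Bills can increase when a promotional credit expires or when "
                    "usage-based charges such as international calls apply. Check the "
                    "'Adjustments' section of your statement for the specific line item.",
    },
    {
        "prompt": "Summarize the steps to port a phone number from another carrier.",
        "response": "1) Keep the old line active. 2) Get the account number and transfer "
                    "PIN from the current carrier. 3) Start the transfer during checkout.",
    },
]

with open("tuning_data.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")  # one training example per line
```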

[00:44:28] Jason Schlachter: It's a very labor-intensive part of the process. So that might be something you do to tune a model. The other reason you would tune a model, if not for accuracy, might be performance. So maybe you don't need a huge model. Maybe you can run with a really small model that takes less compute. You can run it locally, on a device, or it just costs less.

[00:44:50] Jason Schlachter: But you need to tune it because you're building an auto mechanic helper, a generative AI system that helps your auto mechanic so that, rather than reading car manuals for cars he hasn't worked on for a while, he just asks the question and gets the immediate answer with a reference back to the manual pages or something like that.

[00:45:08] Jason Schlachter: In those cases it could be small, it could run on device. So those are some considerations there. And then the other piece here is what's defensible and non-defensible. Is it important for you that the model you're using and the use case you're building is defensible from a business perspective?

[00:45:30] Jason Schlachter: So let's get back to the travel example. Would it be defensible if TripAdvisor built that capability? I'm gonna pause and throw you the question.

[00:45:44] Matt Paige: Yeah. And folks in the audience too, if y'all want to answer. You know what's interesting? If you could simply do the same thing referencing ChatGPT or some large language model that's open to the public, I'd say no.

[00:46:00] Matt Paige: It changes the whole business model and defensibility of their business. Now, if it's leveraging, to your point, data that an Expedia or a TripAdvisor has that they can supplement into the model, then I think it does begin to have an element of defensibility. But what's your take? I'm curious.

[00:46:22] Jason Schlachter: Yeah, it would have to leverage custom data from TripAdvisor. They're not gonna get anything that's capable of doing that kind of use case off the bat. They're gonna have to spend a lot of time and a lot of money leveraging their own data to tune those kinds of models. And even then, I think it's really gonna struggle with being accurate.

[00:46:43] Jason Schlachter: Cuz there's so many connection points, right? Transportation hubs, hotels, sites. But if you think about what they have, they can trace member trajectories through cities and tourist areas and restaurants. So I do think there's a lot that they probably could do. I think it's partly defensible on the model basis.

[00:47:03] Jason Schlachter: Yeah. It's only partly defensible, because Expedia might be able to do the same. Priceline might be able to do the same. Booking.com might be able to do the same. I would argue that there are nuances that TripAdvisor captures, like extensive photos from users, and it's very multimodal: hotels, cars, hiking, restaurants.

[00:47:29] Jason Schlachter: Everything, it's across the board. But I think even if it's not fully defensible, they still need to do it to be competitive in their industry space. So it's somewhere between differentiated and highly defensible; competitors might be able to do the same, but maybe not quite in the same way.

[00:47:49] Jason Schlachter: But I think ultimately what's interesting is that non-defensible doesn't make it bad either. Things can be very high value but non-defensible. So in this case of TripAdvisor, it might be that the model is non-defensible. It might be that they can build this model, but so can every other travel service.

[00:48:11] Jason Schlachter: So then there are other levels of defensibility, right? Use cases and business models were defensible before AI came along. So what other ways is it defensible? It could be that their brand alone is helping to make it defensible. I don't necessarily want a startup, an AI startup, even if they're well-funded, sending me and my family off to Japan for a while. I might not trust it.

[00:48:37] Jason Schlachter: I would much rather go with a TripAdvisor. It might be defensible in that they have partnerships and integrations in a way that this actually works, right? Because the rubber still has to meet the road if they're gonna book these itineraries. So there may be other ways to make it defensible that aren't the model.

[00:48:55] Jason Schlachter: So I think when you think about these use cases from a business perspective, a defensible model is great if you can do it. But you're not gonna get a defensible model without spending a lot of money and having a lot of data. So it may not be critical.

[00:49:07] Matt Paige: I think it deals with whether it's connected to your inherent value prop or the customer-facing side of the business model itself.

[00:49:18] Matt Paige: Then this defensibility question becomes really important. But you mentioned brand actually is a differentiating thing. Now I'd say for most folks it's at the level of the Apples and the big ones where you see that brand defensibility truly shining through. But that's a critical piece of this.

[00:49:38] Matt Paige: I've seen there are websites that track how many AI startups are being created every day. And there are some where they're literally just putting a skin on top of a foundational model and there's no inherent defensibility to it. Somebody could have spun it up over the weekend, and it's like, how do you weave through that? In essence, is there substance behind it that makes you unique?

[00:50:09] Jason Schlachter: Yeah. That's an interesting example, cuz those companies were serving a market need in some ways. In the very early days, the average non-technical person probably didn't know what OpenAI was, didn't know they had a website, didn't know they could go to the website and subscribe to their model; they just saw it in the news. And then they get a friendly cartoonish bot popping up in their iPhone ads saying, here's access to the model. That was a marketing niche that OpenAI was neglecting.

[00:50:40] Jason Schlachter: I think they're picking up on that now.

[00:50:42] Matt Paige: A great example is the ChatGPT app. They didn't have an app for a little while, and there were competitors that created an app just leveraging ChatGPT. And they were able to get some amount of, probably actually crazy, scale. But then OpenAI came out with their ChatGPT app.

[00:50:59] Matt Paige: And that probably just completely killed their whole business model. So that's the whole defensibility piece: how easily can a competitor come in and just take it over?

[00:51:08] Jason Schlachter: Exactly. Okay, so there are two more things I wanna touch on here. One of them is whether it's internal or external facing. This kind of relates to risk, but it's not directly related to risk. So think about internal versus external. If you're using it to create, and this is really where these generative models have the most value, to create content where fluency is the highest need and risk is low.

[00:51:34] Jason Schlachter: So this internal-facing use case of, help me compose emails to my colleagues faster, or help me create marketing content that I can post online faster, or generate blog posts for me that I can just tweak and send out, or summarize this document that I received from one of my partners, or explain this chain of emails to me. Those kinds of things can really boost productivity.

[00:52:05] Jason Schlachter: They're fairly low risk. There's a human in the loop. Human in the loop is maybe the magic word here. If there's a human in the loop and it's just proposing information or helping to accelerate something, it's low risk. Those are often internal facing. But when you're customer facing, there are higher risks.

[00:52:21] Jason Schlachter: So that's another thing you wanna consider too. And then part of that is the AI ethics component. So in all of this there's a need to consider the ethical implications of using AI models, even in your own business, but especially if they're affecting customers.

[00:52:41] Jason Schlachter: At Elavance, we were building AI models for healthcare and we were impacting people's ability to get care with those models. Our intent was to improve their health outcomes and to make things better. But things can go wrong. And even when they go right, there are always risks that you have to assess.

[00:52:59] Jason Schlachter: And so we would hold these ethics workshops, and the idea here is to dive deep into what it means to build this. I'll spend a moment on that, Matt. I think this happens really early on; it's not something you do at the end of your use case pitch when you've got your funding and you just need to move forward.

[00:53:24] Jason Schlachter: It's really early on in the process of assessing the viability of the idea and the business value. And so there are ethics workshops you can do where you work with a team of stakeholders, and you start off really small, low overhead, an hour or two, get the basics.

[00:53:42] Jason Schlachter: And as you grow your business case and your plans and your funding, that's when you start to land more and more layers of this. And this is actually something that we do for our customers. We help them work through these kinds of ethics workshops, where you want a third party that has experience running these and understands how things go wrong to run this internally.

[00:54:03] Jason Schlachter: And so you look at your users, you look at your stakeholders, identify all the stakeholders, and you try to understand the values and the interests that the users and stakeholders will have. What kind of tensions might arise? How are you gonna test your assumptions? Do you think about the impact you could have, the changes in behavior that might emerge?

[00:54:27] Jason Schlachter: A great example for me is cars. Atlanta, where we live, was built after the invention of the car, primarily because the original Atlanta was burned down and they rebuilt it really after cars came to be. And at the time the mayor of Atlanta said, I dream of building a city that is a car-first city.

[00:54:49] Jason Schlachter: And that seems like anathema to us today, but that was the AI of the time. They wanted to build a car-first city, the AI-first city of its day, right? And now Atlanta's really difficult to walk in, traffic is bad, congestion's bad, and we're slowly peeling back the layers of that a hundred years later.

[00:55:07] Jason Schlachter: So that's an example of changes in behavior. If there had been an ethical review committee for the car-first city, maybe some of those things would've come up. There are also things like the group interactions that emerge, how it affects groups. There are questions around data and privacy, and explainability.

[00:55:29] Jason Schlachter: So if a model is impacting your life, you should be able to understand why it's making those decisions. We don't want to take the distributed bias and distributed failures of our current business ventures and centralize them in a way that nobody can question or understand.

[00:55:47] Jason Schlachter: There are questions around: do you have a human in the loop? How do you monitor performance? How do you mitigate things? How do you get feedback? All these kinds of things are discussion points, like what is fairness? What does it mean to be fair in this use case? This is part of the validation cycle, but you start light, an hour or two on the first pass, and by the time you're funding a big use case in a big program, it should be very rigorous. There should be processes in place, accountable stakeholders, and all that stuff.

[00:56:17] Matt Paige: No, that’s awesome.

[00:56:18] Matt Paige: And a great example of something that can be facilitated with AI Empowerment Group and HatchWorks there. So we've got about five minutes. I'm wondering, Jason, if we could jump into some of these questions and topics in the chat, if you're up for it, unless there's something else you wanna cover. That's it.

[00:56:35] Matt Paige: That's great. Yeah. Jacob had one: does anyone use AI for scheduling appointments? I don't specifically know of a tool. I'm sure there are several folks trying to achieve this, but this is a perfect example of a use case where you could disrupt a Calendly or products that exist out there.

[00:56:56] Matt Paige: How could that impact that workflow? I need to schedule appointments, plan out my day. I don’t want to be the person having to reach out to somebody and say, Hey, does this time work? Does that time work? Jason, that was an interesting one. Any thoughts on that? 

[00:57:11] Jason Schlachter: Yeah, I think there are use cases like Calendly that do that today.

[00:57:16] Jason Schlachter: And I think there are other AI startups out there that do something similar. But I guess I would challenge the notion of what it is, what is the real task that you want done or that I want done? It's not strictly that I wanna schedule the meeting with Matt and so I want Calendly to go figure that out for me.

[00:57:38] Jason Schlachter: That's still at that process level where I have to get it done. I would love to just have a more robust agent where I say, hey, I want to talk to these 10 people this week, go figure it out. And then Matt gets an email from Calendly saying, hey Matt, Jason has identified you as somebody he'd like to speak with this week.

[00:57:56] Jason Schlachter: What is your availability? 

[00:57:59] Matt Paige: And what you just did there is you took the question from earlier: if I had a team or a staff that could go and do this, how would they solve the problem, versus me having to be the main point of failure, the bottleneck in the process. That's a great example of how to reframe how you think about a use case.

[00:58:19] Matt Paige: I like that Chris is creating movie scripts about Batman's early days, which is funny, but it does change how that whole industry works, potentially, from a creator perspective and all of that.

[00:58:36] Jason Schlachter: So for people who are not deep into Stable Diffusion or DALL-E, there are models out there right now, generative AI models, creating movies.

[00:58:47] Jason Schlachter: And writing the scripts for those movies. And so it's emergent. I believe in the next year we're gonna see TV shows where the script has been written and the actual animations have been completely created by the AI. They may not be successful, I don't know, but it's happening.

[00:59:06] Matt Paige: But this is one of those big transformational, disruptive types of things. You think back in the day to music going digital, same kind of thing. And there are gonna be the movies, the studios, trying to fight this change of AI and generative AI playing a role.

[00:59:21] Matt Paige: But it has the feeling of something similar that's happened not too far in the past.

[00:59:26] Jason Schlachter: Or what if it's, make me a commercial that's gonna cause people to hire AI Empowerment Group to help them with AI strategy, create music for it, like some kind of amazing techie humanistic background, write the script, and then use my voice to create it.

[00:59:44] Jason Schlachter: Because it can speak like me, cause it's trained on my voice, it will just speak for me. Yeah, it's possible.

[00:59:51] Matt Paige: Yeah. And Klaus brings up an interesting one: how can businesses leverage the potential of utilizing ChatGPT to enhance customer interactions and streamline various business processes while ensuring data privacy and compliance?

[01:00:05] Matt Paige: Yeah. Particularly when it involves sending data via the API back to the OpenAI cloud. I think this is an inherent risk type of aspect.

[01:00:15] Jason Schlachter: This is a good one. So Klaus, you mentioned you're with a German company, and the EU is passing measures to require that any use of generative AI be approved by committee and be licensed, I believe.

[01:00:32] Jason Schlachter: And I think we're gonna continue to see pushes for that. I don't necessarily think that we should be regulating generative AI, or AI, at that level in the broad sense. I think there are specific use cases that should be regulated, just like we regulate food or drugs with the FDA.

[01:00:51] Jason Schlachter: Certainly in certain domains where there's a need for precision, it should be regulated. But I think a lot of these startups with low risk should be able to get out there and do it. But in Europe, you're probably gonna be faced with that challenge. One way to mitigate what you're asking about is not to send it to OpenAI.

[01:01:08] Jason Schlachter: Run your own models, run them in your own cloud, host it in your building, push it to the end user, run it on their client machine. In doing so, you're not sending their data to OpenAI. There are open source models that are emerging in generative AI, and some of them are pretty mature.

[01:01:32] Jason Schlachter: Stable Diffusion is a great example. It's a first-class generative AI model that's open source. There are a lot of large language models and ChatGPT-type capabilities on the open source side. I'm a firm believer that the open source models will overtake the closed source models given time.
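As a rough sketch of the "run it yourself" option, the snippet below loads an open-weight model with the Hugging Face transformers library so prompts stay inside your own environment. The model id, generation settings, and hardware assumptions (a GPU plus the accelerate package for device placement) are illustrative choices, not a recommendation.

```python
# Sketch: running an open-weight chat model inside your own infrastructure so
# prompts and customer data never leave your environment. Assumes the Hugging
# Face `transformers` and `accelerate` packages are installed.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.1",  # illustrative open model; swap in one you are licensed to run
    device_map="auto",                           # places the model on a GPU if one is available
)

prompt = "Explain in two sentences what data residency means for an EU customer."
result = generator(prompt, max_new_tokens=120, do_sample=False)
print(result[0]["generated_text"])
```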

[01:01:52] Jason Schlachter: So yeah, you may not have to...

[01:01:56] Matt Paige: There's even a leaked document, I think from Google. I believe it was real, but they were cautioning about this exact thing internally. And it's funny, they call themselves OpenAI, but it's not really open per se. But you look at Meta taking that strategy, and there are other foundational open source models out there, and they have the potential to overtake things that are being developed internally.

[01:02:23] Matt Paige: Yeah, 

[01:02:23] Jason Schlachter: Meta's a great example. So OpenAI was originally founded with Elon Musk and others to open source these AI models so they wouldn't be closed source and then strong-armed or overtaken. Now Microsoft owns it and they make them closed source. Meta, and Zuckerberg, has surprisingly shown up to be the big open source creator of these models.

[01:02:48] Jason Schlachter: And I think from a business strategy it makes sense. Google's playing to win, Microsoft's playing to win. They want to be the winners in this generative AI race. I don't think Meta wants to do that or necessarily needs to do that. They're playing to not lose. If they raise the water for everybody, then everybody is okay and nobody loses.

[01:03:11] Jason Schlachter: And I think that's Meta's play. And that's a good strategy against these two giants that are dumping all their money into it.

[01:03:18] Matt Paige: So there's the network effects element there too, right? If they're at the foundation of it, it kind of raises their business.

[01:03:27] Jason Schlachter: It happened with Stable Diffusion. There are thousands and thousands of versions of Stable Diffusion being spun up because it's open source, and DALL-E has its own trajectory.

[01:03:37] Matt Paige: Yeah. We are at time. We could go a little bit longer. Just to close it out though, I love the last comment there, that these tools are an expansion of your imagination.

[01:03:53] Matt Paige: Totally agree. One of my favorite uses of ChatGPT is telling it to graphically describe any concept; it's a great foundation for any type of media creation. But it's an interesting concept, it's like that co-pilot, and it's a whole nother topic. Yep. Matt, there's one. Yeah, we can keep going.

[01:04:11] Matt Paige: Let me do just the call out and then we can stick on for another couple of minutes. But yeah, like we mentioned earlier, HatchWorks and AI Empowerment Group are partnering together. So these types of custom workshops are the exact type of thing we can take your organization through.

[01:04:28] Matt Paige: Jason, you mentioned the ethics-based workshop; this is the part where having an expert is critically important, so hit up Jason or me and we can help get that facilitated. But any other closing thoughts? And then maybe we can jump to a few other chat items.

[01:04:45] Jason Schlachter: Yeah, Matt, totally agree. I love the ideation process, the creative problem solving piece, and I love hearing about the kinds of problems that are real and concrete. Those kinds of opportunities would be a lot of fun and productive for both of our organizations.

[01:05:00] Jason Schlachter: So hopefully we'll hear from some of you. I would love to pick up this one question from Monica Lapera, which is: the biggest fear for some people is that AI can replace some jobs or even professionals. How do you balance the pros and cons that AI brings to the world? A great question.

[01:05:18] Jason Schlachter: We're not gonna answer it in the last moment here, but I think it's a great question just to surface, because there is immense responsibility. This is really the dawning of an age in which how we work and how we live and how wealth gets distributed and who has what is gonna dramatically change.

[01:05:37] Jason Schlachter: And there's a lot of hype out there. Generative AI is not everything it's hyped up to be, and it's gonna take a long time for a lot of these things to happen. But the reality is that we over-predict the short-term change and under-predict the long-term change.

[01:05:54] Jason Schlachter: And so this is a great question to surface, and I think we just have to really be deliberate in the ethics of all this and try to build the world that we wanna make, and not just the world that we can.

[01:06:06] Matt Paige: They're just tools, I'd say too. It's, do you have the opportunistic mindset or the negative or positive one? I'm forgetting the correct terminology here, but think about 20 years ago: a large portion of jobs that exist today did not exist previously.

[01:06:26] Matt Paige: So a lot of times transformational, disruptive things like this create new opportunities we don't even know exist yet. So I think this is one of those things that has that potential as well. Even though it may be replacing some jobs, I think it's gonna create a whole host of new ones in the process.

[01:06:43] Jason Schlachter: Absolutely. And a lot of what it's gonna do is not replace jobs, but replace tasks. So if you're a medical claims reviewer, and I'm just taking a wild stab in the dark here, you might not love reviewing medical claims. Maybe it pays well, and you have some training that makes you appropriate for it, or it's easier than being out on the ER floor all night.

[01:07:06] Jason Schlachter: But you may not love all aspects of medical claims processing. And so this is where I think AI can remove some of the burdensome tasks that you don't enjoy so that you can focus on the stuff you do enjoy. What if you could focus on the really interesting clinical challenges or the really puzzling situations, and not the mundane minutiae of comparing numbers or checking dates or understanding the timelines and stuff like that?

[01:07:32] Jason Schlachter: So I think for a lot of people, for most people, it's gonna remove the mundane, more automatable tasks but not their jobs. There certainly will be people whose jobs are lost. But like you said, it's always changing.

[01:07:49] Matt Paige: Yeah. It's like back to jobs-to-be-done: communication is a job that's existed for a very long time.

[01:07:54] Matt Paige: From talking, to physical mail, to email, to Slack, and it keeps going. The job remained the same; it's just how you did it that changed. And just the last one, because Chris is hitting on it: how's it gonna impact the stock market? Anything being done to regulate that? I'd say, I don't know.

[01:08:15] Matt Paige: I think there's a lot of stuff already being done today leveraging AI in terms of stock trading, and that's already prevalent in a lot of ways today, but I don't know. Any thoughts there to wrap us up with the last kind of Q&A question?

[01:08:30] Jason Schlachter: Yeah, I would imagine that most stock trading right now is already done by AIs.

[01:08:35] Jason Schlachter: So maybe the question is, if the AIs get better, what does it mean for us? I don't know. The only sage advice I can give on that is put your money into an index fund and forget about it. Anything else is a gamble, whether it's AI driven or not.

[01:08:54] Matt Paige: That’s right.

[01:08:54] Matt Paige: That's right. Cool. I really appreciate you being on, Jason. Thank you, everybody that came and participated. We will be putting this out there on the podcast and sending out the recording to everybody that joined. And we got a few good takeaways, I think, in terms of templates and things we can share from this talk as well.

[01:09:15] Matt Paige: But really appreciate the time, Jason and everybody. Have a good rest of your day. Thank you, Matt. 

[01:09:21] Jason Schlachter: Thank you guys for the questions. It’s great to be here. 

[01:09:24] Matt Paige: Thanks everybody. Bye.

The post Generative AI Playbook: How to Identify and Vet Winning Use Cases appeared first on HatchWorks.

Harnessing Generative AI Tools for Modern Software Development



One of the most exciting areas of Machine Learning is Generative AI, a subset of AI that creates new data instances that resemble your training data. In the context of software development, this means generative AI models can assist in writing code, thereby transforming the way we develop new applications.

Generative AI in software development provides solutions to pressing issues like the shortage of skilled software developers and the growing backlog of feature requests and bug fixes. Companies are increasingly turning to AI to help streamline their processes and deliver high-quality software applications more efficiently.

The Competitive Edge: Harnessing Generative AI Tools for Modern Software Development.

AI coding tools, powered by generative AI, are at the forefront of this revolution. These tools generate code, helping developers to write faster and more efficiently, while also reducing the possibility of human error. AI applications are becoming increasingly sophisticated, able to take on more complex tasks and deliver even more value.

This article will delve into the world of Generative AI in software development, exploring its impact, the tools that are leading the charge, and what the future holds. Whether you’re a seasoned developer, a project manager, or a business leader, this piece will give you insights into the world of AI-powered software development.

How AI improves developer workflow and enhances productivity

Artificial intelligence (AI) is more than a buzzword—it’s reshaping the way businesses operate, innovate, and maintain their competitive edge.

At HatchWorks, we’ve seen firsthand how AI can streamline tasks, foster continuous learning, and boost productivity. Today, we’d like to share some insights from three of our colleagues, all of whom are incorporating AI into their everyday work.

Our journey begins with Fernando Manzo, who enthusiastically uses ChatGPT and the beta version of GitHub Copilot. Both tools have become indispensable to him, assisting in understanding and developing code.

Copilot acts like an autocomplete tool, suggesting the next steps when the developer might hit a roadblock. However, it's ChatGPT that Manzo views as the superior ongoing training tool, capable of providing a contextual understanding of code and assisting with complex SQL queries. But Manzo reminds us that AI is not perfect: it can produce syntax errors and invalid options, so it's essential to double-check your documentation.

Beyond coding, Manzo also appreciates the role AI plays in communication. Tools like ChatGPT and Grammarly help him in refining client-facing communications, making them concise and more compelling.

Gabriel Bejarano, another AI enthusiast at HatchWorks, agrees with Manzo that AI won't replace developers anytime soon. AI's role, according to Bejarano, is about enhancing performance by reducing time spent on repetitive tasks. GitHub Copilot, for example, excels in autocompleting code and simplifying tasks such as sorting.

Bejarano also finds value in using AI for creating test cases, translating to new code bases, and even teaching coding when given the right prompts. He likens ChatGPT to a digital consultant that can answer complex questions and help with intricate tasks.

Stay competitive with essential AI tools in software development

What exactly are AI-based code-completion tools, and how do they work? To put it simply, these tools integrate with the software that developers use to write code. They leverage AI models trained on vast amounts of code to predict and suggest the next piece of code that a developer is likely to write.

Natural language processing (NLP), a subfield of AI that focuses on the interaction between computers and human language, plays a crucial role in the functioning of these tools. NLP enables the tools to understand and generate human language in a way that is both meaningful and contextually relevant. This is key to their ability to generate code from a natural language description, a feature that is proving to be a significant time-saver for developers.

Let’s take an example of a popular AI-based code-completion tool: OpenAI’s Codex. This tool is capable of translating comments written in plain English into code snippets in a variety of programming languages. By simply typing a comment describing what they want the code to do, developers can get a head start on writing the code. This not only saves time but also helps to reduce the cognitive load on developers, allowing them to focus more on problem-solving and less on syntax.
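As a hedged illustration of the comment-to-code idea, the sketch below sends a plain-English description to a hosted chat model and prints the returned code. It assumes the pre-1.0 openai Python client; newer client versions expose the same capability through client.chat.completions.create, and the model name and placeholder API key here are illustrative, not a prescription.

```python
# Illustrative comment-to-code sketch using a hosted model (pre-1.0 `openai` client).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; load from a secret store in practice

description = (
    "Write a Python function that returns the n most common words in a text file."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a coding assistant. Reply with code only."},
        {"role": "user", "content": description},
    ],
    temperature=0,  # deterministic output is usually preferable for code suggestions
)

print(response["choices"][0]["message"]["content"])
```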

However, AI-based code-completion tools are not limited to code generation. They can also be helpful in identifying bugs and suggesting fixes, thanks to their ability to learn from the vast amount of code they have been trained on. They have the potential to become an invaluable coding partner, assisting with everything from writing boilerplate code to debugging.

Simplify code optimization and query management with AI

Generative AI is revolutionizing software development by automating tedious and repetitive tasks such as writing boilerplate code, performing standard database operations, and creating common UI elements. This automation enables developers to concentrate on the more complex and creative aspects of software development, fostering innovation and enhancing the quality of applications.

First, generative AI tools have a significant impact on code generation and developer productivity. As discussed earlier, AI-based code-completion tools integrated into development environments can expedite the process of writing code. By suggesting potential code blocks that match the developer’s intentions, these tools can reduce the time and effort spent on writing and debugging code.

One study suggested that these tools can boost code generation speeds by up to 55%. Consequently, this could lead to a considerable improvement in developer productivity and a reduction in the time it takes to bring new software applications to market.

Secondly, AI tools play a crucial role in debugging and enhancing the quality of software applications. AI can sift through vast amounts of code to identify potential bugs or vulnerabilities that may have been overlooked during the development process. It can also suggest potential fixes for these issues, reducing the amount of time developers need to spend on debugging. This proactive problem-solving capability can lead to improved software quality, as well as enhanced security.

Unlock ChatGPT’s potential through effective prompt crafting

AI can assist in the testing phase by generating test cases and scenarios. Clear, well-written task scenarios for QA testing, dogfooding, and beta testing help ensure that the software application is robust and reliable.

Prompt Engineering is a concept that plays a crucial role when developing a new application with an AI system. Essentially, Prompt Engineering involves crafting prompts that effectively guide the AI system to generate the desired output.

For instance, when developing an application, developers might provide the AI with prompts that describe the functionality they want to implement. The AI would then generate the necessary code based on these prompts.
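One possible shape for such a prompt is sketched below. The template sections (role, task, constraints) reflect a common prompt-engineering pattern rather than a fixed rule, and the example task and style guide are hypothetical.

```python
# A sketch of a structured prompt template for code generation.
PROMPT_TEMPLATE = """You are a senior {language} developer.

Task: {task}

Constraints:
- Follow {style_guide} conventions.
- Include type hints and a docstring.
- Return only the code, no explanation.
"""

prompt = PROMPT_TEMPLATE.format(
    language="Python",
    task="Implement a function `slugify(title: str) -> str` that lowercases the title, "
         "replaces spaces with hyphens, and strips punctuation.",
    style_guide="PEP 8",
)
print(prompt)  # this string would be sent to the code-generation model
```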

Finding balance – using AI efficiently without sacrificing quality

AI coding tools aren’t designed to replace human coders but rather to augment their capabilities. Experienced developers can leverage these tools to produce higher quality work more efficiently, while less experienced developers can use them as a learning aid, accelerating their skill development.

This partnership, when effectively managed, can result in better software quality, reduced development time, and an overall more efficient and enjoyable coding experience.

AI-generated code expedites the development process. It’s capable of churning out blocks of code quickly, which can significantly reduce the time taken to develop software. But it’s not without its potential downsides.

One of the main challenges is the risk of vulnerabilities or bugs within the AI-generated code. While AI tools are becoming increasingly sophisticated, they're not infallible. This is where the critical role of a software engineer comes into play. Engineers are needed to review the AI-generated code, refine it, and ensure that it is accurate, secure, and efficient. Without this human oversight, the code produced could be subpar, insecure, or inefficient.

To ensure the quality of AI-generated code, a variety of strategies can be employed. Automated testing tools can be used alongside generative AI to check the code as it’s produced, catching any errors or vulnerabilities early on.
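A lightweight version of that idea might look like the sketch below: run a few known-good assertions against a suggested snippet before a human ever reviews it. The generated_code string and the checks are hypothetical, and in practice the candidate code should run in a proper sandbox and test suite rather than a bare exec.

```python
# Sketch: a small gate that runs unit checks on AI-suggested code before human review.
generated_code = """
def slugify(title):
    return "-".join(title.lower().split())
"""

def passes_basic_checks(code: str) -> bool:
    namespace: dict = {}
    try:
        exec(code, namespace)                      # compile and load the suggestion (sandbox this in real use)
        slugify = namespace["slugify"]
        assert slugify("Hello World") == "hello-world"
        assert slugify("  Spaces  Everywhere ") == "spaces-everywhere"
        return True
    except Exception as err:                       # any failure means "needs human attention"
        print(f"Generated code rejected: {err}")
        return False

print("accepted" if passes_basic_checks(generated_code) else "rejected")
```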

Regular code reviews by experienced developers can also help maintain high-quality standards. These measures, combined with continuous learning and improvement of the AI models themselves, contribute significantly to the quality assurance of AI-generated code.

On the other hand, the benefits of using generative AI in software development are manifold. AI can automate repetitive, low-level tasks, freeing up developers to focus on more complex and creative aspects of coding. This can lead to improved design and faster development times.

Additionally, AI can assist in debugging, potentially improving the quality of the final product. It can also help manage software feature requests and bug fixes, contributing to overall developer productivity.

Despite these advantages, the use of AI-generated code does present potential risks, one of them being intellectual property issues. For example, if an AI model has been trained on copyrighted code, there’s a risk that the AI could generate code that infringes on that copyright. Companies must take these issues into account when implementing AI tools, ensuring that they have the necessary permissions and licenses to use the training data for their AI models.

Leading with an AI-first approach in your business culture

Brandon Powell, CEO of HatchWorks and leader of our AI Task Force, envisions AI as a catalyst for growth and innovation across the entire business. He believes we’re at a generative AI tipping point, where AI tools can optimize different business departments, upskill our workforce, and ultimately lead to improved productivity and profitability.

One area where AI is making a significant difference is in recruitment. It not only helps in sourcing candidates but also engages them through AI-driven chat.

However, with the adoption of AI, there’s an essential question: How do we ensure AI doesn’t alienate our team members?

Transparency and continuous learning are key. We have to make sure that the adoption of AI aligns with the values and goals of our people.

Support client success with AI education and empowerment in product development

While AI holds great promise, it’s not without its challenges. AI code generators, for example, still need human intervention to piece together complex environments and understand legacy systems. Yet, Powell is hopeful, viewing these challenges as opportunities for the team to focus more on significant tasks and less on mundane ones.

AI is more than just a tool—it’s a strategic partner. It assists in making complex tasks manageable, fosters learning, and opens new avenues for growth. As we continue to adapt and learn, we’re excited about the endless possibilities AI brings to our organization, our customers, and our industry.

“The future of HatchWorks lies in being an AI-driven organization,” Powell says. “One that is Agile and ready to train our customers in the effective use of AI.”

Discover how HatchWorks’ Generative-Driven Development™ can accelerate your software projects – explore our innovative approach today.

Frequently Asked Questions about generative AI in software development

What is generative AI in software development?
Generative AI in software development refers to AI systems that can automatically generate code. These systems use machine learning models trained on large amounts of code data to predict and suggest code blocks based on user inputs.

How do AI-based code-completion tools work?
AI-based code-completion tools integrate with existing software development environments. Developers can write descriptions in natural language, and the AI suggests several variants of code blocks that fit the description. Developers can then select and refine the appropriate code.

How much can these tools improve productivity?
Research indicates that AI-based code-completion tools can speed up code generation by up to 50%. They can also assist in debugging, which may improve the overall quality of the developed product.

Who benefits most from AI coding tools?
More experienced engineers seem to benefit most from AI coding tools. However, less experienced developers can also see productivity gains, although these may be less significant.

Is AI-generated code safe to use as-is?
AI-generated code may contain vulnerabilities or bugs, just like human-generated code. It's essential for software engineers to review AI-generated code to ensure its quality and security.

How much do generative AI coding tools cost?
The cost of generative AI coding tools is generally low, with subscriptions typically ranging from $10 to $30 per user per month. These products are readily available and don't require significant in-house development.

What are the potential drawbacks?
One potential issue is the risk of vulnerabilities or bugs in the AI-generated code. Additionally, inexperienced developers might not see as much productivity gain from these tools. Lastly, it's important to discuss licensing and intellectual property issues with the provider to ensure the generated code doesn't result in violations.

Summary

The move towards AI integration is more than just a trend—it’s a significant shift that has the potential to elevate the quality of software development significantly.

Here are our key takeaways:

  • Generative AI is not replacing developers; it’s augmenting their capabilities and helping them focus on problem-solving
  • Tools like ChatGPT and GitHub Copilot streamline coding and boost productivity by automating repetitive tasks and assisting in debugging
  • AI-generated code can have errors; human review is essential for quality and security
  • AI can be a catalyst for innovation and business growth
  • AI adoption should align with team values and goals, focusing on transparency and continuous learning.

By harnessing the power of generative AI, developers can focus more on complex, creative tasks, while AI handles the routine, tedious aspects of coding.

Thanks to the following for their contributions to this article: Matt Paige, Fernando Manzo, Gabriel Bejarano, and Brandon Powell.

Interested in exploring the power of generative AI for your projects?

HatchWorks can help. We provide the tools, expertise, and support you need to harness the potential of AI in your software development process.

Contact us today to learn how you can leverage generative AI.

The post Harnessing Generative AI Tools for Modern Software Development appeared first on HatchWorks.

Quality-Driven Product Development with Realtor.com's Erika Chestnut



In this episode of the Built Right podcast, we look at the often overlooked and undervalued topic of quality in software development and how good process and culture are what creates the foundation for it.  

Joining us is women-in-tech career coach, Erika Chestnut, who is Head of Quality at Realtor.com. Erika has been building and leading quality teams for around 15 years. She has a wealth of knowledge to share about the foundations of good quality, why organizations that want to improve quality are often focused on the wrong thing, how you create a balance between quality and innovation and the good leading indicators in quality.  

Keep reading for some takeaways from the episode or check out the full discussion below. 

Many organizations focus on the wrong things when trying to improve their quality, primarily considering testing as the sole determining factor, explains Erika. However, true quality in software development starts much earlier in the process.  

 

Shifting focus: moving quality up the chain  

According to Erika, when people think about quality, they think about the end state and therefore they land on the thing that happened right before the end state, which oftentimes is testing.  

However, she explains that quality should be addressed throughout the entire process. Shift left testing, as referred to in the industry, means moving the validation, checks and awareness further up the value stream.  

It involves examining your processes and the impact they are having on product quality. 

Erika likes to say, "good process creates quality, good process results in quality," because processes create consistency and continuity, which result in quality.

While testing requirements, are you also checking:  

  • Your process 
  • Your communication flow 
  • Your documentation 
  • That everyone has what they need 

 

By expanding the scope of quality beyond testing, organizations can address critical factors that are impacting the end result.  

 

Quality everywhere: Process, Tools and More  

Erika emphasizes how opportunities to lead with quality are everywhere.  

It’s a case of asking: can we improve quality in our processes? Can we improve the tools that we use and how we leverage them? Are the tools implemented in a way that’s cohesive and integrates into our system in a meaningful and impactful way? 

Every opportunity within the development lifecycle should be considered to enhance quality.  

 

Common pitfalls and opportunities for improvement  

When asked about a common pitfall that Erika witnesses time and time again, she hits on one of her biggest pet peeves, which is the lack of clarity regarding the business structure and flow. She emphasizes that understanding the structure and flow of the business is fundamental for ensuring quality. 

Without a clear big picture view, teams may become blinkered within their own domains, missing crucial integration points.  

Leaders need to communicate the organization’s structure, product flow and internal narratives, enabling teams to grasp the interconnectedness of their work.  

This type of awareness fosters better collaboration and a holistic understanding of how each team contributes to overall quality.  

 

Balancing quality and innovation  

Finding a balance between quality and innovation can be challenging for companies, Erika explains.  

While innovation is often the top priority, focusing solely on it can jeopardize quality.  

Rapidly introducing new features or functionality without addressing underlying issues can lead to long-term problems.  

Erika emphasizes the need to consider the impact of innovation on quality. Monitoring leading indicators such as defect density, release health, rollbacks and time between failures helps identify if innovation is having a negative effect on quality. 
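To make those indicators tangible, here is a small, hypothetical sketch of how defect density and mean time between failures could be computed from release and incident records; the data structure, field names, and numbers are invented for illustration only.

```python
# Sketch of computing two leading quality indicators from illustrative release data.
from datetime import datetime

releases = [
    {"name": "2024.03", "loc_changed": 12_400, "defects": 31, "rolled_back": False},
    {"name": "2024.04", "loc_changed": 9_800,  "defects": 54, "rolled_back": True},
]

incidents = [datetime(2024, 3, 4), datetime(2024, 3, 19), datetime(2024, 4, 2)]

for r in releases:
    density = r["defects"] / (r["loc_changed"] / 1000)   # defects per thousand lines changed
    print(f"{r['name']}: {density:.1f} defects/KLOC, rollback={r['rolled_back']}")

gaps = [(later - earlier).days for earlier, later in zip(incidents, incidents[1:])]
print(f"Mean time between failures: {sum(gaps) / len(gaps):.1f} days")
```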

 

Recognizing the value of process and quality  

Helping businesses recognize the value and impact of process and quality is crucial before issues arise.  

Erika advises aligning the story of quality with the goals and interests of the business to create a compelling narrative. By connecting the dots between process improvement, quality enhancement and business outcomes, leaders can appreciate the significance of investing in quality.  

Analyzing metrics such as defect density, release health and customer feedback serves as tangible evidence of the impact of process and quality initiatives.

This approach can help foster a culture of quality from the top down, ensuring that process improvements receive the attention they deserve.  

 

To hear more about quality in software development, tune into the full episode today.  

Subscribe to Built Right for more engaging conversations that will help you build the right products the right way! 

Matt Paige: Today we're chatting with Erika Chestnut, a true champion of quality. She's been building and leading quality teams for around 15 years now, at places like Kabbage, Turner Broadcasting, Calendly, and realtor.com, to name a few, and that's an awesome list there, by the way, Erika. And before that she led development teams; she was even a developer herself. So I know you're gonna fit right in with our Built Right community here. But welcome to the show, Erika.

Erika Chestnut: Thanks. Thanks for having me. 

Matt: Yeah, excited to get into this topic. This is one we haven't gotten into yet on the Built Right podcast. Today we're getting into the often overlooked and sometimes undervalued topic of quality in software development, and the topic of why good process and culture are really at the foundation of good quality. And PS, for everybody listening, stick around. We're gonna get Erika's take on generative AI and how it's impacting the discipline of quality. I know everybody's talking about it, so we want to get Erika's take on that as well. But to set up the problem, Erika, you talk about how organizations that want to improve their quality are often focused on the wrong thing. So what is this wrong thing, and what can they do about it?

Erika: Yeah. Quality is always... not always, that's a poor statement. Oftentimes, when people think about quality, they think about the end state, and therefore they think about the thing that happened right before the end state, which is oftentimes testing. And so when they say our quality is not good, they say our testing is not good, or we are not investing in the right type of testing, i.e. manual versus automation, or we don't have enough coverage. We don't have enough code coverage, we don't have enough functional or non-functional testing. But the reality is actually that it starts much further up the stream. And you started to hear about this when the industry was saying shift left with testing, but then, just like most buzzwords, right, innovation, innovative, it's not unpacked. And so now it's like, shift-left testing, okay, what does that genuinely mean? And what is the impact of that?

Matt: And real quick, shift-left testing, that means moving quality further up the value stream, toward the beginning of the process. Is that right? Just for listeners?

Erika: Yes. But it's not just moving quality up. It is moving quality up, but it's really about moving the validation, the checks, the awareness of what is impacting our product quality. And so one thing that I always love to say is that good process creates quality, good process results in quality, because process creates consistency and continuity, which results in quality. So when you say moving quality left or further up the chain, people are still thinking testing: oh, we're testing the requirements. Are you checking your process? Are you checking your communication flow? Are you checking your documentation? Does everybody have what they need? Are you checking to make sure that the quality team is not starting the new sprint at a deficit because the engineers didn't start stopping before they started finishing, right? You've gotta shift the idea of what impacts quality, what creates poor quality. And it's not just testing.

Matt: Yeah. And you make a good point, cuz quality is at the end for all intents and purposes. That's the last thing: let's check everything, make sure it's good to go. And a lot of times they can be the scapegoat when something goes wrong or doesn't get delivered right. And I love how you hit on this concept of process, but to clarify, a lot of people think process, they think tools, but it's not about the tools. I think tools are often put on a pedestal in terms of, oh, they'll fix everything, but it's not about the tools. It's the underlying pieces in the process. And I love how you talk about the culture element that comes into play as well.

Erika: Yeah, and there are definitely tools, and it's testing that's at the end. It's the testing; quality is the entire thing. So actually I have to retract my statement. It's not about moving quality. Quality doesn't need to be moved. Quality is everywhere, right? The opportunities to lead with quality are everywhere, so it's not about moving it left or right or up or down; it's about acknowledging that there are opportunities to improve quality in everything. Is it improving quality in our process? Is it improving quality in the tools that we use and how we leverage them? Are they the right tools? Are they answering the right question? Are they implemented in a way that's cohesive, that integrates into our system in a meaningful and impactful way? All of that is quality. All of that produces quality at the end state. And they all come together; it's not just testing. It all comes together to produce quality.

Matt: Yeah. So to make this more real, I am curious, cuz you've been in a lot of interesting companies from small to large, and I know you've done some side consulting stuff in the past, but what examples do you see? What are those common pitfalls that companies have, whether process related or just in quality in general? Is there anything that's, I see this every time, or this is a big thing that typically happens a lot?

Erika: One thing that I see always, and it's a pain point for me, or a pet peeve, that's actually a better word, there you go, pet peeve: it's a pet peeve of mine and I see it all the time. The structure of the business is not clear. And it's a fundamental quality opportunity that is missed when the structure of the business, or the business flow, is not clear to the teams. What are the boxes that make up the business, and how does it flow left to right? What are the little exits along the way? What happens is not fully unpacked for the team. And then when we go through hiring, companies are going through these massive hiring windows, and we're throwing people in and saying, hey listen, go to your team, they'll help you. The team has blinders on, and they're like, this is our little world, but we're not providing this big picture view for people to understand at the top level: this is our business, this is our structure, this is how we talk about ourselves internally, and this is very clearly how it moves down into the organization, from a structure and from a business flow, like the actual product. And so I find those are missed opportunities oftentimes. And leadership doesn't recognize that it's impacting quality. I'll go into teams and I'm talking to teams and they're like, I don't know about this. I don't know how this integrates with this other system. I had one manager say, my enterprise area doesn't integrate with this other area, this main area of our product. It does. It did. And they didn't know it. So we put on these blinders and you're like, hey, I've got my area and I'm good. But are you thinking about how your area integrates with these other areas and what the impact is, and do you understand and are you mindful of that? So that's the thing that I think is missed.

Matt: That's interesting, especially I guess when you get into larger, scaled organizations. But it gets back to what we talk about a lot at HatchWorks: connecting to the outcome, understanding the outcome, and knowing that in all layers of the organization. It's so important cuz you have to understand what business outcome is trying to be achieved. But I love your point around the connection between multiple teams and having that quality understanding between the different organizations. Quality and innovation: in my mind, I feel like they can sometimes be at odds. Quality is very much process driven, rigid's not the right term, but you do want foundational process in how things are structured, and then on the innovation side, a lot of times, whether it's business model innovation or anything like that, you're thinking of breaking process and norms. How do these two play together, and how do you create balance between quality and innovation at companies?

Erika: Most companies struggle with that balance of quality to innovation, because obviously the business is running after innovation. They want to stay ahead in the market, they want to be first to make that next big change, they want to be the unicorn in the space. To do that, sometimes you're running fast and focused on the new whiz-bang feature, and if we're not looking at it, that can come at the expense of quality. If we don't pay attention, we say, listen, just innovate, get these new products out—and you might have the quality teams saying, there's a problem with our quality, there's a problem with our architecture, and we're building these new features on top of it. The new features are nice and shiny, but we're putting them on top of something that doesn't smell so great, and eventually the new shiny thing will wilt and smell too, because we never fixed the actual problem. We never cleaned up the smelly stuff. And that's part of quality—it's not just bugs. It's what might create delivery problems, what might create inefficiencies, what doesn't allow us to roll back quickly when we have problems. How long does a problem linger out there? How many open issues do we have? Even just meeting acceptance criteria. The churn of how long it takes to get something delivered, and then taking shortcuts because the requirements weren't 100% clear, so we went back and forth with product and made changes—and all of a sudden something we developed two weeks ago that was actually pretty well baked has been hacked at the very end and released, and then there's an edge case we didn't know about, an extreme edge case. So when you think about innovation: are we innovating too quickly, over quality? What is the quality impact of our innovation? Did we release this new functionality and see a high level of defects? Did our defect density increase? Did our CSAT scores go down because customers are saying, ew, this is broken—you told me about this new hotness and now it's just broken, that sucks, I don't want to use your product anymore, I don't want to tell anybody else about it? We have to balance that, but it's oftentimes a struggle.

Matt: It's almost like quality, in a lot of ways, is the enabler for innovation. If you don't have that foundation set, it makes innovation that much more difficult to actually do. And I got a visual in my head—I have a one-year-old baby, so when you mentioned things that don't smell so great, that brought up some bad memories from last night. Things were thrown away. I don't want to get into it. But you hit on some other things. In our business we've made a foundational shift lately, really focusing on what our leading indicators are versus our lagging indicators. What are some good leading indicators in quality? You mentioned things like time to resolution. What are you looking at, whether lagging or leading, that's indicative of either things going well or a signal that you need to hone in on a certain area?

Erika: Yeah—that defect density, right? What does our release health look like? Do we see a lot of releases going out where our health dips? Are we seeing a lot of releases returned—a lot of rollbacks, reverts, incidents? What's our mean time between failures in production? These are all alarms, red flags we can look at and say, maybe we're innovating too quickly. Maybe we need to slow down and ask what's causing this. Were our requirements not fully baked? Was our acceptance criteria not clear? Where was the failure? Did we push something in really late that increased defects? What type of failure is happening—is it a backend failure, a load capacity issue? When we begin to unpack that and say, hold on, we're seeing an increase here, we can look at it, understand the problem, target it, fix it, and then go fast again. But oftentimes teams don't.

Matt: Yeah, that's such a foundational piece—knowing what those metrics are, so you have your dashboard, for lack of a better term, of your indicators. Then when something's off, you know where to dig in. I heard you mention defect density. Is that just the volume of defects, or is it hitting on something more specific?

Erika: Yeah, it's the volume of defects. So let's say we've identified 200 defects in production—and please don't start me on math, because math is hard. So, we've identified 200 defects in production.

Matt: No it’s Friday for us. We’re not getting into math.

Erika: It’s Friday and it’s been a Friday. Yeah. Good. Not in a Margaritaville kind of way, although maybe it needs to be very soon. 

Matt: Yeah. That’s next. 

Erika: Yeah. So take the number of defects. Let's say we have a trend—we see some spikes here and there—and we start to recognize that those spikes happen every time we release to production. That means every release is introducing a spike of issues that we then have to work back down. How do we improve that spike? Is there a correlation, and what is causing it? What are we missing? Do we not have enough automated regression—are these regression issues? Do we not understand our system well enough to know the impact of the changes we're making on downstream areas? What is creating that spike? Because it's costly. Especially if it's a critical area of the system and it generates an incident: you've got no less than 10 people jumping into that conversation. You've got the eye of the CTO, the eye of the CMO—executive leaders and senior leadership—looking at it, waiting, engaging in the conversation, which gets really expensive. Then you've got the middle management layer, and then the ICs who are implementing the change. That's a lot of people, and it's costly. So it's, hey, we release and we spike, and then on top of the innovation time we spend time getting those spikes back down—or we leave them out there and the customers deal with death by a thousand cuts, because, oh, it wasn't a big issue, but there are a thousand of them everywhere, like gnats. Like walking into a swarm of 5,000 gnats.
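
To make the defect-density and release-spike idea concrete, here is a minimal sketch of the kind of per-release tracking Erika describes. The data, field names, and spike threshold are hypothetical placeholders, not anything from Realtor.com or HatchWorks:

```python
from statistics import mean

# Hypothetical release records: defects found and stories shipped per release.
releases = [
    {"release": "2024.01", "defects": 12, "stories": 40},
    {"release": "2024.02", "defects": 11, "stories": 38},
    {"release": "2024.03", "defects": 29, "stories": 41},  # suspicious spike
    {"release": "2024.04", "defects": 13, "stories": 39},
]

# Defect density here = defects per story shipped (teams also normalize per KLOC, etc.).
for r in releases:
    r["density"] = r["defects"] / r["stories"]

baseline = mean(r["density"] for r in releases)

# Flag releases whose density sits well above the baseline -- the
# "spike every time we release" signal worth digging into.
SPIKE_FACTOR = 1.5  # arbitrary threshold, for illustration only
for r in releases:
    if r["density"] > SPIKE_FACTOR * baseline:
        print(f"{r['release']}: density {r['density']:.2f} vs baseline {baseline:.2f} -> investigate")
```

The threshold itself isn't the point; a simple per-release view like this is what makes the correlation between releases and defect spikes visible enough to ask what changed.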

Matt: Yeah. I went to school down in south Georgia, at Georgia Southern, so I'm used to the gnats. You can probably sympathize, being in Atlanta.

Erika: Can I get around you? 

Matt: Yes. That's a good point, though. How do you help the business recognize the value and the impact of process and quality? And it's almost like the key is doing it before it's too late. Where have you found success, or what are some good things to hone in on, to help connect it to business value before it is too late—before you've got all the C-suite breathing down your neck, like the scenario you mentioned that nobody wants?

Erika: It's about identifying the story of quality within your organization. When I come into an organization, I really want to hear from people—I hold what I call my "what the bug?" meetings, and I meet with different people and ask similar questions depending on where they sit. Some are more detailed conversations, some are more strategic, but they're in the same vein. Then I look for the categories, the sentiment in the conversation, the themes that surface, to help understand where the problems are. Because the thing about telling a story is you want it to be compelling and interesting. We've all picked up a book and gotten a chapter or two in, or watched a new series and gotten halfway through the second episode, and said, this just isn't my jam. The story of quality is no different. You have to tell a compelling story. You have to explain it in a way that connects to the heart of the business, to what leadership is interested in, to the value of the business—which is the customer, which is our revenue. You've got to connect it into that conversation, and that takes time; it requires a lot of moving parts and pieces. But when you understand the sentiment and get that feedback, you're at least able to say: ooh, you're worried about availability, or you're worried about SEO tracking, or you want to understand our customer sentiment—okay, how can I get that information and surface it? How can I make it visible through the lens of quality and say, hey, we're tracking this, we want to hold the teams accountable to it, and start to drive that conversation? You're taking the heart of what the business is interested in, moving it through the lens of quality, and pushing it back to the teams: this is something we need to look at—how are you helping to improve it?

Matt: You're like a quality marketer. I promise that's one of the reasons you've been so successful in your career—being able to connect that story. That's so cool. I love that.

Erika: It is not an easy thing to do, I don't mind telling you. It's interesting, but the frustrating thing is that it's not a common expectation, so I have to explain why I'm going through it. I'm wandering around sometimes asking, what data do you have? What do you measure? And people are like, but why? Getting this data is hard. And I'm like, I know, and I don't really have a "why" for you yet—I'm actually just trying to see what you're tracking, what you think quality is, and what you measure. Because then I want to pull it together into a single, cohesive conversation and say: now, when we look at this across the board—hey, we have a problem right here. Should we focus in on that?

Matt: Yeah, that's how you connect the dots. That's right. And one thing you mentioned earlier—acceptance criteria. I'm curious about your perspective on this: when should quality members on the team, whether it's a QA engineer or whatever the role may be, be engaged in understanding the user stories, requirements, or whatever it may be?

Erika: At the very beginning, with everybody else. Here's the thing: quality team members have the benefit of constantly exercising the entire system. If somebody knows the ins and outs of your house and you have a problem, or you want to make an addition, wouldn't you call them first? We just had a problem with our AC, and the person we called was the same guy who came out and fixed the AC problem we had last time. I'm not going to talk about how shady that feels, but he was talking to my husband—I wasn't out there—and he said, yeah, this is what we talked about last time, here's this, that, and so forth. He knew the problem, which made getting to the resolution so much quicker—that knowledge coming in made it faster and therefore cost us less money, because he was out there less time. That's QA. QA is constantly exercising your system from the customer perspective, which is who we care about. QA is connected to the heart of the customer inside the business.

Matt: Yeah, it's the health of it. And I want everybody to pause for a second so you don't miss this point: if you're a scrum master, a product person, or whatever it is, bring your QA folks into these ceremonies early on. Because, to your point, they can save you—a lot of times they're going to be thinking about something from a different angle that you may not be thinking about, and they're going to have additional context when they actually do the testing, which makes their job a lot easier. So if you're not doing that, bring your QA friends into those conversations earlier.

Erika: Yes—and I will point out one thing that I've often had to do when going into new orgs with a new team: I have to coach inside my own team, because QA folks can be wallflowers at times. Some of them will come into a conversation and think, yep, I'm actively listening—and that sounds odd, but okay—they know what they're talking about, so I'm just going to wait for it to come to my desk, and I'll have the context. But the QA organization—and this is one of the things I love talking to the QA community about—is more than just testing in that single step of the delivery life cycle. We provide value. We need to speak up, we need to offer the "here's a gotcha—have you considered this? Have you turned the box this way?" And when team members are brought into those conversations, ask for that. Pull on them, request that feedback: hey, what do you think? I love to talk about things in the form of boxes—these are the boxes of the flow currently, and we want to shove one right there. What do you think about that? What's going to happen? How does it help or harm the journey the customer experiences? Is that good? Is that bad? Ask those very specific questions of the QA team to draw them out and get that insight.

Matt: And that's a facilitator tip right there. If you're a Scrum master or product person—one thing we do in a lot of workshops is we'll ask around the group: hey, Lisa, do you have any clarifying questions? Bob, do you? And you go around the full room. It's funny—a lot of times people say "no, but…" and then they'll go into what's on their mind. So that's a good tactic to get those wallflowers, like you mentioned, to speak up, because they do have an opinion, and it's a valuable one a lot of the time. All right, so the hot topic right now—everybody and their mom is talking about it: generative AI. We're playing around with GitHub Copilot and some other tools at HatchWorks. But I'm curious, what's your perspective—thoughts, theory, prediction—on how generative AI will impact the quality assurance discipline, positively or negatively? How does it evolve? What's your hot take?

Erika: It's significant. It's significant. The thing to remember with all of these technologies is that they're tools in our toolbox. I've heard the conversations—people saying it's the end of QA, the end of testers, the end of all of these things—but AI has been building in the quality space for years now. And ChatGPT—I love some ChatGPT, right? Just being able to ask questions. It's another way to turn the box, another way to leverage a tool to help us communicate better and quickly write scripts. Take automating routine testing: generative AI can create new test cases that mimic the variety of user behavior and edge cases. Let it do it. Humans still need to be in the conversation, because we still need to analyze the output—AI can handle those routine tasks, but we analyze them as humans. And that's the thing: our role doesn't go away, it changes. It can become more cognitive—we can literally sit with something and think about it, instead of doing what's mundane and redundant. People have said for years that testing is just banging on a keyboard, which it is not and has never been, and this gets us even further away from that idea. Let the machine take the inputs and generate something, and then let us tweak it to be more informed, more intuitive, more human. Let us use the machine for predictive analysis, analyzing historical data to predict potential problem areas. Let it enhance performance testing, increase QA accuracy, and support unbiased testing. That's a big one, so let me go further into it. When the Apple Watch came out, one of the stories that eventually surfaced was that women were among its biggest users—I don't have the data points, it's been so many years—but it did not have period tracking. It was a miss, right? Because women were not included on that product team and were not included in the usability testing. When it was added, women were like, hallelujah, thanks. We all have these biases—especially when you think about the DEI space and accessibility. I have bias; we all do. I don't know what it feels like, or what to consider directly, when it comes to screen reading—not being able to read the screen. I don't know what a better experience looks like there. But that could be programmed into AI. Having unbiased testing supported by AI—taking that knowledge from leaders in the space who understand it and plugging it in as models the AI can use—there's so much opportunity to leverage this tool to create more efficiency, more impact, and more value in the organization. But we can't be scared of it. We need to look at it and say: listen, you are mine. I know you're a hammer and there's a nail. I'm not going to use you for these other things, but I'm going to use you to nail everything in, because I know how you work.

Matt: This is great, and I love that you have the eternal optimist mindset versus the pessimistic "it's gonna take everybody's job" view. It's an enablement view: it gets me out of the mundane stuff I don't want to be doing, and it up-levels us as humans. That's why I love how it's positioned as a copilot—we're still in charge, but it's helping enhance what we're doing. Really exciting stuff. I love where this is going, and I love that you're testing it and playing around with the tools rather than waiting, because that's where so many people miss: once it becomes mainstream, it's too late, and you're trying to play catch-up, right?

Erika: Yeah. Transparently, when it first came out I was playing around with it, and I was like, okay, here's a requirement—write a test case, or tell me what the acceptance criteria for this requirement are. And in two seconds it rattled some stuff off, and I was like, those are decent. All right, now use this and tell me what the test cases are for it. Two seconds later it rattled off some pretty decent test cases. And I say decent test cases with it not being especially informed—this was before it had access to the internet—just going off what we were talking about. If you've told it, this is the requirement, and given it enough information to be informed, it doesn't just say "you're going to have to log into the system, so do that." Instead: if you're building this functionality, here are some things you'd want to check. And then really deep-diving and asking: what are some non-functional versus functional tests? What about security? What type of performance testing? How would I test these APIs? What type of data should I use? What are some considerations? And just continuing the conversation. That was fun. But it was scary at first, because I was like, oh, snap.
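
As a rough illustration of the prompt-and-iterate loop Erika describes, here is a minimal sketch using the OpenAI Python client. The model name, requirement text, and follow-up questions are placeholders and assumptions, not anything Erika or Realtor.com actually used:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical requirement -- a stand-in for whatever a team would paste in.
requirement = (
    "As a registered user, I can save a property listing to a favorites list "
    "and see it on my dashboard."
)

history = [
    {"role": "system", "content": "You are a QA analyst drafting test artifacts."},
    {"role": "user", "content": f"Requirement: {requirement}\n"
                                "Draft acceptance criteria and an initial set of test cases."},
]

# Iterate the way Erika describes: ask, read, then push for more specific coverage.
follow_ups = [
    "Now list non-functional tests: security, performance, and accessibility considerations.",
    "Which API-level tests and what test data would you use for this feature?",
]

for _ in range(len(follow_ups) + 1):
    resp = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    answer = resp.choices[0].message.content
    print(answer)
    print("-" * 40)
    # A human still reviews and edits these drafts before anything lands in a test plan.
    history.append({"role": "assistant", "content": answer})
    if follow_ups:
        history.append({"role": "user", "content": follow_ups.pop(0)})
```

The point, as she says, is that the generated output is a starting draft a person still analyzes and tweaks, not a finished test plan.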

Matt: Yeah. Like you said, it's "wow, this is decent." And connecting back to a point you made earlier, where you had the example of somebody with blinders on inside their organization who didn't think about how they were impacting others—this could be a use case right here, where generative AI and the tools we're using do have that purview across the entire organization and can say, hey, are you considering this thing that's maybe outside your discipline? That's an interesting use case as this starts to evolve. I think it's really exciting where this can go.

Erika: I want us to get to the point where we're able to privately feed it information and say: okay, now that you understand this ecosystem—our structure, our business flow, our business model—what should we innovate on? What are the concerns with our product? Now that you've analyzed our tests and how they're performing, should we innovate or should we fix tech debt, and what's the impact—what's the financial impact? AI can start to answer all of those questions with just a few keystrokes. That is so exciting. And I'm not saying getting to that point is easy—I get that there are real questions: what data do we feed it, how do we feed it that data, how do we protect privacy and data security, all of that. But man, there's a Jetsons-level opportunity there.

Matt: I always think back to the beginning of cell phones—where they were then versus where they are today. Nobody could have imagined where we are now, or where the internet has gone. I think it's going to be the same thing with generative AI in a lot of ways. It's going to be fun to watch.

Erika: It's like the iPhone, right? That's where generative AI is—that inflection point. All phones now follow that same view; every phone is that smartphone design based on what Apple did. Nobody has a Razr flip phone anymore. Some dudes still do—and I remember the little Verizon brick thing that slid up, that was the cool thing—but not anymore. Or a Sidekick. Yep, that's right. Generative AI—that's where we are right now, and I cannot wait.

Matt: That's awesome. All right, let's do a couple quick rapid-fire questions to wrap it up. First thing that comes to mind: what company is doing QA right? Is there somebody in the community where you're like, oh, they're really good at it?

Erika: That's not a fair question—it's subjective, right? Everybody's doing something right. Can I plead the fifth? Everybody's doing something right, and everybody has opportunity. I worked at Calendly—I'm going to say Calendly is doing it right.

Matt: Shout-out to Calendly.

Erika: Yeah. Teams are looking to improve. There are a lot of great things Realtor is doing, and there's still opportunity. There was opportunity at Calendly, there was opportunity at Kabbage. It's just about the focus. So yeah, I'm going to plead the fifth.

Matt: No, those are good answers. What about an individual? Is there anybody in the QA community that you follow or think is influential?

Erika: Angie Jones is amazing. Lisa Crispin and Janet Gregory are the agile queens—they have the Bible, three of them actually, on agile testing and processes. Those come top of mind for me.

Matt: For sure. Send their LinkedIn profiles over and we'll put them in the show notes for folks who may want to start following them. And what's one thing you wish you could go back and tell your former self—some advice, if you could go back?

Erika: Usually it's not about quality.

Matt: It doesn't have to be. It can be anything.

Erika: Own what you know. Don't worry about what you don't, because what you know is impactful, important, and valuable. When you spend time worrying about what you don't know, you don't celebrate, champion, and communicate to others what you are excellent at, and therefore you don't continue to hone it. It's okay to know what your gaps are and to work toward filling them if you want to—but some gaps are fine. I don't want to learn how to surf, and that's okay. I'm not a surfer and I don't want to be. I like to swim, and I want to become a better swimmer—still in the water, right? So: what are you excellent at, and where do your passions lie? Own what you know and lean into that, and don't worry about what you don't.

Matt: I love that. And one thing you mentioned earlier, just to wrap it up—one thing I love about your experience and what you do is your involvement in the women in tech and the diversity and inclusion space. I see you're involved with women in tech and career coaching there. Is there anything you're excited about in this space, or anything about how you're helping folks in this area?

Erika: Yeah. As a woman in tech myself, I've spent the better part of my career being the only woman in the room—especially as a leader—and often the only Black person in the room, and that can be difficult. It has been difficult. I've had to learn how to manage my own imposter monster, how to manage my voice, and how to show up in the way that's right for me, without worrying so much about how others think I should show up. I had somebody tell me I should be more docile and quiet, because certain genders should be docile and quiet, and that I should modulate my tone. So I'm passionate about coaching women especially, because I spent a lot of my career not being confident about who I was and how I showed up, second-guessing, not speaking up when I should have, and not owning what I knew. I'm excited about that, and I love to talk to women in the space about it and help them.

Matt: Yeah, and the community element is so important too—having that community of folks who are going through the same thing and can trade stories. I've got two young daughters at home, so I appreciate you pioneering the way for women in tech as they come up. You're an awesome role model there. But just to wrap it up: where can people find you, whether it's LinkedIn or what you're working on? Anything you want to plug here at the end?

Erika: Ah, yeah. I'm Erika Chestnut on LinkedIn—please feel free to reach out. I love to talk about quality; I'm a bit of a dork about it. And obviously I love to talk about women in tech in general. You can also reach me at ericachestnut.com, where you'll learn a little about my leadership consulting, my women in tech coaching, and my quality leadership consulting and coaching—all things I love to do. I'm really passionate about coaching and supporting people, whether women in tech or folks in the quality sphere. Feel free to reach out. I'm around.

Matt: Awesome. Thanks, Erika. Appreciate the conversation. Thanks for joining Built Right.

Erika: Thanks.

The post Quality-Driven Product Development with Realtor.com’s Erika Chestnut appeared first on HatchWorks.

Disrupting the Status Quo: Gated’s Approach to Continuous Improvement https://hatchworks.com/built-right/continuous-improvement/ Tue, 13 Jun 2023 12:00:27 +0000 https://hatchworks.com/?p=29616 Have you ever wanted to tune out the noise in your email inbox? Most of us would love to take a break from email from time to time, but it’s easier said than done. This was a dilemma that Andy Mowat had, which led him to start Gated, a unique solution to cut through the […]

The post Disrupting the Status Quo: Gated’s Approach to Continuous Improvement appeared first on HatchWorks.


Have you ever wanted to tune out the noise in your email inbox?  

Most of us would love to take a break from email from time to time, but it’s easier said than done. This was a dilemma that Andy Mowat had, which led him to start Gated, a unique solution to cut through the noise to find the conversations that truly matter. 

Tune in to the latest episode of Built Right, where Andy and host Matt Paige discuss user-focused strategies and rethinking communication in the digital age. Uncover valuable insights on solving problems, iterating quickly, and maintaining a user-centric approach to product development.  

Keep reading for some takeaways from the episode or check out the full discussion below. 

What is strategy? 

For Andy, strategy is about being clearly focused on a single problem or opportunity. In Gated’s case, it’s all about building a tool that anybody can use. That means Gated’s strategy is focused on understanding user behavior and figuring out how to solve their problems. 

The team at Gated is in agreement that strategy is about understanding the problem you’re trying to solve versus developing features and figuring out how to deliver them to users.  

How Gated began from a common frustration 

Known as “noise-canceling headphones for your email,” Gated grew out of Andy’s frustration with the number of irrelevant emails he was receiving each day. He started replying to emails saying, “if you want to Venmo me 10 cents, I’ll pay attention to it,” and put that money toward his nonprofit as a donation.  

That was the bare-bones version of Gated, which Andy built in AirTable before hiring developers to enhance the product.  

Gated’s mission to change the world of communication 

Gated’s mission is one that a lot of people can get behind. Many of us would love a way to cut through the noise in our email inboxes, and Gated offers a neat solution.  

Most people can’t afford to detach themselves from email completely, so Gated offers a way around that.  

Andy says that their mission is to “change the world of communication.” Rather than letting everyone flood your inbox, DMs, LinkedIn chats and Slack messages, it guides people to engage with topics that you truly care about.  

The idea behind Gated is so unique and unheard of that we asked Andy how you can drive change and get people to adopt something so foreign to them. For Andy, this is about more than getting people to sign up for yet another digital tool. It’s about creating a cultural moment where people can articulate what they’re really interested in, to drive more relevant online conversations. 

This benefits people in two ways – not only does it help the person being contacted, it also helps the person reaching out to communicate more effectively and avoid wasting time. 

If you can get a clearer understanding of how you prefer to engage and be engaged, you can set those boundaries with others. Instead of trying to be everything to everyone, you can start to cultivate more meaningful conversations as a result.  

Building a product that people love 

For those involved in product design, there is a constant balance between building a product that is financially viable and can be monetized, and creating a tool that users love. In an ideal world, a product will achieve both things.  

Andy’s mission with Gated is to change the world of communication – and he strongly believes that if you can change the world, “there are a lot of interesting ways you can make money from it.” 

The way Andy’s team looks at monetization is first, to make sure people use it, love it, and that it can drive change. The second job is to figure out how you can get people to pay and to make the product go viral.  

How to get customer discovery and engagement right 

With any digital product, you need to prioritize customer discovery and engagement. This is something that Andy takes very seriously and leads by example as CEO. 

If Andy sees an interesting person has made a donation through Gated, he will drop a note to thank them and to ask for feedback on anything they could improve. This is how you can draw people into believing in your product and turn people into customer advocates.  

Andy has spent a lot of his career asking himself: how do you use data in a product to trigger the right actions in your team? With better technology and AI, you can start to automate some of those actions and decisions – even when it comes to customer engagement.  

A common mistake that businesses make is leaving customer engagement as an afterthought. Meanwhile, Andy has built this into Gated’s workflow and developed a system for continuous discovery.  

Making the hard calls as a product CEO 

Another mistake that Andy sees companies make is a reluctance to make the hard decisions. Companies are eager to keep everyone happy and will over-promise and spend much of the team’s time on developing more and more features.  

But leading a product is also about knowing when to “kill stuff.”  

Andy believes that very few companies have an architect/product person who is empowered to make those hard decisions – but this is something Andy wanted to avoid with Gated. 

Being able to make hard decisions is core to a successful product strategy, says Andy. 

To hear more about Gated, how it started, and Andy’s insights into building a product, tune in to the full episode today. Subscribe to Built Right for more engaging conversations that will help you build the right products the right way! 

Matt Paige: Today we’re chatting with my friend Andy Mowat, CEO of Gated, and we’re going to go deep into the land of product strategy through the lens of Andy’s experience building Gated, as well as some past experience at big-name companies you’ll all recognize. But before we jump in, Andy, I’d love for you to give a brief intro—who you are, what you do, and some of your past experience—to kick us off.

Andy Mowat: Yeah, absolutely, Matt. Great to see you. Historically I’ve scaled companies up to large scale—I’ve taken Upwork, Box, and most recently Culture Amp to unicorn status. Periodically I start something from scratch, and I’m doing that right now running Gated. The original product takes unknown email out of your inbox and challenges senders to think: hey, is this person’s attention worthwhile? And as we talked about, we’re launching in a couple of weeks—it’s probably launched by the time this airs—our new platform, which helps people take control of their attention on all platforms. We’ll dive into that.

Matt: Yeah, I’m so excited to get into this, and there are some nice, meaty pivots along the way leading to this new Gated 2.0. But I want to start with a meaty, kind of meta question: how would you define strategy? It’s such a nebulous question sometimes—and feel free to jump into examples from past experience—but in your mind, what is strategy?

Andy: I think it’s being as clearly focused on a single problem or opportunity as possible. For us, since we’re building a tool that anybody can use, it’s understanding user behavior really well and then figuring out how to act on it. And this kind of goes counter to my natural approach, but our CTO, Allen, is very passionate about it: it’s understanding the problem you’re trying to solve versus the features, and then figuring out how to deliver the right solution to that problem.

Matt: Yeah, that’s perfect. It’s that element of choice, right? Making a decision and being deliberate about where you’re going. That’s missed so many times in strategy, and it’s what I love about Gated: you’re very focused on who you serve and what you’re doing. Another element I love about Gated, though, is the manifesto and your view of the market—it encapsulates your strategy in a narrative format. I’d love for you to give your overview, your take on leveraging something like that to help convey your strategy.

Andy: Yeah. We spent a couple of months iterating on the manifesto—if people haven’t seen it, it’s at gated.com/manifesto—and I think it helps people understand what the company is trying to accomplish. I don’t know whether you encountered the manifesto before or after you first learned about Gated. If you just see a tool, you’re like, okay, do I need this tool or not? But a lot of people have really gravitated toward the change we’re trying to bring in the world: communication that’s less noisy, more personal, all of those things. For me, the original manifesto came from the idea that not everyone can use our original email product, but everyone should believe in the mission we’re trying to accomplish. As we launch this new platform, literally anyone can use it, which is fun, so we’re rethinking the manifesto—but they’re just small tweaks. It doesn’t fundamentally change, and it guides the decisions we make on the product. We take it very seriously, and it’s not three bullet points on a webpage—it’s an interactive, flowing manifesto.

Matt: That’s what I love about it—it puts your stance, your viewpoint, out in the market. And you mentioned how I found out about it: I’m a Gated user now, and I love it. I remember y’all put something out there about noise-canceling headphones for your email, and that just clicked for me—it’s actually how we first engaged and interacted. But I want to get into Gated, because it’s really interesting where y’all first started and where you’re moving toward. So maybe start there: where did Gated originate? Where did you first come up with the idea? Then we can get into where it’s going, which I think is really the exciting part—you’re learning from customer behavior as you go, talking to customers, and iterating on the product.

Andy: Yeah, absolutely. I’ve sent billions of emails. I’ve caused a lot of pain—probably to a lot of your listeners sitting there saying, goddammit, all those emails. I think I’ve pushed send, or my teams have, on something like 8 billion emails. So we’re all guilty, right? I know all the hacks. I was sitting there at a Series E company just getting blown up. I’d wake up every morning—I was the buyer for a lot of tools—and people would ping me, and I was just like, god dammit, this stuff is irrelevant most of the time. So I wrote an email that said: I don’t know you; here’s my Venmo; if you want to Venmo me 10 cents, I’ll pay attention to it. And then I added my nonprofit—hey, if you want, donate to my nonprofit instead. And people started donating. They started donating 10 bucks, 20 bucks, off a 10-cent ask. I’m like, this is interesting. I gave it to a lot of friends, and they were like, man, this changed my email inbox. So that’s the original product. We’ve got tens of thousands of users who are using it and passionate about it. But I think we’ve also—

Matt: And I want to pause there real quick, Andy, because that’s super interesting for the audience. The key thing I take from that is you noticed a pain point in the market, versus going into a hole and trying to build something out for months on end. It’s the best example of a… you put your Venmo in an email,

Andy: —and kind of iterated from there. Yeah, exactly. So I built the first version in Airtable—actually, the very first version was just me sending that email with my Venmo in it. Then I built it in Airtable and Zapier, then I hired a young kid to code it up, and then I finally said, okay, I need to get some actual developers. It was more of a side hobby for a while, until people were like, hey, can you… So yeah, it was a lot of fun, but I wasn’t looking at it as “what’s the market problem?” I was looking at “what’s bugging the crap out of me?”

Matt: Yeah, I love it. Solving your own problem is such a good starting point. You had this hypothesis—okay, maybe this could be a mechanism to do that—and you tested it in an early, quick, iterative way. It’s such a great lesson for folks out there building digital solutions: start there, start small, iterate. So you’re at this point where you’re starting to get some traction with this initial iteration of Gated. What comes next from there?

Andy: Yeah. So, as I said, we’ve got tens of thousands of users using it and loving it. But our vision is: how do we change the world of communication? The insight we’ve had with the original product is that there are people who can afford to just turn off the noise, but most of us can’t. Most of us need to live in that noise and see the opportunity in it. So our new platform is focused on helping people surface the right conversations out of the noise. Our fundamental thesis on the new platform is this: it’s hard to filter email after it’s been sent. Everyone lines up and just sends—and I’m talking not just email, but email, LinkedIn DMs, Slack messages. The problem is everyone can reach you in all these places. Some places say you’ve got to be connected to me to reach me, but we all get the LinkedIn invite from someone random we don’t know, and we have to deal with it. So our thesis is: instead of letting people just blow you up, let’s guide people to engage around the topics you care about. We believe you should be as available as possible for the things you want to talk about—and for the rest of this stuff, no. So we’re building a universal link. You can get gated.com/. If it’s not taken—we can talk about this after the call; I’ve got to grab that one early—I’ll give it to you, and if it is taken, we can get you “Paige” or something like that. You get that link, we help you articulate what you’re focused on and keep it updated, and then people who want to communicate with you around those things—regardless of whether they’re connected to you or have your email or anything—find you as available as possible for the right opportunities. For the rest of the stuff, people have to figure out a different way to reach you. That’s what we’re building, and that’s what Gated is all about going forward.

Matt: a real pain point out there in the market. A lot of ways we like to think about it, the, person that’s brought this out into the world, the Marty Kagan thinking around valuable, viable, and feasible. You’re hitting that valuable point head on. Cuz I know everybody listening has experienced this, especially our product in engineering. Folks who get inundated with emails, they could care less about. One interesting thing though, this is not something that’s just commonplace. You could even argue this is more kind of category creation lens, but what you’re combating, I feel in a lot of ways is just the status quo. They’ve always done it this way, and you’re looking to break a habit, which is difficult. I think it’s probably one of the. Under considered things, when you’re building a solution that’s driving a change in habit, how do you actually go about driving that change and getting people to adopt it when it’s something maybe a little more foreign or unknown to them? 

Andy: Yeah. I think you’ve got to create what we call the cultural moment. When we launch—whether this podcast drops before or after that—you want to wake people up: whoa, that’s interesting, that’s different. With the original email product, people were like, wait a second, I’m getting this email because I sent you an email and you don’t know me—it forces me to think. With this new one, we won’t be interrupting the flow of communication so much as giving the good stuff a way to rise above it. We’re launching with tons of leading people who are sitting there saying: when you ask me, “Matt, Andy, how can I help?”—I’m not doing a good job of articulating that right now. If we can put that on everyone’s LinkedIn profiles and in people’s emails, people will start saying: hey, what is this thing? I see all these other people, like Matt, articulating really well what they’re interested in and how I can help them—how do I get that? So yeah, it’s going to be interesting. We’re pushing the bounds of communication, but there’s a better world out there than the current one, where I show up, see you, have to figure out email, LinkedIn DM, Slack, whatever, and think, here’s what I want to send—or even worse, here’s what the AI tool I use decided it wants to send, copy and paste. So I think it’s going to be a lot of fun. We’re taking a big swing at communication, and we think there’s a better way. It comes down to the concept—I don’t know if you’ve seen it—of how do I want you to engage with me? We’re helping our users articulate that. I’ve written my own user manual: how do I work? And when you join a company, there are norms: here’s how we work, here’s how we want to engage. So that concept of not being punching bags for somebody else, but having our own gate that helps people understand how we want to engage and on what topics, becomes interesting. Hopefully people listening to this get excited about that vision for a better world of communication and go get their Gated profile. It’s free. Over time people ask, how are you going to make money? On the original product we charged the sender. Here, if we can change communication the right way and create the value, there are lots of fun ways to add power features for power users, that type of thing.

Matt: Yeah, and it’s interesting—it’s not necessarily the traditional two-sided marketplace type of solution, but providing those rules of engagement doesn’t just serve you as the person people are trying to reach; it also serves the person doing the outreach. Think of how much more optimized their outreach can be if they know who they’re reaching out to and understand how that person wants to be engaged and what they’re interested in. So you’re serving both sides of the market: you have your core user over here, and the person performing the outreach is very much in your consideration set, I’m sure, as you’re building out Gated.

Andy: Yeah. With the original product it was all about how you stop people selling you things. Here, it’s how do you actually get the people who are maybe in your peripheral, tangential network to know how they can help you? There are probably people you’re trying to meet every day. For me, in addition to my core day job, I love helping revenue leaders who are trying to work through things—if there’s a half-hour call where I can help, I love it, and vice versa: how do we help each other? There are a lot of those conversations we’re all trying to have that we haven’t articulated well. You and I haven’t chatted in a little while, and I don’t even know what types of people you’d want to meet and all that. So yes, there are the salespeople, which the original product addressed, but forcing salespeople to pay only creates so much value—they’re still trying to sell you. Helping people understand the topics you want to connect on is a much more powerful plan.

Matt: Yeah, that’s interesting. So it’s deeper than just the transactional piece—it’s how you’re meeting, growing, working, and networking with other people. It’s more than the transactional level. I love that.

Andy: When I go on LinkedIn, I know who Matt is. I know what Matt’s done. But I don’t know what Matt wants to connect on and talk about.

Matt: Yeah, no, that’s perfect. One piece that’s really interesting—and this ties back to strategy, good strategy—is something I heard you mention: there’s been a change in the market very recently, this rise of generative AI, and it’s making it that much easier for people to spam their outreach and go crazy with it. I think an element of good strategy is when you tie what you’re trying to do to a change in the market, an inflection point. Maybe speak on that some—the element of strategy and being able to leverage market trends and what’s going on.

Andy: Yeah. I see three major trends we’re thinking about. One is the proliferation of channels: hey, you need to join this Slack group, this Discord channel, you’ve got to be on TikTok now, whatever it is. To be successful, you keep getting pulled to more channels. Two, every single channel is dying—AI will kill every single channel over time. If you’re not overwhelmed by LinkedIn, you soon will be. And the final one, which has been the most nuanced for us, is that the barriers to reach are being shattered. Every morning you wake up and get a couple—for me it’s 10 or 15—LinkedIn invites from people I don’t know, and I’ve got to decide: do I want to accept them and have them live in my DMs forever, or do I want to ignore them? That’s a really interesting one we spend a lot of time thinking about, because deciding whether to let someone in forever doesn’t feel like the right way to decide whether I want to have a conversation. LinkedIn has become more of a social graph than actual people you know—that’s hurt them, but it’s also changed the world. So we spend a lot of time thinking about what those trends are, and then how we position our product the right way.

Matt: Yeah. The other piece I heard you touch on is the focus on building an experience and a product users love versus a focus on monetization. What are your thoughts on that, especially early on, when you’re trying to get scale? You want to create a viable business—that’s got to be part of the roadmap in some way—but what’s your take?

Andy: Yeah, I’ve learned a lot on that one. I think we’ve gotta—

Matt: I bet—with Upwork and Box and all those, I’m sure. Yeah, past experience.

Andy: I think it’s interesting. We have a very interesting model with our original product: users get it for free, and we take a percentage of the sender donations—and that’s fine for that product. It’s a very innovative revenue model. But as we move to the new one, that model doesn’t apply as much. The way I look at it is, I’m in this to change the world, and if you can change the world, there are a lot of interesting ways you can make money from it. At the same time, I’m conscious there are products like Loom—my favorite example. I freaking loved them, a hundred million people used the damn thing, and then they got to their pricing model and were like, okay, now you’re using it, you have to pay—and then they had to roll that back. A lot of people were pissed off and looking for alternative solutions. So the way we think about monetization is: the first job is to make sure people use it, love it, and that it can drive change. The second job is figuring out how you make it viral so you can go change the world, and then, if we’re providing value, how people pay. We defined very early what value this new product can create, and we look at it as three things, in this order—I was just pulling it up on my side. One: can we help you articulate the topics you care about and keep them current? That’s subtle, but it’s hard to keep current—you go on your LinkedIn or your GitHub or whatever and it’s “this was me two years ago” or “I’ve never updated this thing.” Two: can we give you peace of mind about not checking other channels? Three: can we help you make new and valuable connections? If we can do those things, there will be a subset of people who value it enough to pay, or who say, hey, if I’m using it for these things, give me this one additional feature. So you have to have a philosophy—and this is what we’ve been talking about internally—of what would be a paid feature and what would not, and also not dumping every feature out there on day one. Keep it very narrowly focused. I looked at a tangential comp and thought, man, they’ve got everything. So it’s: how do you keep it simple and focused on one use case versus everything under the sun, so you can bring additional functionality later and charge for it, rather than trying to reel functionality back—which is what Loom did, and I think it went down pretty badly.

Matt: Yeah, an interesting point you mentioned—I was chatting with the CTO of HockeyStack the other day on one of our Built Right episodes, and they hit an inflection point. They’d built this product and were doing their customer research—which is something y’all do an awesome job of at Gated—and they had this insight where a customer said, “I don’t want to use 90% of this solution, but this 10% is what I need.” They actually cut the bloat out of the product, created hyper-focus in that one area, and it drove a lot of their growth. And I heard you mention focusing in on core use cases—a super interesting area. One thing y’all do at Gated, though: I always see you out there wanting to talk to more customers, more users. How do you weave that into your process? It’s such a big thing—we do continuous discovery at HatchWorks—but I think you’re one of the better at it, especially while leading a company as the CEO.

Andy: Oh, thanks, man. I appreciate that. I’ve got an article about it on my LinkedIn profile—I can drop it to you; you don’t even need to put it in the show notes. It’s a customer advocacy playbook. I look at it as: first off, the product has these interesting moments of joy and excitement, and we can build little things off of them. We saw a $250 donation off a $2 ask last week, and we were like, hey, that’s pretty cool—and we can reach out and talk to people about it. So it’s engineering the product so there are opportunities to connect with users, and then building the motions to be able to do that. I don’t know when you and I first connected, but you probably had a fun donation or something, or one of our team said, “Hey Matt, congrats,” and you were like, ah, that’s awesome. Ideally the product creates joy, and then you can start a conversation around that. And I’ll be candid, we’ve templatized some of those things. I have this fun thing where donations feed through Slack all day long, and if I see an interesting person make one and think, wow, that was cool, I’ll drop a note—great to see your donation via Gated—and people will be like, oh my God, I love this thing. I literally have a follow-up template for feedback: awesome, thanks so much, we love feedback—is there anything you’d improve? And people respond, you bring them in, and then you can turn them into advocates. I don’t know where I developed it, but I’m really passionate about engaging customers. Now, there’s a balance to it too—you want your product to do that automatically, and we haven’t built all of that into the original product, and on day one we’re not going to build it all into the new one. This is what I’ve spent a lot of my career on: how do you use data in the product to trigger the right team actions? Then over time, with technology and AI, you can start to automate more and more of those things.
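
As a rough sketch of the "product data triggers a team action" pattern Andy describes—donation events feeding a Slack channel so someone can follow up—here's a minimal example. The webhook URL, event shape, and dollar threshold are hypothetical placeholders, not Gated's actual implementation:

```python
import requests

# Hypothetical Slack incoming-webhook URL (configured in your Slack workspace).
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"

# Only ping the channel for donations worth a personal follow-up.
FOLLOW_UP_THRESHOLD = 25.00  # dollars, arbitrary for illustration


def notify_team(donation: dict) -> None:
    """Post a donation event to Slack so a human can decide to reach out."""
    if donation["amount"] < FOLLOW_UP_THRESHOLD:
        return  # below threshold: no interruption, leave it to normal reporting
    message = (
        f":tada: {donation['donor_name']} donated ${donation['amount']:.2f} "
        f"to {donation['nonprofit']} via an email challenge. "
        "Worth a thank-you note and a feedback ask?"
    )
    requests.post(SLACK_WEBHOOK_URL, json={"text": message}, timeout=5)


# Example event as it might arrive from the product's event stream.
notify_team({"donor_name": "Jane Doe", "amount": 250.00, "nonprofit": "Example Fund"})
```

The design choice mirrors what Andy says: the product surfaces the moment of joy automatically, and a person still decides whether and how to start the conversation.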

Matt: I want to pause there. Anybody passively listening right now, this is your indicator to heighten your senses. The piece you did there: you identified these triggers in your users’ journey and you systematized actually talking to those users. You’ve almost built an operating model, a system, for continuous discovery, which is a great way to do it, because everybody says they want to talk to customers, but if you don’t consciously think about it, it’ll fall by the wayside with everything else going on. So that is critical. You’ve identified which inflection points are great opportunities to talk to customers and learn, and you built a system around it. It doesn’t have to be some over-engineered thing, but you have these inflection points. Super interesting. Maybe the last point to get into here: you’re making a pivot, in essence, with Gated. What’s going through your mind when you decide, okay, we’ve got to make a change? You’ve probably been through some pivots in your career. Are there any similarities you’ve seen across them when a pivot’s ready to happen, different things happening at the same time, anything around a pivot in general that triggers in your mind that it’s more than just a small tweak, it’s something fundamental?

Andy: Yeah, it’s interesting. Generally I’m growing companies really rapidly, right? My sweet spot is taking them from $5 million to $100 million in sales, so this is a little earlier stage than I’ve played at before. For me it’s more about seeing the friction in the growth motion. For us, I think we’ve built a product people love, but not everyone can sign up for the original email product. So what we asked is, can we change the world? And if we can’t with the original product, we’re still going to keep it over here, but we’re always thinking about how we get this to millions of people and drive that impact. I don’t think I’ve come across pivots like this on the product side before. I’ve definitely encountered other things, like, okay, we’re not growing fast enough, what other products can we add? But here, I’m conscious you can’t keep adding to product bloat. So on that day in mid-May, the new platform will be our focus, and that’s really important. The existing platform will be there for the original users; we’re not going to kill it. For me it’s about being comfortable making those shifts. To take it a step further: every company I’ve ever been at has struggled with, we can’t ship more stuff. So it’s hire 10 more engineers, and you get no more growth, no more product features. I remember I was talking to this dude at Facebook, this was eight years ago, and I was like, man, you guys are constantly innovating, what’s your secret? And he said, we make the hard decisions. So I was like, okay, give me an example. He said, Messenger. At the time I still used Facebook, and I was like, yeah, I freaking hate how I now have an app for Facebook and an app for Messenger. And he said, I know. We knew it would piss some people off, but by doing that we got the ability to move a lot faster with both products. What I’ve seen consistently within companies is that they’re afraid to make the hard decision. In my historical world it’s go-to-market changes, and within product, which I’ve always partnered deeply with, it’s the hard architectural decisions. Everyone wants to make everybody happy: let’s add this feature. But no one’s saying, okay, we have to kill stuff. Very few companies have that architect slash product person who’s empowered to make those hard calls. And that’s exactly what we’re doing here: we built a product that people absolutely love, I think we send 700,000 challenge emails a month, and the world knows about us, and we’re going to blow it all up to change the world in an even bigger way and bring a new platform.

Matt: And you just hit on one of the core elements of strategy right there: being willing to make those decisions, and many aren’t. That’s where people fail. Being able to do that is so core to strategy, so hats off to y’all for doing it. The other thing I heard is that it tied back to where you wanted to go, back to the manifesto, the impact you wanted to make, and you realized, we can’t do that with where we are today, so we need to make a change. It ties back to that thread of where you want to go and what you want to do. It’s the core elements of strategy being executed really nicely, and I’m really enjoying the journey here. Last question for you, Andy, and I’d like to finish up on this one: what’s something you wish you knew sooner, something you could go back in time and tell your former self? I’ll leave this open-ended; you can take it anywhere you want, whether it’s the product side, the engineering side, or the go-to-market side. What’s one thing?

Andy: What I would tell my former self with Gated? We saw massive early user love and we were like, great, hire. In retrospect, I would have held back on hiring, and I see this across the board: people say, let’s hire a huge sales team, or let’s hire a huge this or that. I would have held back on the hiring until we were like, we literally cannot deal with the deluge, because that would have given us the ability to iterate faster and all that stuff. For us, we probably staffed up early for the growth. We definitely had some, but we didn’t have millions and millions of users on the original product. So now, as we think about personnel, we’re like, how lean and mean can we be, and for as long as we possibly can? The world has also changed, right? Two years ago, when we raised money and when every company raised money, it was like you had infinite capital, and it was available across the board. I see it on product, I see it on sales teams, I see it everywhere. And the world is changing now. Will there be another mad rush of this in a couple of years where everyone is told to go spend as much as you can? Probably, yes. But I think it’s a discipline that I took away.

Matt: I love that. So it’s almost, go lean and mean until it’s painful, until it hurts; you’ve got to have that pain point, right? That’s great. Andy, I appreciate the conversation today. Thanks for joining Built Right. Have a good rest of your day.

Andy: Thank you, Matt. Great talking with you. The last thing I would say is...

Matt: Yeah, that’s a great point, I missed that: where can people find you? How about Gated, at gated.com?

Andy: You can go there; around mid-May the new product will be available. Check out the website, and if people want to email me, it’s andy@gated.com. And if you don’t know me, you’ll still get an ask for a donation.

Matt: Yeah, and I’m a user of Gated, one of the early adopters of it. Love the product, and I can’t wait to see where it’s going next. Definitely check this tool out; it’s a great one: gated.com. Thanks, Andy.

Andy: Thank you, Matt.

