AI Promise Land – Adapt Your Profession

The AI field, and Generative AI specifically, is young and full of promise. It is also a huge space, with abundant possibilities and professional opportunities. The previous two articles, “Befriend AI” and “Make AI work for you. Not the other way around”, went through what it takes to be mentally and practically prepared to live and thrive with AI.

This article wraps up the three-piece mini-series with a focus on professional preparedness:

  • Where do you start? 
  • What would be the right fit for you in the near term and long run? 

AI brings both disruption and gradual innovation to our daily activities, professional ones included. So when trying to adapt professionally in the AI space, we need to decide which part of the AI landscape we want to get into: the disruptive part, the incrementally evolving part, or both.

AI Disruptiveness

Let’s start with disruptiveness. The main attraction and jaw-dropping factor of Generative AI is its fluency in language that sounds intelligent. Its ability to assist us in daily decision making rivals, and at times exceeds, human capabilities. The big paradigm shift is that we can now resort to natural language to communicate with AI and expect a significant productivity boost. We don’t need to know programming languages, there is no need to deal with compilers, and we can forget about needing a Computer Science degree or an advanced STEM PhD.

Prompt engineering combined with our expertise in a particular field – education, marketing, customer service, biology, construction, etc. – will do. In a nutshell, the prompt is the new way we get the computer to fulfill our expectations, simply by talking to it in natural human language. Never before was this achievable to such an astonishing degree as we witness now when communicating with ChatGPT-like systems. Hence the paradigm shift.

Want actionable summaries of the way you do business, some unexpected nudges toward innovation, or a better, unbiased read on market perceptions? All you need to do is master prompting while staying an expert in your respective space.

In the early days of Large Language Models, prompt engineering was finicky and required a lot of preparation to write well-structured, carefully thought-through prompts. There are still plenty of “right” prompt libraries on offer. But keep in mind that LLMs grow more sophisticated with every new version, to the point that “right” prompt engineering boils down to this: just express your intent in natural language, don’t sweat it, and the LLM will provide a decent answer.
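To make the shift concrete, here is a minimal sketch (plain Python, no real API calls) of how a plain natural-language intent becomes the kind of message payload that chat-style LLM APIs typically expect. The function name, model id, and system message are illustrative assumptions, not any vendor’s actual interface:

```python
def build_chat_request(intent: str, model: str = "some-llm-model") -> dict:
    """Wrap a plain natural-language intent into a chat-style request payload.

    No prompt gymnastics required: modern LLMs do well when you
    simply state what you want in everyday language.
    """
    return {
        "model": model,
        "messages": [
            # Optional: a short system message framing your domain expertise.
            {"role": "system", "content": "You are assisting a marketing expert."},
            # The user's intent, expressed in natural language.
            {"role": "user", "content": intent},
        ],
    }

request = build_chat_request(
    "Summarize last quarter's customer feedback and suggest two improvements."
)
print(request["messages"][-1]["content"])
```

The point is that the “engineering” has shrunk to stating your intent; domain expertise goes into what you ask, not how you phrase it.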

AI Incremental Evolution

The magic of AI did not manifest overnight. It is the result of the gradual evolution of scientific methods and engineering enablers over many decades – theory, algorithms, hardware, and the availability of massive data to train AI. That evolution culminated, almost exponentially, at the end of 2022.

In order to understand the different avenues for reinventing yourself in the age of AI, let’s dissect the various areas of AI enablement.

Applied AI

This area concerns the way you:

  • build Large Language Models (sometimes called pre-training),
  • train them,
  • adapt them to your needs (often called post-training, with fine-tuning being a variation of it),
  • productize and operationalize LLMs with all the ensuing considerations – scalability, security, safety, alignment with human values, performance, availability, reliability, and many others.

The closest synonym would be the notion of a platform in the Cloud world as we know it. You need:

  • Software and Hardware Engineers of all sorts with the primary focus on distributed systems,
  • Product and technical program managers.

LLMs need to be trained. To be of use, LLMs must be trained on massive data using elaborate distributed data-processing pipelines. Data needs to be collected, cleaned, filtered, and fed into distributed frameworks with many computing nodes. These distributed systems should have a certain degree of redundancy and fault tolerance to make LLM training reliable and cost-effective. Many distributed-systems practices of the past decade used in Machine Learning applications are still applicable here. This is where traditional distributed-systems engineering lies, and all of that past experience can definitely be repurposed.
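As a toy illustration of the collect–clean–filter stage described above, here is a minimal single-node sketch; real pipelines shard this logic across many worker nodes, and the specific filtering rules below (whitespace normalization, minimum length, exact deduplication) are invented for illustration:

```python
def clean_corpus(raw_docs: list[str], min_words: int = 3) -> list[str]:
    """Clean and filter raw text before it is fed to distributed training.

    Steps: normalize whitespace, drop documents that are too short,
    and deduplicate exact copies -- a tiny stand-in for what large
    pipelines do across many computing nodes.
    """
    seen = set()
    cleaned = []
    for doc in raw_docs:
        text = " ".join(doc.split())       # normalize whitespace
        if len(text.split()) < min_words:  # filter out near-empty docs
            continue
        if text in seen:                   # deduplicate exact matches
            continue
        seen.add(text)
        cleaned.append(text)
    return cleaned

docs = ["  hello   world again ", "hi", "hello world again", "a brand new document"]
print(clean_corpus(docs))  # short and duplicate docs are dropped
```

In production, each of these steps becomes a distributed job with checkpointing and retries, which is exactly where prior distributed-systems experience pays off.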

LLMs need to be deployed in production. LLMs are of no use if they are not interacting with the outside world. You need to put them in production and facilitate communication with users or client applications via an API. Scalability, cost efficiency, performance, load balancing, security, and many other production-related concerns are very much relevant here too. Hence your past experience in distributed-systems engineering is very much applicable.
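One of the production concerns just mentioned, load balancing, can be sketched in a few lines. This is a hypothetical round-robin dispatcher over LLM serving replicas (the class, replica names, and `dispatch` helper are invented for illustration; real deployments use dedicated load balancers and forward requests over the network):

```python
from itertools import cycle

class RoundRobinDispatcher:
    """Spread incoming inference requests evenly across LLM replicas."""

    def __init__(self, replicas: list[str]):
        self._replicas = cycle(replicas)  # endless round-robin iterator

    def dispatch(self, request: str) -> str:
        # A real system would forward the request to the chosen replica;
        # here we just report which replica would serve it.
        replica = next(self._replicas)
        return f"{replica} handles: {request}"

dispatcher = RoundRobinDispatcher(["llm-replica-1", "llm-replica-2"])
for prompt in ["summarize Q3", "draft an email", "translate a doc"]:
    print(dispatcher.dispatch(prompt))
```

The same pattern generalizes: swap round-robin for least-loaded or latency-aware policies, and the rest of the serving stack stays unchanged.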

LLMs need to be integrated into different environments. To make the best of Generative AI, it needs to be integrated with enterprise applications and play a key part in information retrieval and interpretation. On the other hand, Generative AI is a very new technology. Although it is very promising and manifests impressive intelligence, it is not always reliable, nor is it consistently aligned with human values as appropriate to certain circumstances. Hence a new layer of engineering crops up: Forward Deployed Engineering. These are hybrids of consultants and software engineers who engage with customers to deliver on the GenAI promise. They know LLMs well and implement LLM integrations with customers in pilot mode.

Reliability, Safety, and Human Alignment considerations. The novelty and unprecedented capacity of LLMs introduce new challenges: data and model outcomes must be appropriately annotated to ensure the trustworthiness and safety of LLM guidance. This requires a new type of role – annotators who are knowledgeable in a particular field covered by LLMs (e.g., math, sociology, coding). In a sense, they teach the models how to behave in a civil manner.

Research AI

This is the area of foundational ways to:

  • build science behind Large Language Models,
  • optimize algorithms and performance of the LLM building blocks,
  • align LLMs with human values.

LLMs are key to the AI revolution we live in. They capture world knowledge, translate it into rich internal representations, and magically produce intelligent-like outputs, e.g., idea completion, summaries, translations, etc. Because LLMs are based on sophisticated machinery combining cutting-edge advances in deep learning algorithms, information theory, and probability theory, making impactful inroads into optimizing them requires advanced science and engineering knowledge. If you are majoring, or planning to major, in Computer Science, Physics, or Applied Mathematics, you can rest assured that you will be able to make a dent in advancing LLMs to the next level of sophistication.

Here too there are different broad focus areas:

  • Software
  • Hardware
  • Networking
  • Alignment with humans – RLHF (Reinforcement Learning from Human Feedback), Safety
  • Fraud and abuse detection,

to name a few.

Supporting specializations

Of course, a great deal of AI success is attributed to Go-To-Market functions. These are your:

  • Sales Account Executives
  • Customer Success
  • Pre-/post-sale Engineers and Architects
  • Consultants
  • Marketing
  • Growth
  • Communications

It goes without mentioning:

  • Legal
  • Finance
  • Recruiting
  • IT

All these, in a sense, are not that much different from the previous generation of SaaS-driven occupations – except that unique AI perspectives are factored into each of these AI enablers.


I hope this rather brief excursion into the AI Promise Land gave you a quick taste of the kinds of professional (re-)orientation that are possible. Keep getting familiar with AI!

