Why Do I Want to Learn AI Engineering?
These days, two desires coexist in my mind.
- One is the desire to properly understand AI engineering
- The other is the desire to do hands-on projects using AI
In this post, I want to focus on the first one rather than the second (projects). What I'm more curious about right now isn't "What should I build?" but why I've become interested in learning AI engineering at this particular time. If I can't convince myself, no plan will last long. That's obvious. But I've never organized my thoughts well enough to give a solid answer if someone were to ask, "Why do you want to learn AI engineering?" So this post is a record to organize those thoughts.
The Faster Things Change, the More Important Fundamentals Become
At first, I mostly thought, "I just need to use AI well at work." I assumed it would be enough to follow the latest trends through news, articles, and social media, and pick up the necessary tools and techniques as needed. But at some point, fatigue started building up. The ways we use AI change too quickly, and the "right answers" shift just as often. What works today becomes outdated tomorrow, or a better pattern appears right away. This is where my thinking changed.
- In the short term, you need the sense to catch trends.
- But in the long run, wouldn't it help to learn principles like "why AI works that way" even at an individual level (self-study, reading books)?
It was similar when I was learning web development. Fundamentals like core JavaScript stayed with me over time. Even when frameworks changed, having those fundamentals made a real difference in how quickly I could adapt to new environments.
What I'm looking for in AI is ultimately that same kind of sense: the ability to judge and adapt amid chaos, without being shaken by ever-changing trends. To borrow an exercise analogy, it feels like building core strength and stamina, not just mastering one technique.
The Core I Want Is "Problem Solving" 7 : "Understanding Principles" 3
Something became clearer while writing this. The core I want is still product-centered, meaning problem-solving centered.
Right now, I don't want to be "someone who invents new models," but rather someone who better solves problems for myself and those around me (the world, customers).
The problems I face are generally of this kind:
- Users explain problems in words, but requirements change frequently.
- Features need to be built quickly, but quality, cost, and stability must be managed together.
- It seems like adding AI would improve things, but in practice it behaves inconsistently or fails outright.
The skills I want here are not "the ability to memorize trends" but these abilities:
- The ability to judge whether AI is right for this problem or not
- If it is, the ability to decide how to solve it (RAG, tool use, workflow, evaluation, operations)
- When it fails, the ability to design the next attempt without relying on intuition
So I concluded that problem-solving core 7, principles core 3 is the most realistic balance for me right now. It's not about giving up on principles entirely, but holding onto them at the level of "principles for building better products."
AI, Machine Learning, Deep Learning, and AI Engineering
When terminology gets mixed up, direction easily becomes blurred. So let me briefly organize what I understand.
- AI (Artificial Intelligence): Technology in general that makes computers mimic human abilities like learning, understanding, and problem-solving
- Machine Learning (ML): A subfield of AI. Rather than directly coding rules, it learns patterns from data to predict/decide
- Deep Learning (DL): A subfield of ML. Learning method based on multi-layer neural networks. The center of recent generative AI and LLMs
And AI Engineering often doesn't refer to "the model itself" alone. Especially recently, it often includes the process of using foundation models (like LLMs) to build actual products and make them operational.
In other words, the AI engineering I'm interested in is roughly close to these questions:
- How do I choose a model when building this feature?
- Where do I get the data from, and how do I connect it?
- How do I evaluate to manage quality?
- How do I handle breaking points during operation (regression, cost explosion, hallucination, safety)?
```mermaid
flowchart TB
    AI["AI<br/>Artificial Intelligence"] --> ML["ML<br/>Machine Learning"]
    ML --> DL["DL<br/>Deep Learning"]
    DL --> FM["Foundation Models<br/>LLM · Multimodal"]

    subgraph Roles["Role (Job) Perspective"]
        RS["AI Researcher<br/>New Algorithm · Model Research"]
        MLE["ML Engineer / Applied Scientist<br/>Training · Serving · Pipeline"]
        AIE["AI Engineer (GenAI)<br/>FM-based Feature Dev · Eval · Ops"]
        OPS["MLOps / GenAIOps (LLMOps)<br/>Deployment · Monitoring · Governance"]
    end

    AI --- RS
    ML --- MLE
    FM --- AIE
    ML --- OPS
    FM --- OPS
```
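To make the "how do I evaluate to manage quality?" question a bit more concrete, here is a minimal, hypothetical sketch of an evaluation harness in Python. The `call_model` function is a stand-in stub with canned answers, not a real API; in a real version it would call whatever model provider you use, and the pass criterion would be more careful than a substring check.

```python
# Minimal, hypothetical sketch of an LLM evaluation harness.
# `call_model` is a placeholder stub, NOT a real model API.

def call_model(prompt: str) -> str:
    """Stand-in for a real LLM call; returns canned answers for the demo."""
    canned = {
        "capital of France?": "Paris",
        "2 + 2 = ?": "4",
    }
    return canned.get(prompt, "I don't know")

def evaluate(cases: list[tuple[str, str]]) -> float:
    """Run each (prompt, expected) case and return the pass rate.

    Uses a naive case-insensitive substring check as the pass criterion;
    a real harness would use stricter matching or an LLM-as-judge step.
    """
    passed = sum(
        expected.lower() in call_model(prompt).lower()
        for prompt, expected in cases
    )
    return passed / len(cases)

cases = [
    ("capital of France?", "Paris"),
    ("2 + 2 = ?", "4"),
    ("color of the sky?", "blue"),  # not in the canned set -> fails
]
print(f"pass rate: {evaluate(cases):.2f}")  # 2 of 3 cases pass
```

Even a toy loop like this captures the shape of the work: fix a set of cases, run them on every change, and watch the pass rate instead of eyeballing outputs one by one.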
If "An Unshakeable Center" Is the Goal, Is AI Engineering the Right Choice?
The remaining question here is this:
When the goal is "an unshakeable center," is what I'm trying to learn really AI engineering?
My current conclusion is close to "probably yes, but not entirely."
- Because the core I want is product/system-centered (problem solving 7), AI engineering fits well.
- However, for the "understanding principles 3" part, AI engineering alone might leave me wanting more. So I want to bring along at least the basic concepts and intuition of ML/DL.
Ultimately, the goal is not "I studied AI," but to become someone who can solve problems without being shaken even in a rapidly changing environment.
The Story of How I Rode This Wave to Get Here
Looking back, I happened to ride the wave of the times well. I was lucky. I self-studied development in 2020, went through a bootcamp in 2021, and successfully got a job at the end of that year. Since then, there were many moments when I felt the developer market was continuously declining. Still, the reason I didn't collapse was that I had the belief that I was doing what I truly love (my-purpose.en). If not for that, I think it would have been mentally harder every time market trends changed.
What I'll Try Next
- Clarify the scope of what I mean by "AI engineering."
  - Is what I want research-centered or product-centered, and how far does its scope extend?
- Concretize a learning route that fits "problem-solving core 7."
  - (Reading books, running simple experiments, making checklists from evaluation/operations perspectives, etc.)
- To not carry this process alone, I plan to start an AI engineering book study group.
- The recruitment post will be organized and posted in a separate post, not this one.
References
- IBM: What is artificial intelligence (AI)? https://www.ibm.com/think/topics/artificial-intelligence
- AWS: What is machine learning? https://aws.amazon.com/what-is/machine-learning/
- IBM: What is deep learning? https://www.ibm.com/think/topics/deep-learning
- IBM: AI vs ML vs Deep Learning vs Neural Networks https://www.ibm.com/think/topics/ai-vs-machine-learning-vs-deep-learning-vs-neural-networks
- MIT Professional Education: What is Artificial Intelligence Engineering? https://professionalprograms.mit.edu/blog/technology/artificial-intelligence-engineering/
- Chip Huyen, AI Engineering (book overview) https://books.google.com/books/about/AI_Engineering.html?id=S7M1EQAAQBAJ
- Google Cloud: What is MLOps? https://cloud.google.com/discover/what-is-mlops
- Microsoft Learn: GenAIOps (LLMOps) for organizations with MLOps investments https://learn.microsoft.com/en-us/azure/architecture/ai-ml/guide/genaiops-for-mlops