This is Part 3 in my 12-part series documenting what I learned in a business school course called “Technovate Thinking.”
In the previous installment, I wrote about building an interactive program in Scratch. This time, I’m covering Session 2 — a deeper dive into programming concepts and ICT fundamentals.
- 1 What Comes After Conditionals — “Lists” and “Loops”
- 2 “Never Trust User Input” — An IT Instinct That Kicked in Automatically
- 3 Flowcharts Make Complexity Visible
- 4 “Separation of Concerns” — Different Data Roles Deserve Different Homes
- 5 ICT Fundamentals Covered in Session 2
- 6 Conditionals → Lists + Loops → Data Processing — The Building Blocks Stack Up
- 7 Next Up: Transition Diagrams — Visualizing UX Structural Issues Through Screen Flow
- 8 Books Referenced in This Article
What Comes After Conditionals — “Lists” and “Loops”
The pre-work for Session 2 was to build a program that accepts user input and stores it in a list (array) for processing.
Where the previous assignment focused on “branching based on input,” this one centered on “lists” and “loops” — storing data in an array and processing it sequentially. These are among the most fundamental structures in programming.
A program with only conditionals can only handle “how to process this one input right now.” But the moment you add lists and loops, “collecting multiple data points and processing them as a batch” becomes possible. That leap may seem small, but it fundamentally transforms what programming can do.
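The leap described above can be sketched in a few lines of Python (the course used Scratch, so this is an illustrative translation, not the actual assignment code):

```python
def classify_one(value):
    """Conditionals only: one input, one decision, right now."""
    return "high" if value >= 50 else "low"

def classify_batch(values):
    """Lists + loops: collect many inputs and process them as a batch."""
    results = []
    for v in values:               # the loop visits each stored item in turn
        results.append(classify_one(v))
    return results

scores = [72, 31, 50, 8]           # data stored in a list (array)
print(classify_batch(scores))      # ['high', 'low', 'high', 'low']
```

The single-value function is all a conditionals-only program can express; wrapping it in a list and a loop is the small structural addition that makes batch processing possible.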
“Never Trust User Input” — An IT Instinct That Kicked in Automatically
The assignment requirements were straightforward. But as soon as I started building, simply accepting and storing input felt wrong. Years of professional practice had drilled into me the principle: “Never trust user input.”
So I ended up building more than what was asked for. Duplicate entry checks, validation against master data to classify known vs. unknown inputs, separate handling of exception data — none of it was required, but my hands moved on their own.
Input Validation Means “Classify, Verify, Handle Exceptions”
At its core, input validation is a design philosophy of not accepting input at face value, but classifying it, verifying it, and handling exceptions appropriately.
In infrastructure operations, this is everyday work. Monitoring system parameters, configuration management tool settings, deployment script arguments — in every context, the principle of “never trust the input; always validate before processing” is deeply ingrained.
For instance, rejecting duplicate data entries is conceptually identical to the constraint “don’t register the same hostname twice” in server management. Checking input against master data to sort known from unknown is essentially the same design as “how do you handle traffic from IP addresses not on the allow list.” Even in something as simple as Scratch, these real-world design principles transferred directly — and that surprised me.
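The classify-verify-handle-exceptions pattern above might look like this as a minimal Python sketch (the master list, function name, and values are hypothetical, not from the assignment):

```python
MASTER = {"tokyo", "osaka", "nagoya"}   # known-good values: the "allow list"

def classify_input(entry, already_seen):
    """Never trust user input: classify it before processing it."""
    entry = entry.strip().lower()       # normalize before comparing
    if entry in already_seen:
        return "duplicate"              # same hostname twice -> reject
    if entry in MASTER:
        return "known"                  # on the allow list -> normal path
    return "unknown"                    # off the list -> exception handling

seen = {"tokyo"}
print(classify_input("Tokyo ", seen))   # duplicate
print(classify_input("osaka", seen))    # known
print(classify_input("kyoto", seen))    # unknown
```

Every input lands in exactly one of the three buckets, which is what makes the downstream handling predictable.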
The Difference Between “Ask 5 Times and Done” vs. “Keep Going Until You Have 5 Valid Answers”
Adding validation fundamentally changes how you design a loop.
Without validation, it’s “ask 5 times, record 5 answers, done.” With validation, if an invalid input comes in (a duplicate, for example), you don’t advance the counter — you ask again. In other words, it’s not “ask 5 times and stop” but “keep going until you’ve collected 5 valid responses.” It may sound like a subtle difference, but it completely changes the loop’s termination condition.
This shift in thinking shows up constantly in real work. In batch processing, “process 100 records and stop” vs. “process 100 records successfully and stop” require entirely different error-handling designs. Grasping that distinction through a Scratch assignment was an unexpectedly big win.
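The two loop designs can be contrasted directly. This sketch replaces interactive prompts with a scripted input stream so it stays self-contained; the names are illustrative:

```python
def ask_five_times(stream):
    """'Ask 5 times and done': records whatever comes in, valid or not."""
    answers = []
    for _ in range(5):                  # counter advances on every input
        answers.append(next(stream))
    return answers

def five_valid_answers(stream):
    """'Keep going until you have 5 valid answers': an invalid input
    (here, a duplicate) does not advance the counter."""
    answers = []
    while len(answers) < 5:             # termination depends on the VALID count
        value = next(stream)
        if value in answers:            # duplicate -> ask again
            continue
        answers.append(value)
    return answers

inputs = iter(["a", "b", "b", "c", "d", "e"])
print(five_valid_answers(inputs))       # ['a', 'b', 'c', 'd', 'e'] after 6 tries
```

The difference is entirely in the termination condition: a fixed iteration count versus a condition on the accumulated results.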
Flowcharts Make Complexity Visible
When I drew the flowchart for my program, what surprised me most was the depth of the branching.
Inside the main loop, there’s first a branch for the duplicate check. After that, another branch for master data validation. That’s two levels of nesting inside a single loop. And outside the loop, there were additional conditional branches and loops. The flowchart expanded both horizontally and vertically — it wouldn’t fit on a single A4 page.
In the previous assignment, I’d caught myself implementing first and drawing the flowchart afterward. This time, I deliberately drew the flowchart first, then implemented. The result: significantly less time spent stuck on loop design. At the flowchart stage, I could spot gaps — “this branch is missing a case” — before writing a single line of code.
What this experience drove home is that adding even one condition can dramatically increase a flowchart’s complexity. In the controlled environment of visual programming, I was able to viscerally feel the distance between “a program that works” and “a program that handles the unexpected.” Conversely, I could now imagine just how fragile a program without validation really is. That was one of the biggest gains from this assignment.
“Separation of Concerns” — Different Data Roles Deserve Different Homes
In this program, I maintained multiple lists organized by role: master data, user input data, and exception data — each in its own independent list.
Why? Because “data with different roles belongs in different places” felt like the natural approach. It’s the same thinking as “Separation of Concerns” in infrastructure design — keeping master data, input data, and exception data independent from each other.
This structure means that when you want to expand the master data, you just add entries — no changes to the program logic required. The habit of “keeping extension points in the data layer” from the previous assignment carried over naturally.
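In Python terms, the structure described above amounts to three independent lists and one routing rule (all names and data here are illustrative):

```python
master_data = ["apple", "banana", "orange"]   # extension point: just add entries
user_inputs = []                              # validated user entries
exceptions = []                               # inputs that failed validation

def record(entry):
    """Route each entry to its home; the logic never changes
    when master_data grows."""
    if entry in master_data:
        user_inputs.append(entry)
    else:
        exceptions.append(entry)              # kept separately, not discarded

for e in ["apple", "durian", "banana"]:
    record(e)

print(user_inputs)    # ['apple', 'banana']
print(exceptions)     # ['durian']
```

Adding "grape" to `master_data` changes future classification results without touching `record` at all, which is exactly the "extension points live in the data layer" habit described above.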
This is a real-world design principle in action. Separating master tables from transaction tables, separating normal logs from error logs — the scale differs, but the underlying philosophy is the same. “Design habits that emerge naturally in a simple environment are reflections of principles internalized through real-world practice.” Noticing that was a genuinely interesting moment.
ICT Fundamentals Covered in Session 2
Alongside the programming assignment, Session 2 also covered foundational ICT concepts — system architecture and networking fundamentals.
As an infrastructure specialist, this was familiar territory. What was new, though, was reframing it in a business context.
The Challenge of Translating Technical “Common Sense” into Business Language
Take the client-server model, or the difference between cloud and on-premises infrastructure. I use these concepts daily, but I realized I hadn’t trained myself to explain the why behind architectural choices as business decisions.
This becomes a problem in three specific situations.
First, when asked to justify cloud vs. on-premises decisions. Among engineers, “scalability” and “availability” suffice. But for executives, you need to translate: “Why shift from capital expenditure to operating costs?” “What are the legal implications of data residency?” Without business language, the message doesn’t land.
Second, when justifying security investment costs. “We should implement firewalls” or “Let’s add multi-factor authentication” — technically sound, but what executives want to know is “What’s the business risk in dollar terms if we don’t?” Unless you can quantify risk financially, security never gets prioritized.
Third, when explaining the impact of system outages. Reporting “a DNS failure caused a complete service outage” means nothing to executives who don’t know what DNS is. You need to translate it to something like “The phone directory broke, so nobody in the company could make calls.” Only then does it click.
The class discussion centered on exactly this kind of "translation between technology and business." Watching classmates with no technical background struggle to articulate "why choose cloud" in their own words exposed my own blind spot: precisely because I already understood the technology, I had been lazy about translating it into business language.
This session made me realize that the real value of revisiting ICT fundamentals in this course is becoming able to explain technical decisions to people who don’t have a technical background. Even with topics you know well, there’s a significant gap between “understanding” and “being able to explain.”
Conditionals → Lists + Loops → Data Processing — The Building Blocks Stack Up
What hit me hardest in Session 2 was that adding just one more programming building block dramatically expands what you can do.
Last time, it was conditionals only. “If A, then B; otherwise C” — you could branch, but only handle one piece of data at a time. This time, with lists and loops added, “collecting multiple data points and processing them as a batch” became possible.
The natural extensions of this are data matching, deduplication, and exception handling. Master data validation is the prototype of a “database query.” Duplicate checking is the prototype of a “uniqueness constraint.” Separate exception handling is the prototype of “error handling.” It was a small program built in a visual programming environment, but what it does in essence is identical to what happens daily in production business systems.
Looking back, this experience directly laid the groundwork for the sorting algorithms and recommendation system design that came in Session 3 and beyond. Sorting a list (sort), selecting items that match criteria (filter), combining multiple lists (join) — all of these are natural extensions of the list operations learned here.
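Those three operations are one-liners once the list structure is in place. A hedged Python sketch with made-up data:

```python
scores = [("bob", 41), ("ann", 87), ("cat", 65)]

# filter: select items that match a criterion
passing = [name for name, s in scores if s >= 60]

# sort: order the list by a key
ranked = sorted(scores, key=lambda pair: pair[1], reverse=True)

# join: combine two collections on a shared key
emails = {"ann": "ann@example.com", "cat": "cat@example.com"}
contact_list = [(name, emails[name]) for name in passing if name in emails]

print(ranked)        # [('ann', 87), ('cat', 65), ('bob', 41)]
print(contact_list)  # [('ann', 'ann@example.com'), ('cat', 'cat@example.com')]
```

Each one is just "loop over a list, decide per item" with a different decision, which is why the Session 2 material generalizes so directly.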
And there was another takeaway. I learned that input validation — something I take for granted — is not obvious to programming beginners. In IT, validating input is as natural as breathing. But looking at other students’ assignments, many had built perfectly functional programs with no validation at all. Whether you instinctively think about the gap between “it works” and “it works robustly” comes from experience. Conversely, being able to explain why validation matters to someone without that experience — that’s what communication skill as a technologist really means.
Next Up: Transition Diagrams — Visualizing UX Structural Issues Through Screen Flow
The second half of Session 2 introduced “Transition Diagrams” — a tool for mapping out screen flows in a service. What started as a straightforward exercise turned into a surprisingly deep analysis. I’ll cover that next time.