<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Process → Insights→ Action]]></title><description><![CDATA[Welcome to witness the alchemy between Six Sigma, Process Mining and Artificial Intelligence - complementing each other in transforming your operations!]]></description><link>https://www.ramram.ai</link><image><url>https://substackcdn.com/image/fetch/$s_!74zJ!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff917c3e1-5b8d-4789-b5b7-ea0c71be1b52_512x512.png</url><title>Process → Insights→ Action</title><link>https://www.ramram.ai</link></image><generator>Substack</generator><lastBuildDate>Fri, 24 Apr 2026 10:00:44 GMT</lastBuildDate><atom:link href="https://www.ramram.ai/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Ramanathan R]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[ramsthere@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[ramsthere@substack.com]]></itunes:email><itunes:name><![CDATA[Ram]]></itunes:name></itunes:owner><itunes:author><![CDATA[Ram]]></itunes:author><googleplay:owner><![CDATA[ramsthere@substack.com]]></googleplay:owner><googleplay:email><![CDATA[ramsthere@substack.com]]></googleplay:email><googleplay:author><![CDATA[Ram]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Measuring What Matters: Consistency Is the Real Test of Automation]]></title><description><![CDATA[Variation is the spice of life (and spoiler for automation) - Wise old elf]]></description><link>https://www.ramram.ai/p/measuring-what-matters-consistency</link><guid isPermaLink="false">https://www.ramram.ai/p/measuring-what-matters-consistency</guid><dc:creator><![CDATA[Ram]]></dc:creator><pubDate>Tue, 31 Mar 2026 09:31:58 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!74zJ!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff917c3e1-5b8d-4789-b5b7-ea0c71be1b52_512x512.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This is the fourth article in a five-part series on progressively transforming legacy processes using GenAI.</p><p>In the previous articles, we established three foundational ideas.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.ramram.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Process &#8594; Insights&#8594; Action! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p><a href="https://ramsthere.substack.com/p/why-genai-struggles-to-replace-your?r=3ice0">First</a>, successful automation depends on scaffolding - both technical and human - not just better prompts or models.</p><p><a href="https://ramsthere.substack.com/p/know-thy-process-socrates-on-ai-automation?r=3ice0">Second</a>, evaluation metrics must reflect the real subjectivity in your process.</p><p><a href="https://ramsthere.substack.com/p/the-art-of-winnowing-dont-let-edge?r=3ice0">Third</a>, momentum comes from winnowing - separating straightforward cases from judgment-heavy ones so you can ship safely and learn quickly. This increases automation momentum &#8212; and equally important, it creates visibility for the policy teams to resolve the ambiguities that volume had previously hidden.</p><p>Once these foundations are in place, as you winnow more, a new question emerges:</p><p><strong>How consistent are your automated decisions?</strong></p><p>This matters especially in a GenAI-powered system, which almost always gives different responses to the same question. </p><p>Needless to say, in the world of operations, consistency is what turns isolated success into dependable execution. Inconsistent automated decisions aren&#8217;t just a technical nuisance. They carry a real business cost - they&#8217;re a compliance and repricing liability. </p><div><hr></div><h3>Measuring consistency</h3><p>A useful way to think about consistency is through an analogy familiar to many operations leaders: <strong>Gauge R&amp;R studies</strong>.</p><p>In those studies, the same measurement is repeated multiple times under identical conditions to test repeatability. If the same part produces different measurements each time, the measurement system cannot be trusted&#8212;no matter how sophisticated the tool.</p><p>GenAI systems require the same discipline.</p><p>The principle is simple:</p><p><strong>Given the same inputs, how often does your system produce the same decision?</strong></p><p>Let us return to the running example from the previous articles - <em>&#8220;identifying industry codes from business names&#8221;</em>.</p><p>To measure consistency, we ran the same few hundred businesses through the system multiple times, often 30 or more iterations, with identical inputs.</p><p>We then categorized the outcomes:</p><p>- Cases that produced the <strong>same code every time</strong></p><p>- Cases that alternated between <strong>two plausible codes</strong></p><p>- Cases that fluctuated across <strong>three or more codes</strong></p>
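<p>To make the harness concrete, here is a minimal Python sketch of the repeatability check. It assumes your GenAI classifier is passed in as a function, and 30 runs is just an illustrative default:</p><pre><code>from collections import Counter

def repeatability_profile(cases, classify_business, runs=30):
    """Bucket each case by how many distinct codes it yields across runs.
    classify_business is your GenAI pipeline wrapped as a function."""
    buckets = {"stable": [], "two_codes": [], "three_plus": []}
    for case in cases:
        codes = Counter(classify_business(case) for _ in range(runs))
        if len(codes) == 1:
            buckets["stable"].append(case)
        elif len(codes) == 2:
            buckets["two_codes"].append(case)
        else:
            buckets["three_plus"].append(case)
    return buckets
</code></pre>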
<p>That alone provides useful insight. But operationally, it is still incomplete.</p><p>The real question is:</p><p><strong>Does this variation actually matter?</strong></p><p>So we went one step further. What counts as material variation is not universal - it is defined entirely by your downstream decision engine.</p><p>We traced the downstream impact of these variations. In particular:</p><p>- How many fluctuating codes led to <strong>different pricing outcomes</strong>?</p><p>- When pricing changed, <strong>how large was the impact</strong>?</p><p>- Did the variation create <strong>material risk</strong>, or just harmless noise?</p><p>This step is often overlooked. It should not be. </p><p>Because improving consistency always comes at a cost. It could be more prompts, more validation logic, more tokens, more latency. Before investing in tighter consistency, you must first determine whether the variation reduction is economically meaningful <em>within the bounds of your system</em>.</p><p>Consistency improvement is not merely a technical exercise. It is an <strong>ROI decision</strong>. </p><div><hr></div><h3>Understanding the sources of variation</h3><p>Once variation is measured, the next task is to understand why it exists.</p><p>GenAI systems, at their core, are probabilistic engines. Even with identical inputs, slight shifts in interpretation can lead to different outputs. But in practice, most variation is not random. It usually traces back to identifiable causes. In fact, the largest driver of variation was not the model, <em>but disagreement between reference sources that humans had silently worked around for years</em>. The list boils down to the usual suspects - conflicting signals in inputs, broad prompts, inconsistent reference material, genuine business ambiguity, and some inherent randomness attributable to LLMs. </p><p>Let us understand this better with an example. In the case of business code determination:</p><ul><li><p>There could be multiple businesses with the same name, especially those with common names</p></li><li><p>Your reference manual to classify the business could be inconsistent or overlapping</p></li><li><p>The business could claim to do multiple things - which doesn&#8217;t neatly fit into a classification code</p></li></ul><h3>Improving consistency - knowing the trade-offs</h3><p>Once sources of variation are understood, the next step is choosing interventions. Each intervention improves consistency - but at a cost. Operational leaders must choose deliberately. </p><p>Here are some of the approaches that worked for us.</p><p><strong>Strengthen input gating</strong></p><p>Many inconsistent outcomes originate from ambiguous inputs. By tightening input acceptance criteria&#8212;an idea introduced earlier in the <a href="https://ramsthere.substack.com/p/the-art-of-winnowing-dont-let-edge?r=3ice0">winnowing discussion</a>&#8212;you prevent problematic cases from entering automated paths.</p><p><em>Trade-off: Reduced coverage.</em></p><p>You improve stability, but fewer cases qualify for automation. In one of our implementations, we could only process two-thirds of the eligible volume due to the strict gating criteria. Needless to say, as we bring more clarity to the rules, we will be able to increase this number by winnowing more.</p><div><hr></div><p><strong>Break problems into atomic steps</strong></p><p>Complex decisions are often better handled as sequences of smaller, explicit judgments rather than a single large prompt.</p><p>Best practices include:</p><p>- Using structured prompts</p><p>- Providing clear examples</p><p>- Encouraging the system to abstain when uncertain</p>
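<p>As a sketch of what this decomposition can look like in the industry-code example - the three helpers are hypothetical, each standing in for one narrow, structured prompt:</p><pre><code># Sketch: one large judgment split into smaller, explicit steps.
# Each helper is a hypothetical wrapper around one narrow prompt.

def classify_atomically(case, extract_activities, match_codes, pick_code):
    activities = extract_activities(case)    # step 1: what does the firm do?
    if not activities:
        return "ABSTAIN"                     # abstain instead of guessing
    candidates = match_codes(activities)     # step 2: shortlist plausible codes
    if len(candidates) == 1:
        return candidates[0]                 # unambiguous: stop early
    return pick_code(case, candidates)       # step 3: narrow final choice
</code></pre><p>Each step can then be logged, evaluated, and improved independently, which is where the added control comes from.</p>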
<p><em>Trade-off: Increased latency and token usage.</em></p><p>You gain clarity and control but consume more compute. In our case, each additional call to a reasoning LLM cost us more tokens and significantly more time (about 30 seconds per reasoning call).</p><div><hr></div><p><strong>Stabilize reference material</strong></p><p>If your decision depends on external knowledge, the reference layer must be reliable.</p><p>If reliance on model memory introduces ambiguity, consider using curated reference systems such as retrieval-based architectures (RAG).</p><p><em>Trade-off: Latency and infrastructure cost.</em></p><p><strong>But the gain in determinism is often substantial.</strong> It does cost more, though. Even with semantic search, this increased our token consumption by 3x. </p><div><hr></div><p><strong>Introduce corroboration logic</strong></p><p>When ambiguity cannot be avoided, narrow the set of possible outcomes and use additional evidence to confirm the final decision. This mimics how experienced operators behave&#8212;cross-checking before committing.</p><p><em>Trade-off: Latency and system complexity. This requires access to the web, databases, or other application details, making the system more complex.</em></p><div><hr></div><p><strong>Use consensus mechanisms</strong></p><p>In high-risk decisions, multiple models or evaluators can be used to reach consensus.</p><p>Agreement across independent evaluations increases reliability.</p><p><em>Trade-off: Token consumption and processing time.</em></p><p>This approach should be reserved for decisions where the cost of error is significant.</p><div><hr></div><p><strong>To sum it up, all of our endeavors in improving consistency from the initial implementation increased token cost by 3x and latency by 6x. But it was a deliberate and conscious choice, as this automation formed the bedrock of all the decisions that follow. </strong></p><p>It was a deliberate choice to get the foundation right, but not a permanent one. We are actively working on optimizing the system now that the consistency baseline is established.</p><p>The sequence matters: stabilize first, then optimize. Doing it the other way around is how you end up optimizing for the wrong thing.</p><div><hr></div><h3>Consistency is not a one-time certification</h3><p>Many teams treat consistency testing as a milestone. That is a mistake.</p><p>Consistency drifts over time.</p><p>Inputs evolve. Policies change. Reference data shifts. Models are upgraded.</p><p>Without periodic testing, drift goes unnoticed until failures surface in production.</p><p>Two practices are essential:</p><p>- Repeatability testing at defined intervals</p><p>- Continuous monitoring for decision drift</p><p>Golden datasets&#8212;introduced in earlier discussions on evaluation&#8212;remain valuable here. They provide a stable reference point for tracking change across system versions.</p>
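<p>A minimal sketch of such a periodic check, assuming a stored golden dataset of (case, expected decision) pairs and the same function-style classifier as before:</p><pre><code>def drift_report(golden_cases, classify_business):
    """Re-score the golden dataset and report agreement with expected decisions."""
    disagreements = []
    for case, expected in golden_cases:
        actual = classify_business(case)
        if actual != expected:
            disagreements.append((case, expected, actual))
    agreement = 1 - len(disagreements) / len(golden_cases)
    return agreement, disagreements

# Run at defined intervals; alert when agreement drops below your baseline.
</code></pre>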
<div><hr></div><h3>The real lesson</h3><p>Consistency improvement is not free.</p><p>Every additional layer of determinism consumes time, tokens, infrastructure, or coverage.</p><p>Before tightening your system, ask:</p><ul><li><p>Does the variation materially affect outcomes?</p></li><li><p>Does it introduce operational risk?</p></li><li><p>Does reducing it produce measurable value?</p></li></ul><p>If the answer is yes, invest deliberately. </p><p>If not, accept bounded variation and move forward.</p><p>Operational excellence is rarely about perfection. </p><p>It is about controlled reliability. </p><p>The wise old elf was right &#8212; variation is the spoiler that forces you to choose what actually matters. You can&#8217;t fight all of it. Pick your battles deliberately.</p><div><hr></div><h3>Where this leads next</h3><p>At this stage in the journey, most organizations have:</p><ul><li><p>Defined realistic evaluation metrics</p></li><li><p>Built momentum through winnowing</p></li><li><p>Stabilized decision consistency</p></li></ul><p>What remains is the final challenge in the &#8216;think big, start small and scale fast&#8217; triad.</p><p>How do you scale the automation, measure the impact, and govern these systems at scale? That&#8217;s where we go next. </p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.ramram.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Process &#8594; Insights&#8594; Action! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[The Art of Winnowing: Don’t Let Edge Cases Hold Your GenAI Pilot Ransom]]></title><description><![CDATA[Why &#8220;Process Forks&#8221; are the secret to scaling automation]]></description><link>https://www.ramram.ai/p/the-art-of-winnowing-dont-let-edge</link><guid isPermaLink="false">https://www.ramram.ai/p/the-art-of-winnowing-dont-let-edge</guid><dc:creator><![CDATA[Ram]]></dc:creator><pubDate>Sun, 15 Feb 2026 17:02:14 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!74zJ!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff917c3e1-5b8d-4789-b5b7-ea0c71be1b52_512x512.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This is the third article in a multipart series on the nuances of deploying Gen AI for workflow automation. </p><p>In the previous articles, we discussed <a href="https://ramsthere.substack.com/p/why-genai-struggles-to-replace-your">why context alone isn&#8217;t enough</a> and how to set <a href="https://ramsthere.substack.com/p/know-thy-process-socrates-on-ai-automation">evaluation metrics that reflect reality</a>. Once you have a statistically sound way to measure success, the next challenge is <strong>momentum.</strong></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.ramram.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Process &#8594; Insights&#8594; Action! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>The biggest killer of GenAI pilots is the &#8220;all-or-nothing&#8221; fallacy: the belief that if a model cannot handle a solid majority of cases, it is not ready for production. This perfection trap stalls teams indefinitely. The practical way out is the <strong>art of winnowing</strong>: separating the straightforward cases from the messy ones, so you can ship, learn, and iterate without breaking the business.</p><p>A fair objection here is: <em>aren&#8217;t we just optimizing for what&#8217;s easy, not what&#8217;s valuable?</em> Sometimes the messy cases do carry most of the risk or economic impact. That is true. Winnowing is not a value strategy; it is a <strong>delivery strategy</strong>. It buys you learning velocity, operational trust, and stable foundations so you can tackle the high-impact complexity deliberately instead of betting everything on a brittle big bang.</p><div><hr></div><h3>The Three-Step Framework</h3><p>To winnow effectively, you must identify <strong>process forks</strong>&#8212;the points where a case stops being straightforward and requires judgment, policy interpretation, or human context. These forks typically appear in three places.</p><p><strong>Step 1: Input Gating</strong></p><p>Analyze the segments of your input. Identify which ones are &#8220;clean&#8221; and which require nuanced treatment. If the data entering the system is ambiguous, downstream automation will almost certainly fail.</p><p><strong>Step 2: In-Process Logic</strong></p><p>Once the input is accepted, identify where rules blur during execution. These are the places in your SOPs that say, &#8220;It works this way for most, but it depends for the rest.&#8221;</p><p><strong>Step 3: Output Resolution</strong></p><p>Finally, look at the decision stage. Nuance often creeps in at the time of the final choice. If the AI cannot reach a high-confidence conclusion based on defined rules, the process must fork to a human.</p><div><hr></div><h3>A simple example: Industry code identification</h3><p>Consider a common task: identifying industry codes for a firm. At a high level, it sounds simple. In practice, the forks look like this:</p><p>- <strong>Input Forks:</strong> What if the business has no website? What if three firms have nearly identical names?  </p><p>    <em>Winnowing action: Automate only firms with a verifiable, unique digital footprint.</em></p><p>- <strong>In-Process Forks:</strong> What if the business claims to do multiple things? Which source do you trust if the website contradicts a registry?  </p><p>    <em>Winnowing action: Automate firms with a single dominant activity; flag multi-activity firms for review.</em></p><p>- <strong>Output Forks:</strong> What if a business qualifies for two codes with equal validity? How do you separate high-confidence from lukewarm matches?  </p><p>    <em>Winnowing action: Automate cases where the confidence delta clears a defined threshold; route the rest.</em></p>
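<p>The output fork, in particular, reduces to a simple rule. A minimal sketch, where the 0.2 delta is a hypothetical threshold you would calibrate to your own risk tolerance:</p><pre><code>def route_case(scored_codes, delta=0.2):
    """scored_codes: (code, confidence) pairs, sorted by confidence, descending.
    Automate only when the top candidate clearly beats the runner-up."""
    if not scored_codes:
        return "HUMAN_REVIEW"
    if len(scored_codes) == 1:
        return scored_codes[0][0]       # a single qualifying code: automate
    (top, p1), (_, p2) = scored_codes[0], scored_codes[1]
    if p1 - p2 >= delta:                # the confidence delta clears the bar
        return top
    return "HUMAN_REVIEW"               # lukewarm match: fork to a human
</code></pre>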
<div><hr></div><h3>Fixing the running train</h3><p>Scaling AI in workflow automation is never a zero-to-one move. You are fixing a running train. Business continuity matters as much as transformation (if not more).</p><p>By prioritizing the straightforward subset, you keep evaluation honest and production stable. This provides the air cover needed to iterate on the messy cases that usually carry the most learning and, often, the most value. Over time, the boundary should move. </p><div><hr></div><h3>A note of warning: Stay close to the Gemba</h3><p>Winnowing carries a real risk: <strong>false segmentation</strong>. Your segments must be carved along the natural fissures of the work itself, not along what is convenient for data access or system boundaries. If you segment by region because the data is easy, but complexity is actually driven by product type, you will hit the same walls again.</p><p>Finally, be honest about the optics. Winnowing can make metrics look good if you forget to report <strong>coverage</strong>. Always pair performance with the percentage of volume automated. </p><div><hr></div><p><strong>Well begun is half done. Now, let&#8217;s get it done.</strong></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.ramram.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Process &#8594; Insights&#8594; Action! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Know Thy Process — Socrates (on AI automation)]]></title><description><![CDATA[Why evaluation clarity must come before scaffolding]]></description><link>https://www.ramram.ai/p/know-thy-process-socrates-on-ai-automation</link><guid isPermaLink="false">https://www.ramram.ai/p/know-thy-process-socrates-on-ai-automation</guid><dc:creator><![CDATA[Ram]]></dc:creator><pubDate>Wed, 28 Jan 2026 08:02:20 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!74zJ!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff917c3e1-5b8d-4789-b5b7-ea0c71be1b52_512x512.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Before we dive deep into specific scaffolding patterns, there is one foundational question you need to get clarity on:</p><p><strong>How are you evaluating your AI automation of operational workflows?</strong></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.ramram.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Process &#8594; Insights&#8594; Action! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>This is the second article in a multipart series on AI-driven workflow automation. If you have not read the <a href="https://ramsthere.substack.com/p/why-genai-struggles-to-replace-your">previous article</a> on scaffolding, I recommend starting there, as this piece builds directly on it.</p><div><hr></div><p><strong>The problem with evaluation metrics</strong></p><p>Many pilot programs, including ones I have personally worked on, begin with a simplistic and idealistic view of evaluation:</p><p><strong>&#8220;Replicate humans. Period.&#8221;</strong></p><p>By how much?</p><p>&#8220;Let&#8217;s say 99 percent.&#8221;</p><p>That sounds ambitious. Too ambitious?</p><p>&#8220;Fine, let&#8217;s start with 90 percent.&#8221;</p><p>Nice round numbers. Comforting. Familiar.</p><p>They are chosen much like p-values in statistics and machine learning. Rarely questioned. And even when they are, the discussion is usually framed using analogies to surveys or industry benchmarks.</p><p>What is missing is a <strong>customized, objective view grounded in the reality of the specific process being automated</strong>.</p><p>An arbitrary evaluation criterion can do real damage.</p><ul><li><p>Set it too low, and you build weak guardrails that create downstream risk.</p></li><li><p>Set it too high, and the pilot never scales because the targets are unnaturally steep.</p></li><li><p>Choose the wrong metric, and you gain false confidence.</p></li></ul><p><strong>So how should this be done?</strong></p><p>Yes, the intent is often to replicate human decision-making.</p><p><em>But it needs to be done in a deeper and more honest way.</em></p><p><strong>Step 1: Understand the subjectivity of the metric you are trying to compare</strong></p><p>Think of any operational decision as lying on a spectrum.</p><p>At one extreme, it is completely objective.</p><p><em>For example, counting the number of cheque bounces in a bank statement.</em></p><p><strong>At the other extreme, it is highly subjective.</strong></p><p><em>For example, deciding whether a slightly blurry bank statement is acceptable.</em></p><p>A simple rule of thumb helps here:</p><p><strong>Given the same input, is it possible that two competent operators arrive at different outcomes?</strong></p><p>If the answer is no, the decision is straightforward and strict matching may be appropriate.</p><p>Most of the time, however, the answer is yes.</p><p>One important caveat:</p><p><em>If subjectivity stems from an <a href="https://ramsthere.substack.com/p/variation-reduction-and-digital-transformation">ambiguous policy</a>, no evaluation metric will stabilize automation. Those issues must be addressed first.</em></p><p><strong>Step 2: Identify the sources of subjectivity</strong></p><p>Once you acknowledge subjectivity, the next step is to understand where it comes from.</p><p>Consider a common step in lending workflows: identifying the industry of a business based on its name and description.</p><p>Different analysts may arrive at different classifications for several reasons:</p><p>1. The business may operate across multiple activities that do not map cleanly to a single industry code.</p><p>2. The business description may vary across sources such as the website, directories, and registrations.</p><p>3. The bank statement may reveal a revenue pattern that conflicts with the stated business activity.</p><p>4. Analysts may prioritize different sources or interpret the same information differently based on experience.</p><p>These are not edge cases. They are the norm.</p><p>The best way to uncover these nuances is by combining process intelligence, such as process mining and task mining, with <a href="https://ramsthere.substack.com/p/the-story-of-how-taiichi-ohno-helped">Gemba</a>. Observing operators in action builds intuition about both the <strong>causes</strong> and the <strong>extent</strong> of variation in your evaluation metric.</p><p><strong>Step 3: Choose an evaluation criterion that reflects reality</strong></p><p>With this understanding, you can now choose an evaluation approach that mirrors how the process actually works.</p><p>In the industry classification example, is a strict one-to-one match the right metric?</p><p>It can be measured. But it may not be the right gating criterion.</p><p>Alternatives such as top-k matches or common-intersection matches between humans and AI often better represent alignment with human reasoning.</p><p>If your downstream system insists on a single auditable value, you can still enforce deterministic resolution rules, while using top-k or intersection-based alignment as the evaluation gate.</p><p>There is no universally correct answer here.</p><p><em>The right metric is the one that best reflects your process reality, risk tolerance, and decision context.</em></p>
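<p>Both alternatives are easy to state precisely. A minimal sketch of the two gates in Python, where k=3 is only an illustration and human_codes allows for multiple blinded operators:</p><pre><code>def topk_match(human_code, ai_codes_ranked, k=3):
    """Credit the AI when the human's code appears in its top-k candidates."""
    return human_code in ai_codes_ranked[:k]

def intersection_match(human_codes, ai_codes):
    """Credit the AI when humans and AI share at least one plausible code."""
    return not set(human_codes).isdisjoint(ai_codes)
</code></pre>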
<p><strong>Step 4: Build a statistically sound golden dataset</strong></p><p>Once the metric is defined, select a statistically significant golden batch.</p><p>This is not new territory. Use standard sampling principles to ensure the dataset is representative of real volume, variability, and complexity.</p><p>Also remember that golden datasets have an expiry date. They remain golden only as long as they continue to represent the process. As policies, volumes, and behaviors drift, evaluation datasets must be refreshed.</p><p><strong>Step 5: Measure human-to-human variation</strong></p><p>Have multiple operators, with varying levels of expertise, independently execute the process on the golden batch in a blind manner.</p><p>Compare their outcomes.</p><p>This exercise gives you a clear picture of the natural variation that already exists among humans. This variation defines feasibility bounds and highlights where judgment, ambiguity, or inconsistency lives.</p><p>To be clear, this is not an argument to institutionalize human variation in automated systems. The goal is to minimize it. However, once you choose the metric you want to hold your AI system to, you need a well-rounded understanding of current human performance in the system.</p>
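<p>A minimal sketch of that comparison, assuming each operator labels the same golden batch in the same order; chance-corrected statistics such as Fleiss&#8217; kappa are the stricter alternative:</p><pre><code>from itertools import combinations

def human_agreement(labels_by_operator):
    """labels_by_operator: {operator_name: [decision per golden case]}.
    All label lists cover the same cases in the same order.
    Returns the average pairwise agreement rate across operators."""
    rates = []
    for (_, a), (_, b) in combinations(labels_by_operator.items(), 2):
        matches = sum(1 for x, y in zip(a, b) if x == y)
        rates.append(matches / len(a))
    return sum(rates) / len(rates)
</code></pre><p>The number this returns is the realistic ceiling against which a 90 or 99 percent target should be judged.</p>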
<p><strong>Why this matters</strong></p><p>By following these five steps, you achieve three outcomes.</p><p><strong>First</strong>, you define an evaluation metric that is fair, defensible, and grounded in your specific process.</p><p><strong>Second</strong>, you quantify existing human variation, which allows you to set realistic and meaningful targets for your GenAI pilot.</p><p><strong>Third</strong>, you give decision makers confidence that automated performance is comparable to or better than current operations.</p><p><strong>Closing thoughts</strong></p><p>Replicating humans is not the ultimate goal of automation. Consistency, risk reduction, and scalability often matter more. But human performance remains a <strong>reasonable and practical starting reference.</strong></p><p>Workflow automation using GenAI is often a deeply reflective exercise. It forces organizations to confront their own process maturity, policy clarity, and decision consistency. Done systematically and objectively, this approach provides the confidence needed to move beyond pilots and into scaled deployment.</p><p>With evaluation clarity in place, we can now move forward to assessing AI performance against meaningful metrics and strengthening workflows through scaffolding.</p><p><strong>Well begun is half done.</strong></p><p>PS: Following are links to some of the arXiv papers I found useful on evaluation methodologies. The measurement system analysis in Six Sigma also deals elaborately with quantification of variation in the measurement system.</p><p><a href="https://arxiv.org/abs/2307.03109">https://arxiv.org/abs/2307.03109</a> - A survey on evaluation of large language models</p><p><a href="https://arxiv.org/abs/2309.16349">https://arxiv.org/abs/2309.16349</a> - Human feedback is not gold standard</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.ramram.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Process &#8594; Insights&#8594; Action! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Why GenAI struggles to replace your process intern and what actually works]]></title><description><![CDATA[Better context is an incomplete answer]]></description><link>https://www.ramram.ai/p/why-genai-struggles-to-replace-your</link><guid isPermaLink="false">https://www.ramram.ai/p/why-genai-struggles-to-replace-your</guid><dc:creator><![CDATA[Ram]]></dc:creator><pubDate>Sun, 18 Jan 2026 14:00:46 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!74zJ!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff917c3e1-5b8d-4789-b5b7-ea0c71be1b52_512x512.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Have you ever noticed something odd?</p><p>The same models that can solve extremely difficult exams, write code, reason about physics, and summarize dense legal texts often struggle to replace a junior operations analyst following your process.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.ramram.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Process &#8594; Insights&#8594; Action! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>Your first instinct might be to say, &#8220;Ah, the model is missing context.&#8221; But that explanation is too convenient, and more importantly, it does not actually solve the puzzle.</p><p><em>So what is the missing ingredient that turns GenAI brilliance into real automation?</em></p><p>The answer lies in scaffolding, and not just technical scaffolding that shapes the system. Equally important is non-technical scaffolding that shapes human behavior around the system.</p><p>Let me explain.</p><p>Anyone who has spent time in real operations knows that actual processes are far messier than what appears in your process maps or SOPs. They are full of exceptions, shortcuts, unwritten rules, judgment calls, and &#8220;this is how we do it here&#8221; habits that rarely make it into documentation.</p><p>GenAI experts do not usually come in to fix this mess. In fact, their biggest contribution is often to reveal it.</p><p>How? By building a GenAI proof of concept based strictly on your documented process, only to find that it matches human decisions for a surprisingly small fraction of real cases. Suddenly, what looked clean on paper looks chaotic in production.</p><p>The instinctive reaction at this point is predictable:<br>&#8220;See? It doesn&#8217;t work. Maybe the next version of ChatGPT will be smarter. 
Let&#8217;s move on to another POC.&#8221;</p><p>Except that this misses the real insight.</p><p>What the low match rate is actually telling you is that your process map was incomplete. The nuances, exceptions, and tacit knowledge never surfaced in the one-hour demo where an analyst walked the GenAI team through a few happy paths and a couple of edge cases.</p><p>At this stage, many organizations jump to a neat-sounding solution.</p><p>&#8220;Let&#8217;s fix the process first. Make it consistent. Document everything from L1 to L5. Then feed that context to GenAI. Problem solved.&#8221;</p><p>That sounds reasonable until you try doing it.</p><p>Anyone who has seriously worked on process improvement knows this can take months, years, or sometimes forever. Many &#8220;process gaps&#8221; are actually policy gaps in disguise. Others are pragmatic exceptions that keep the business moving under time pressure or incomplete information. Cleaning all of this up before touching GenAI is rarely realistic.</p><p><em>So what is a more practical way forward?</em></p><p>Instead of treating GenAI automation as an all-or-nothing, end-to-end transformation, think in salami slices.</p><p>A workable playbook looks like this:</p><ol><li><p><strong>Observe the real work, not just the process map.</strong><br>Use process analytics, but also do Gemba. Sit with analysts, watch how they actually make decisions, and listen to why they do what they do. This surfaces the &#8220;rules of the jungle.&#8221;</p></li><li><p><strong>Understand what today&#8217;s GenAI models are actually good at.</strong><br>More often than not, current models are capable enough to handle a meaningful portion of what humans already do, just not everything.</p></li><li><p><strong>Pick the cleanest segments first.</strong><br>Identify parts of the process where decisions are relatively consistent and rules are clear. Automate those first. Your match rates here will be high because these are genuinely straightforward cases. (See the sketch after this list.)</p></li><li><p><strong>Adjust policy and process to support partial automation.</strong><br>Do not wait for a perfect process. Improve it incrementally around what you automate.</p></li><li><p><strong>Repeat.</strong></p></li></ol>
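<p>A minimal sketch of that selection step, assuming you have already measured how often humans agree with each other on each segment&#8217;s historical decisions:</p><pre><code>def pick_first_slices(segment_agreement, min_agreement=0.9):
    """segment_agreement: {segment_name: human-human agreement rate}.
    Automate first where humans already agree; keep the rest visible for later."""
    first = {s: r for s, r in segment_agreement.items() if r >= min_agreement}
    later = [s for s in segment_agreement if s not in first]
    return sorted(first, key=first.get, reverse=True), later
</code></pre><p>The 0.9 floor is illustrative; the point is to rank slices by existing decision consistency, not by data convenience.</p>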
<p>In practice, what has worked for us is a simple but powerful &#8220;cheat code.&#8221; Separate the straightforward cases from the messy ones. Let GenAI handle what is already consistent and high-agreement. Let humans continue managing the judgment-heavy, ambiguous cases, but make those cases visible and learn from them over time.</p><p>One slice at a time: with each iteration, the process gets cleaner, the automation gets broader, and the overall system becomes more reliable.</p><p><em>In my next article, I will describe the scaffolding patterns, both technical and non-technical, that have made this approach work in practice.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.ramram.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Process &#8594; Insights&#8594; Action! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[The story of how Taiichi Ohno helped our Gen AI initiative]]></title><description><![CDATA[Let me take you through the unlikely journey of how an underappreciated aspect of process intelligence reshaped the trajectory of a Gen AI initiative.]]></description><link>https://www.ramram.ai/p/the-story-of-how-taiichi-ohno-helped</link><guid isPermaLink="false">https://www.ramram.ai/p/the-story-of-how-taiichi-ohno-helped</guid><dc:creator><![CDATA[Ram]]></dc:creator><pubDate>Fri, 28 Nov 2025 08:25:48 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!74zJ!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff917c3e1-5b8d-4789-b5b7-ea0c71be1b52_512x512.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Let me take you through the unlikely journey of how an underappreciated aspect of process intelligence reshaped the trajectory of a Gen AI initiative. It is not flashy. It is not cool. But it is time tested, and it works.</p><p>Interested? Read on.</p><p>TLDR: Most Gen AI projects in operations do not fail because of weak models. They fail because the process they are meant to augment is only half understood. The real unlock comes from layering process intelligence end to end: the intended design, the execution patterns, and the human reality uncovered only through Gemba. A simple example from industry classification shows how each layer reveals hidden nuances that completely change how AI should be evaluated. If you want meaningful lifts in operational AI, start by understanding how the work is truly done.</p><h3>Why enterprise AI stalls</h3><p>Enterprise AI adoption is nowhere near expectations.</p><p>The stock market calls it a bubble. McKinsey, in its State of AI report, notes that over 67 percent of projects have not taken off. Ilya Sutskever recently remarked that the &#8216;economic impact is dramatically behind&#8217;.</p><p>From my experience in operational AI, the missing ingredient in many cases is the lack of process intelligence.</p><p>Not model tuning.</p><p>Not larger context windows.</p><p>Not more GPUs.</p><p>Just a weak understanding of the process the model is meant to support.</p><h3>Missing layer &#8211; full-stack process intelligence</h3><p>Process intelligence is not one concept. It is a stack.</p><p>It has three layers:</p><ul><li><p>The intended design that sits in the process repository</p></li><li><p>The execution patterns observed through process mining and task mining</p></li><li><p>The human reality that drives how work is actually done</p></li></ul><p>Most teams capture the first two. Very few understand and acknowledge the third.</p><p>Focus groups and demos cannot give you the tacit knowledge. They capture only the dominant and palatable version of the process. The exceptions, the shortcuts, the edge cases, and the tribal knowledge emerge only through Gemba. 
That is, by observing operators in action, repeatedly and without filters.</p><h3>How does this shape Gen AI implementation?</h3><p>Let me illustrate this with an underwriting example.</p><p>Classifying a business into the right industry code is a foundational task. If you get this wrong, every downstream decision suffers. On the surface, the task is simple. Read the description. Compare it to a taxonomy. Pick the closest SIC or NAICS code. Prompt it. Evaluate it on a golden batch.</p><p>Done? Not really.</p><p>Here is how the understanding evolved.</p><h4>Stage 1: The Big Picture View</h4><p>From the process repository and a few demos, the task looks straightforward.</p><p>You classify the business and match it to the code.</p><p>You think a simple prompt should work.</p><p>Evaluation should align with human choices 95 percent of the time.</p><p>Except, it does not.</p><p>Why?</p><h4>Stage 2: The Process Mining View</h4><p>Process mining shows that the average handling time is X minutes, but the distribution is bimodal. Some cases are very quick. Some take disproportionately long.</p><p>Why the split?</p><h4>Stage 3: The Task Mining View</h4><p>Task mining shows that in long cases, operators do not rely only on the business website. They jump between many sources. Aggregators. Filings. News. Prior data. And sometimes pure intuition built over experience.</p><p>This is not randomness. It is accumulated judgment.</p><p>Why do they do this?</p><h4>Stage 4: Gemba Walk (or Watch)</h4><p>Only when you sit with operators, or better, do the task yourself, do the real insights emerge.</p><p>Questions start surfacing.</p><p>What if a business claims to do multiple things?</p><p>What if the business has almost no online presence?</p><p>When different sources list different services, which one do you trust?</p><p>What quirks in the classification system require compensating logic?</p><p>Should success really be measured by exact alignment?</p><p><strong>That last question flips the entire evaluation strategy.</strong></p><p>Many Gen AI projects fail not because the model or prompts are inadequate, but because the evaluation criteria are disconnected from operational reality.</p><h3>Full-stack process intelligence shapes AI success</h3><p>The real work is not linear. It is iterative. You slice through each layer of process intelligence, and you adjust your AI evaluation framework accordingly.</p><p>You start asking better questions. Process intelligence helps you with those questions.</p><p>This is the boring superpower. The unglamorous part that does not make headlines. Yet it determines whether an AI initiative succeeds or stalls.</p><h3>Final thoughts</h3><p>In Gen AI implementation, the model is rarely the problem.</p><p>The real unlock comes from the overlooked layers of process intelligence, especially the human reality you uncover only through Gemba.</p><p>Want the next lift in your operational AI program? Plan a Gemba walk, or better, try becoming the operator.</p><h3>Pro tip</h3><p>We were fortunate to accelerate this learning with a seasoned veteran like Steve guiding the process. Someone who has seen enough operations to know where the real constraints and real signals lie. 
Try finding your Steve.</p>]]></content:encoded></item><item><title><![CDATA[No AI Without PI]]></title><description><![CDATA[But What If Your PI Isn&#8217;t Perfect?]]></description><link>https://www.ramram.ai/p/no-ai-without-pi</link><guid isPermaLink="false">https://www.ramram.ai/p/no-ai-without-pi</guid><dc:creator><![CDATA[Ram]]></dc:creator><pubDate>Fri, 28 Nov 2025 05:00:44 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!74zJ!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff917c3e1-5b8d-4789-b5b7-ea0c71be1b52_512x512.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>It&#8217;s a well-accepted premise that there is no AI without PI, no artificial intelligence without process intelligence. But what do we really mean by PI? A pristine and connected process intelligence graph? You are lucky if you have one. </p><p>For most of us, work happens through fragmented processes across legacy systems. In such setups, even the best process mining tools hit a ceiling. So what&#8217;s the way out? One answer is task mining. When combined with AI, task mining helps you recover lost process intelligence, piece by piece. </p><p><strong>The Problem: Invisible Work Between the Systems</strong></p><p>Not all processes move at the same speed. Take SME lending as an example. Collections and legal recoveries stretch for months, but credit analysis happens in minutes. Hundreds of checks and judgments get compressed into a short burst. </p><p>Much of this work lives outside transactional systems, in spreadsheets, browsers, emails, documents, and ad-hoc shortcuts. It doesn&#8217;t follow a strict sequence either. Seasoned analysts optimize for efficiency, similar to how an experienced trauma physician moves faster than a junior one. </p><p><strong>The result is clear</strong></p><p>Process mining sees only part of the truth.</p><p>Time and motion studies cannot scale. </p><p>Transformation priorities often get decided by anecdotes instead of data. </p><p><strong>The Bridge: Task Mining Meets AI</strong> </p><p>Task mining captures user-level logs such as applications opened, sites visited, and shortcuts used, while protecting sensitive data through anonymization. It is like putting a microscope on digital work. </p><p>But scaling this comes with noise. Chat windows, re-checks, and partial retries pollute the data. Without filtering that noise, the insights remain shallow. </p><p>This is where large language models come in. By combining pattern-recognition algorithms with language-model reasoning, we can identify and exclude irrelevant activity, segment meaningful sequences, and assign precise start and end times to each work block. </p>
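<p>A minimal sketch of the segmentation step: the LLM&#8217;s noise judgment is simplified here to a static list, and the five-minute idle threshold is a hypothetical default:</p><pre><code>from datetime import timedelta

def work_blocks(events, idle_gap=timedelta(minutes=5),
                noise_apps=frozenset({"chat", "news"})):
    """events: time-ordered (timestamp, app, action) tuples from task-mining logs.
    noise_apps are illustrative placeholders for activity your filter excludes.
    Drops noisy apps, then cuts a new block at every long idle gap."""
    blocks, current = [], []
    for ts, app, action in events:
        if app in noise_apps:                    # exclude irrelevant activity
            continue
        if current and ts - current[-1][0] > idle_gap:
            blocks.append(current)               # close the block at a long gap
            current = []
        current.append((ts, app, action))
    if current:
        blocks.append(current)
    return blocks  # each block's start and end come from its first and last events
</code></pre>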
<p>Feed that cleaned data into process mining, and suddenly you have visibility into fragmented human workflows, the missing layer between systems. </p><p><strong>The Payoff: From ROI to RO-AI</strong></p><p>Even if your core systems are not perfect, task mining gives you a way to make data-driven transformation decisions. You will know which activities consume time, where variability hides, and what training or policy refinements move the needle. It is not perfect, but it gives you a good return on investment (ROI) and eventually, the multifold return on AI (RO-AI). </p><p>Sure, there is no AI without PI. But with task mining and LLMs, AI can also create better PI, completing the circle of operational life.</p>]]></content:encoded></item><item><title><![CDATA[Variation Reduction and Digital Transformation – The Perfect Pair]]></title><description><![CDATA[Let me start by noting that this article was entirely written by a human, with AI involved only in grammar and sentence structure correction.]]></description><link>https://www.ramram.ai/p/variation-reduction-and-digital-transformation</link><guid isPermaLink="false">https://www.ramram.ai/p/variation-reduction-and-digital-transformation</guid><dc:creator><![CDATA[Ram]]></dc:creator><pubDate>Mon, 24 Feb 2025 08:01:48 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!74zJ!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff917c3e1-5b8d-4789-b5b7-ea0c71be1b52_512x512.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Let me start by noting that this article was entirely written by a human, with AI involved only in grammar and sentence structure correction. A couple of years ago, I never imagined I would need to include such a disclaimer at the start of an article. Technology keeps us in a constant state of flux.</p><p>Just like individuals, organizations face a similar conundrum. There is a subtle fear of missing out and a constant urge to adopt the latest technology that promises to boost productivity. This is largely positive since technology often brings the biggest efficiency gains. From ERPs and CRMs to automation, Generative AI, Agentic AI, and now the latest &#8216;Operator&#8217;&#8212;each of these technologies has successfully reduced grunt work in processes, allowing operators to focus on tasks they truly enjoy or, more broadly, those that machines cannot replicate.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.ramram.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Process &#8594; Insights&#8594; Action! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p><strong>The boundaries of automation continue to expand. As a result, digital transformation has become an ongoing journey&#8212;organizations exist in a state of perpetual beta. But what can supercharge these efforts? A deep understanding of process variation.</strong> Let&#8217;s explore why addressing variation is critical to digital transformation.</p><h4><strong>Well begun is half done. The first step is to understand and attribute process variation.</strong></h4><p>Attributing variation in a process to the right cause helps us determine the most effective solution. Let&#8217;s examine three common sources of variation&#8212;<strong>policy, process sequence, and people</strong>&#8212;to understand their impact and how they can be addressed during transformation.</p><p><strong>1. 
Policy-Induced Variation</strong></p><p><strong>Organizations must constantly assess policies to determine whether they have outlived their purpose or are too complex to implement relative to their benefits.</strong> Some policy variations may have been introduced to address legitimate needs but, when left unchecked, can lead to inefficiencies&#8212;akin to the proverbial case of <a href="https://spiritscience.medium.com/the-parable-of-the-ritual-cat-28516f53a827">&#8220;tying a cat to a tree.&#8221;</a></p><p>Fortunately, policy variations are the easiest to address&#8212;<strong>by rationalizing, simplifying, or, in many cases, eliminating them</strong>. Through objective quantification, their impact can be measured, leading to informed decisions. In my experience, removing redundant policies before transformation delivers the highest returns with minimal effort.</p><p><strong>2. Process Sequence Variations</strong></p><p>Even when policies remain unchanged, the same set of activities might occur in different sequences. While this may seem counterintuitive, it is <strong>common in complex or organically evolved processes</strong>.</p><p>Studying the major variants in process sequences can reveal why these variations occur. They may arise due to practical execution preferences, tacit tribal knowledge within teams, or simply a lack of standardization.</p><p>To address this, organizations should apply simplification and standardization&#8212;the same approach used for policy variations. Additionally, <strong>augmented intelligence solutions (with humans in the loop)</strong> can help determine the optimal sequence based on real-time inputs. An intelligent assistant can further optimize execution by dynamically adapting to specific case requirements.</p><p><strong>3. People Variation</strong></p><p>After accounting for policy and process sequence variations, what remains is human variation. This stems from differences in knowledge and skill levels between operators <strong>(reproducibility)</strong> and variability in the same person&#8217;s performance over time <strong>(repeatability)</strong>.</p><p>While traditional solutions involve training and incentives, augmented intelligence can significantly mitigate these variations. <strong>Conducting a Gemba study (observing frontline operators) can uncover best practices and key performance drivers.</strong> <strong>Incorporating these insights into product design can enhance team-wide performance, ensuring that processes continually evolve to meet operational needs.</strong></p><h4>Optimizing Transformation Spend with Variation Analysis</h4><p>Understanding process variations allows organizations to optimize transformation costs in two key ways:</p><p><strong>Lean Spending:</strong> Eliminating or rationalizing unnecessary variations before transformation reduces complexity and ensures a more focused investment.</p><p><strong>High-Impact Prioritization:</strong> Not all transformation efforts yield the same ROI. By identifying and prioritizing the process areas with the highest potential, organizations can allocate resources more effectively.</p><h4>Ensuring Long-Term Process Control</h4><p>Beyond cost optimization, a deep understanding of process variations grants greater control over transformed processes. Organizations need to establish clear expectation norms and use available technology to detect deviations. 
Continuous monitoring enables proactive identification of new sources of variation, ensuring consistent and efficient outcomes.</p><h4>Final Thoughts</h4><p>Addressing process variations is not just a complementary effort&#8212;it is essential to successful digital transformation. With organizations operating in perpetual beta mode, having a hawk-like focus on process variation can deliver significant dividends by driving efficiency, consistency, and adaptability.</p><p><em>Do you agree? What has your experience been? What best practices do you follow to integrate variation reduction into digital transformation efforts?</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.ramram.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Process &#8594; Insights&#8594; Action! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Evolving role of a process engineer]]></title><description><![CDATA[The proverbial jack of all trades]]></description><link>https://www.ramram.ai/p/evolving-role-of-a-process-engineer</link><guid isPermaLink="false">https://www.ramram.ai/p/evolving-role-of-a-process-engineer</guid><dc:creator><![CDATA[Ram]]></dc:creator><pubDate>Mon, 09 Sep 2024 07:16:30 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!74zJ!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff917c3e1-5b8d-4789-b5b7-ea0c71be1b52_512x512.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>The Book of Knowledge</strong></p><p>A few days ago, my manager, an operations veteran, asked me to suggest a book that would help him get an overview of process engineering, especially in the services industry. I instinctively turned to the quick reference section of my bookshelf to suggest some titles. I had books on topics spanning data science, operations research, problem-solving, lean thinking, and the formidable Lean and Six Sigma Body of Knowledge (BOK). However, I couldn&#8217;t suggest a single title right away. What would give an overview of process engineering? This hesitation made me reflect: Is process engineering just the sum total of all this knowledge? Or worse, with this new fear now unlocked: have all these been made irrelevant in the age of large language models (LLMs) trained on process mining libraries?</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.ramram.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Process &#8594; Insights&#8594; Action! 
<p><strong>The Overlapping Skill Groups</strong></p><p>I&#8217;ll try to answer the question based on my experiences. My unconventional career path across disciplines has enabled me to collect a diverse set of skills. Over time, I&#8217;ve come to realize how these skills complement each other and help me excel in my role as a process engineer.</p><p>Let&#8217;s begin by exploring the broad skill groups essential for a process engineer in the services industry. These include interpersonal skills, structured problem-solving, data science, technical/IT skills, and management. While these skill groups often overlap, they are mentioned separately to underscore the importance of gaining deep experience in each area. This approach offers multiple benefits: not only does expertise help solve problems more efficiently, but it also enables the identification of opportunities for improvement before customers recognize a need.</p><p>As Henry Ford famously remarked, &#8220;If I had asked people what they wanted, they would have said faster horses.&#8221; Similarly, process owners or sponsors might not know what they truly need until it is presented to them. By developing expertise in these areas, process engineers&#8212;who closely observe how work gets done&#8212;are well-positioned to contribute significantly to the transformation agenda, making connections that drive meaningful change.</p><p><strong>Interpersonal Skills: The Foundation of Collaboration</strong></p><p>Think of processes as an invisible thread that connects and holds an organization together. This analogy highlights not only the all-encompassing nature of processes but also the fact that they often operate behind the scenes, out of direct sight. Processes are essentially a sequence of steps defined by business owners, executed by operations teams, and supported by product and IT systems developed and maintained by their respective teams. For process engineering teams to implement and sustain successful changes, they must collaborate closely with all these stakeholders. This requires building trust and mastering key interpersonal skills such as effective communication, empathy, negotiation, and conflict resolution. As one becomes more senior, these skills grow in significance and can determine the effectiveness of the entire process engineering program.</p><p><strong>Structured Problem-Solving: The Playbook for Success</strong></p><p>Structured problem-solving is the backbone of process engineering. It involves breaking down complex issues into manageable parts, identifying root causes, and systematically developing solutions. Tools and frameworks like PDCA, Six Sigma, Lean thinking, and the Theory of Constraints are essential playbooks that help a process engineer navigate problem-solving with a sequence of what to do, when, and, importantly, what not to do. 
I believe we are past the days of DMAIC milestone reviews, but the essence of problem-solving frameworks is more relevant and valuable than ever.</p><p><strong>Data Science: Beyond segmentation analyses</strong></p><p>Advanced statistical data analyses and simulations have always been part of the process engineering toolkit. I remember that proficiency in Minitab was a prerequisite for passing the black belt certification exams. But stopping there will not help us leverage the enormous potential that data-science techniques offer. There are machine learning methods that can be used to do segmentation and correlation analyses at a speed and scale that would be impossible for humans to replicate. Think of it as an intern who can augment your decision-making by doing a lot of pre-processing and mundane chores. I have also worked on some projects where techniques like clustering and market basket analysis yielded insightful findings during the &#8216;analyse phase&#8217; of the project. More recently, at the intersection of data science and process science, &#8216;process mining&#8217; has been superpowering every stage of the process improvement or transformation workflow.</p><p><strong>Technical/IT Skills: Bridging the Gap Between Problems and Solutions</strong></p><p>Technical and IT skills are becoming increasingly crucial for process engineers, especially in service industries where digital transformation is a key efficiency driver. Understanding the various options available &#8211; multiple types of automation, web/mobile applications, low-code software, etc. &#8211; will help you choose the best tool for the job. Considering the nature of the job and the constraints one is operating within, this knowledge will help you decide the proper intervention for each need. In addition, having a basic understanding of waterfall and agile software development approaches will help you effectively partner with IT teams in ensuring that apt solutions get designed and implemented.</p><p><strong>Management Skills: Keeping Projects on Track</strong></p><p>Management skills are essential for process engineers, especially when leading complex projects that involve multiple teams and stakeholders. Project management, resource allocation, risk management, and leadership are all critical components of this skill set.</p><p>Effective management ensures that projects are completed on time, within budget, and with minimal disruption to operations. It also involves clear and consistent communication with all stakeholders, keeping everyone informed of progress and any changes that might occur. By managing risks proactively, process engineers can avoid pitfalls and ensure that projects deliver the expected benefits.</p><p><strong>The Synergies Among the Skills</strong></p><p>The synergies among these skills are what make process engineers exceptional problem solvers. At their core, process engineers are tasked with tackling inefficiencies, addressing bottlenecks, and finding solutions to enhance service delivery. Structured problem-solving skills provide a systematic approach and a playbook for what to do and when to do it. Data science skills enable them to identify the root cause of issues quickly. In the solution stage, technical/IT skills allow them to collaborate effectively with the right teams and choose the best technology to implement solutions. Management skills ensure that projects stay on track, risks are managed, and communication is clear and consistent. 
Finally, given the interdepartmental nature of problem-solving, interpersonal skills are foundational&#8212;they multiply the effectiveness of all other skills, ensuring that collaboration leads to successful outcomes.</p><p>Reflecting on my manager&#8217;s request, I realized that no single book could encompass the full breadth of what process engineering entails, especially in the service industry. The role requires a deep understanding of various disciplines and, more importantly, the ability to integrate these disciplines into a cohesive approach that drives continuous improvement. More than any methodology or tool, it is about developing a mindset that embraces change, seeks out inefficiencies, and continually looks for ways to make things better.</p><p>Do you agree with this list? What other skill groups do you think are essential for a process engineer? I look forward to seeing them in the comments!</p>]]></content:encoded></item><item><title><![CDATA[Bundling and unbundling]]></title><description><![CDATA[A framework to understand change]]></description><link>https://www.ramram.ai/p/bundling-and-unbundling</link><guid isPermaLink="false">https://www.ramram.ai/p/bundling-and-unbundling</guid><dc:creator><![CDATA[Ram]]></dc:creator><pubDate>Sun, 26 May 2024 06:14:38 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/24afe9f8-286e-4ea1-8338-43300ca8422c_1100x583.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>&#8220;There's only two ways to make money in business: One is to bundle; the other is unbundle&#8221; &#8212; Jim Barksdale (often quoted by Marc Andreessen)</strong><br><br>Bundle-unbundle is one of the simple but powerful frameworks for understanding change and adaptation over time. Take newspapers, for example. What started off as reporting major events transformed into a medium for consumption targeting different audiences (news, business, weather, children's section, gossip, etc.). After a while, each of these sections unbundled into exclusive editions targeting a sub-section of the audience. And this pattern keeps repeating. Not just in newspapers, but across the board. <br><br>How does it apply to the process space? One of the earlier approaches to process improvement was the <strong>plan-do-check-act cycle (PDCA)</strong>. Next came the more specialized approaches in the form of <strong>Total Quality Management (TQM)</strong>, which advocated a holistic approach, <strong>Six Sigma</strong>, which focused on variation reduction, and <strong>Lean</strong>, which targeted waste reduction. <br><br><strong>It was a classic bundling of pre-existing concepts, techniques and tools into a unified methodology</strong>. 
One had to learn statistical techniques, quality management principles, project management practices, simulation methods, lean principles, change management techniques and so on. The conditions then became ripe for unbundling.<br><br>The way I see it, unbundling is happening in two ways. <strong>Some of the tools or techniques evolve into a better version of the original ones</strong>. Take the case of applying statistical techniques to business problems. This discipline has now evolved into data science. Another quintessential example of this evolution is process discovery. Simple SIPOC (Supplier Input Process Output Customer) diagrams gave way to more sophisticated BPM (business process management). Now a mix of BPM and data science has evolved into what is known as &#8216;Process Mining&#8217;. Better ways to achieve the same objective.<br><br><strong>Another way unbundling is happening is simply using a tool, or a combination of them, in a context outside of process improvement.</strong> My favorite example of this is the Criteria-Based Matrix, which I have used in numerous instances. Whenever a choice has to be made among closely competing options, this tool comes to my rescue. I have also found the combination of the &#8216;Voice of Customer&#8217; and the &#8216;Critical to Quality&#8217; tree useful in metric definitions. Likewise, the famous combination of Fishbone diagrams and 5-whys can help you do better causal analysis. Many such innovative combinations can be applied to business problems well outside process improvement.<br><br>Do you agree that this bundle-unbundle framework explains the evolution in the process space? Where, in the process space, do you think the next innovative split or join is going to happen?</p><p>Image credits: medium.com/hackernoon</p>]]></content:encoded></item><item><title><![CDATA[The digital detective - Unravel the mysteries of your process ]]></title><description><![CDATA[A non technical introduction to process mining]]></description><link>https://www.ramram.ai/p/the-digital-detective-unravel-the</link><guid isPermaLink="false">https://www.ramram.ai/p/the-digital-detective-unravel-the</guid><dc:creator><![CDATA[Ram]]></dc:creator><pubDate>Sat, 27 Apr 2024 08:46:57 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!74zJ!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff917c3e1-5b8d-4789-b5b7-ea0c71be1b52_512x512.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this article, I will attempt to give a jargon-free primer on process mining and its importance. We will begin by understanding what process mining is and how task mining complements it. 
Next, we will try to understand why this becomes so important. Finally, we will conclude the article with an overview of the broad process mining use-cases.</p><p>Imagine a detective trying to reconstruct the sequence of a homicide using the clues that she has. She speaks to a bunch of witnesses and understands the major events that unfolded on the day of the crime. Next, she probes further into the closed-circuit camera feeds, call data records, access logs and so on. Now her understanding of the day gets more granular. As the next step, if she can overlay the mobile phone's location data, she can get even more precise. All of these enhance her picture and provide a comprehensive understanding of the events of that day. Excuse me if the analogy sounds morbid, but this is exactly what process mining attempts to do with event logs.</p><p>When you use your IT systems to process something, <strong>you keep creating traces of actions</strong>. It can be something like moving an application to the next stage or approving a request - anything that changes the state of the entity being processed. <strong>These traces carry the basic information of who did what and when it was done. They are called event logs.</strong> In the case of a purchase order, you will have information on when it was created, approved, fulfilled and dispatched. Process mining software reconstructs the process flow from these event logs. <strong>This initial step is also known as process discovery, where all the event logs are synthesized into one single visualization with the chronological sequence of events from start to finish. The resulting model is also known as the 'digital twin'.</strong></p><p>Does the 'digital twin' help you answer process questions about speed, efficiency, cycle times and the like? If it does, then good enough. But there are times when you feel that event logs are not granular enough. <strong>You might realize you need more visibility between two points in an event log because the gap between them is quite long and a lot happens in between. This is when 'task mining' comes to your rescue.</strong> This software collects the actions happening on the user's desktop. You will get insights on,</p><p>* How much time is spent on each application?</p><p>* How often does the user toggle between the spreadsheet and the email client?</p><p>* How many items are copy-pasted?</p><p>There is a wealth of information that gets captured, in an anonymized fashion, on how work gets done. 
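</p><p>To make the idea of an event log concrete, here is a minimal sketch in Python. The case IDs, activities and timestamps are invented for illustration; the point is the who-did-what-and-when structure described above, and how cycle times - and the activity sequences we will shortly call 'variants' - fall out of it almost for free:</p><pre><code>import pandas as pd

# A minimal event log: one row per action - who did what, and when
# (all values are invented for this sketch)
log = pd.DataFrame({
    "case_id": ["PO-1", "PO-1", "PO-1",
                "PO-2", "PO-2", "PO-2", "PO-2"],
    "activity": ["create", "approve", "dispatch",
                 "create", "approve", "rework", "dispatch"],
    "user": ["amy", "bob", "cara", "amy", "bob", "bob", "cara"],
    "timestamp": pd.to_datetime([
        "2026-01-02 09:00", "2026-01-02 11:30", "2026-01-03 10:00",
        "2026-01-02 09:15", "2026-01-03 14:00", "2026-01-04 09:00",
        "2026-01-05 16:45",
    ]),
})

# Cycle time per case: last event minus first event
span = log.groupby("case_id")["timestamp"].agg(["min", "max"])
span["cycle_time"] = span["max"] - span["min"]
print(span["cycle_time"])

# The ordered activity sequence per case is its 'variant'
variants = (log.sort_values("timestamp")
               .groupby("case_id")["activity"]
               .agg(" -> ".join))
print(variants.value_counts())</code></pre><p>Real systems deal with millions of such rows, but the structure stays exactly this simple. 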
<strong>Now these task mining insights can be consumed independently of process mining, or they can be converted into an event log and ingested into the digital twin that has already been constructed, to get a more granular view of the process.</strong></p><p>To appreciate the importance of this technology, you should contrast it with the way process discovery happened in the not-so-distant past. Big conference rooms with wide walls were booked. The wall was segmented vertically into different sections, one for each department. Each task was assigned a particular color and each sub-task had one post-it note. Understandably, with multiple stakeholders, it gets noisy, chaotic and, at times, contentious. <strong>Initially, all the stakeholders have an 'assumption' of what the process is. During the interaction, they converge on a 'perceived' process. This might be a politically acceptable version of the process and still differ from the actual 'as-is' process.</strong> Process mining technology bypasses all these gaps and gives you an objective view of the process based on the data. That is a big cost and time saving compared to the way process discovery and transformation used to be done. It also opens up a lot of other possibilities.</p><p><strong>First, it supercharges business intelligence</strong>, which helps us get actionable insights on the past, present and future. On the past, it helps us answer basic questions like,</p><p>* How long did something take?</p><p>* What was our processing capacity?</p><p>* What was the utilization of our resources?</p><p>* What was the extent of rework?</p><p>* Where are the bottlenecks in the process?</p><p>* Did we meet the agreed delivery timelines?</p><p>and so on.</p><p>It also helps in answering more nuanced questions. For example, in an expense approval process, you might want to understand how many cases took the standard route and how many had to get an exception approval for various reasons. Each of them is a variant of the process. <strong>Variants are different routes to the same destination.</strong> Not all variants are inefficient. But some of them definitely are. Process mining software helps you understand the different variants and compare the process measures across them. This helps you focus on the ones that need attention and standardize the process wherever possible.</p><p>Now let's discuss the present. Once you have this digital twin constructed and the various measures defined, you can use them to monitor your present. Once you plug real-time data into this system, at a very basic level, it can alert you if something is getting stuck in the process. It can get more sophisticated than that. When a particular process sequence or state is encountered, the system can even run some automations or take preset actions. It doesn't stop there. Plug this into a sophisticated AI system, and it can mimic some of the human decision-making as well. <strong>This suite of capabilities to manage operational processes based on intelligence synthesized from the event logs is called an 'execution management system'.</strong></p><p><strong>The next logical step is to extrapolate the past, use the present data to predict what could happen, and then prescribe certain actions based on the prediction.</strong> For example, based on the current work volume in progress and historical cycle times, can you predict the wait time for the new customer order that just landed? 
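</p><p>One crude but honest way to attempt an answer - sketched below in Python with invented numbers, as an illustration rather than a production forecaster - is to combine Little's Law (average time in system = work in progress divided by throughput) with a percentile of historical cycle times:</p><pre><code>import pandas as pd

# Invented figures - a real system would read these from the event log
hist_cycle_hours = pd.Series([20, 24, 30, 26, 48, 22, 35, 28, 31, 40])

wip = 45           # orders currently in progress
throughput = 1.5   # completed orders per hour, historical average

# Little's Law: W = L / lambda
avg_wait = wip / throughput

# A more conservative promise: the 90th percentile of history
p90_wait = hist_cycle_hours.quantile(0.90)

print(f"Little's Law estimate: {avg_wait:.1f} hours")
print(f"90th percentile of history: {p90_wait:.1f} hours")</code></pre><p>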
Once you're able to answer that question, you can configure the system to send a communication, reprioritize resources, or take any form of prescriptive action based on the prediction that was made. The possibilities are endless.</p><p>Now that you have a crystal ball that answers the past, present and future, what prevents you from answering 'what if?' questions? Using <strong>simulation capabilities</strong>, process mining software helps you simulate scenarios like,</p><p>* What if a step was eliminated or automated?</p><p>* What if you work on reducing the time taken for a particular step?</p><p>* What if you deploy more resources for an activity?</p><p>* What if you change some policies?</p><p>Various such scenarios can be tested out, and you can study how key process metrics change for each of them. This will help you make educated guesses about the process design and the corresponding tradeoffs between various measures.</p><p>By now, you might have realized how each of these capabilities is going to help in different phases of process management. Think of the impact it would have in <strong>digital transformation</strong> - where you first study the way work is getting done, then evaluate the kind of changes you're going to make, simulate multiple versions of them and compare the results. Finally, make the changes and ensure that the processes are getting executed the way you wanted them to. Process mining software can add a lot of value in all these stages.</p><p>Another adjacent use case is in evaluating whether things have happened according to the rules that have been laid out. Traditionally, during the process audits that happen on a periodic basis, random samples are selected. They are then evaluated to check whether the necessary process steps were followed in executing each case. <strong>Now with this capability, there is no need for sampled or periodic audits. All the entities are evaluated in real time and any exception or deviation is flagged instantaneously.</strong> This will free up the auditing department to move up the audit value chain.</p><p>I hope this article helped you understand the essence of process discovery algorithms and the big edifice that has been constructed on top of them. There are many vendors out there and they are approaching the market differently. You might want to read this <a href="https://open.substack.com/pub/ramsthere/p/its-quarter-century-gold?r=3ice0&amp;utm_campaign=post&amp;utm_medium=web">article</a> where I explore the evolution of process mining software over the last two decades. Rest assured, irrespective of the domain, this technology is going to see widespread adoption across the board.</p><p>What are your thoughts?</p>
]]></content:encoded></item><item><title><![CDATA[A match made in heaven!]]></title><description><![CDATA[Process Mining in the era of GenAI]]></description><link>https://www.ramram.ai/p/a-match-made-in-heaven</link><guid isPermaLink="false">https://www.ramram.ai/p/a-match-made-in-heaven</guid><dc:creator><![CDATA[Ram]]></dc:creator><pubDate>Thu, 18 Apr 2024 22:34:11 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!74zJ!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff917c3e1-5b8d-4789-b5b7-ea0c71be1b52_512x512.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>What would the confluence of two rapidly emerging technologies look like? How would it have played out in California in the 1850s, when the gold mining ecosystem met railroad technology? That is what we are set to explore in the next few paragraphs, where we will study the intersection of process mining, which is growing at a 50% CAGR, and generative AI, for which I will refrain from even making guesses on the growth numbers. There are a lot of questions and a broad spectrum of views - ranging from 'they are mutually exclusive' to 'Gen AI will subsume process mining'. My view is that they complement each other. In fact, Gen AI will be a force multiplier for what process mining aspires to do.</p><p>Gen AI forms another stack or layer between the core process mining algorithms and the end user. This layer empowers the user with many possibilities that would have otherwise been complex or cumbersome. Across my reading and experimentation, I listed several of them and was looking for a way to group and simplify the myriad use-cases. <strong>I came across McKinsey's article on the impact of GenAI on operations, which grouped the capabilities into four categories viz., concision, creative content, customer engagement and coding</strong>. In the next few paragraphs, I will start with an explanation of what each category is and the process mining use-cases that it could unlock.</p><h3><strong>Concision</strong></h3><p><strong>Concision</strong> refers to the capability of Gen AI to interpret large corpora of unstructured data to identify and summarize relevant answers in the service and analysis contexts. 
In other words, it helps you get to the crux of what's relevant and important.</p><p>The first concision use-case is about <strong>assisting the process analyst in generating knowledge models</strong>. Process knowledge is supposedly contained in the SOPs and the manuals. But in reality, many more nuances are present as tacit knowledge with the people close to the process. That is why, during process discovery, Gemba visits are super helpful. It's just a fancy Japanese term for being in "the actual place" and watching the operators and the process in action. This is helpful because it gives you uncensored insight into the workings of the process. In the services context, it could mean observing the operators and having detailed discussions and interviews. When these interviews are fed to Gen AI systems, the BPMN or DMN model can be autogenerated. Using these generated models, we could execute multiple process mining use cases like quality control, process conformance, auditing and so on.</p><p>The next use case is about <strong>surfacing the insights that need attention</strong>. This could translate to many possibilities like,</p><ul><li><p>Sentiment analysis could be performed on in-flight cases and, in conjunction with other process parameters like wait/cycle time, reworks and number of touches, corrective actions could be initiated if needed</p></li><li><p>One could build a <a href="https://www.oracle.com/in/artificial-intelligence/generative-ai/retrieval-augmented-generation-rag/">RAG</a> (retrieval-augmented generation) system and automatically surface insights of importance based on the context. Different from a querying or prompting approach, this is a complete push model for filtering and consuming relevant insights</p></li><li><p>The RAG could be extended to give prescriptive insights based on what could unfold and how it could impact the process, given its context. An example of this would be to alert when there could be a potential SLA miss for a customer with a penalty clause attached in the contract</p></li></ul><p>The final use case in concision is about using GenAI to <strong>do root cause analysis</strong>. It essentially involves studying the impact of multiple factors on the KPIs of interest. While many vendors have implemented this feature using data science algorithms, Gen AI could bring in external data that is relevant to the context of the process and augment the list of factors.</p><h3><strong>Creative content</strong></h3><p>Next comes the capability of generating 'Creative Content'. It implies rapid tailoring of complex and structured documents to specific needs and contexts. Simply put, it's about creating, or helping co-create, solutions relevant to the context.</p><p>To start with, a very straightforward use case is the <strong>documentation of processes</strong>, SOPs, etc., from technical specifications like <a href="https://www.processmaker.com/blog/the-separation-of-decision-modeling-from-bpmn/">BPMN, DMN</a> and control charts.</p><p>The next use case in 'creative content' is about <strong>Gen AI partnering with the process designer in devising a solution</strong>. In one of his interviews, Marlon Dumas, a co-founder of Apromore, mentions that in the process improvement life cycle there is a "big chasm" between identifying the bottleneck or root cause and devising a solution, and he feels Gen-AI could play a vital role there. 
He also mentions that in the pilot implementation, the Six Sigma practitioners found that Gen AI suggested solutions that they would otherwise not have thought of.</p><p>Many a time, in an improvement or transformation journey, it is important to incorporate domain expertise to meet compliance norms or to prevent reinventing the wheel. While an experienced process engineer in the domain could bring in those design patterns, a Gen AI system could analyze the processes and suggest alternatives. In process design, it could also help in the risk analysis of the future state.</p><h3><strong>Customer engagement</strong></h3><p>This is the most popular Gen AI use case, and it refers to the capability of building copilots that can guide customers through their personalized journeys in the realm of customer engagement. It is more popularly known as the democratization of the process mining journey.</p><p><strong>One straightforward use case is in using a copilot for querying</strong> (a minimal sketch of this pattern appears towards the end of this article). Instead of writing complex SQLs/PQLs, the user can prompt their queries, which the Gen AI can convert to query language and return the results. This can be extended to creating complete dashboards with prompts. Many vendors have already implemented this feature in different forms.</p><p>The next use case is about <strong>making simulations more conversational</strong>. Once the bottlenecks are identified and addressed through solutions, the next logical step is to have a view of the future state. It could be having a view on new throughputs, cycle times, SLAs, resourcing, etc. Simulating multiple scenarios using the digital twin helps the process designer answer those questions. Given that 'what-if' analyses are inherently conversational, having a co-pilot do that would be a great enabler.</p><p>Another interesting use case in this category is <strong>process storytelling</strong> - contextualizing what each user needs to know from the process. Depending on the role of the user, the Gen AI could highlight aspects that they might be interested in. For example, a senior business user might be interested in the broad KPIs and their trends. A department head might want to understand the KPIs, trends and causation factors in depth for her department. A process engineer might need to know about inefficiencies like long cycle times of a segment, loops and reworks in the process. Likewise, someone looking for tech, automation and AI opportunities to refine the process will look for different things. Currently these requirements are partly catered to by building different dashboards or views, but Gen AI can definitely do a much better job of contextualizing the content. Gen AI, along with traditional AI, can be a powerful navigator.</p><h3><strong>Coding and software</strong></h3><p>The final capability of Gen AI is in 'coding and software' - the capability that enables the user to co-create new software and migrate legacy systems at scale and speed. There were not many process mining use cases in this segment - but I did come across use-cases where entire user stories and software could be generated from the BPMNs and DMNs. Well, why not? With AGI not far away, I think we should aspire to reach <strong>'process autonomization'</strong> - systems that automatically detect and act on compliance and performance gaps. And 'self-heal' too!</p>
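<p>Before closing, here is a minimal sketch of the copilot-for-querying pattern mentioned earlier. The 'ask_llm' helper is hypothetical - a stand-in for whichever model API you actually use - and the schema and guardrail are assumptions of this example, not any vendor's implementation:</p><pre><code># Sketch of a text-to-query copilot. `ask_llm` is a hypothetical
# placeholder for whatever LLM API you actually call.

SCHEMA = "events(case_id TEXT, activity TEXT, user_name TEXT, ts TIMESTAMP)"

def ask_llm(prompt: str) -> str:
    # Call your model of choice here and return its text reply.
    raise NotImplementedError

def question_to_sql(question: str) -> str:
    prompt = (
        "You translate questions about a process event log into one SQL query.\n"
        f"Schema: {SCHEMA}\n"
        "Return only the SQL, nothing else.\n"
        f"Question: {question}"
    )
    sql = ask_llm(prompt).strip().rstrip(";")
    # Guardrail: the copilot may read, never write
    if not sql.lower().startswith("select"):
        raise ValueError("Refusing to run a non-SELECT statement.")
    return sql

# Usage: question_to_sql("What is the average cycle time per month?")</code></pre><p>The same prompt-plus-guardrail pattern extends naturally to generating entire dashboards, as several vendors already do in their own ways.</p>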
<p>You might have noticed that this is not an exhaustive or a mutually exclusive set of use-cases. There are overlaps between them, and I was more interested in laying out the different broad possibilities that could unfold. Undoubtedly, like in any other discipline, Gen AI will have a tremendous impact on process management. The technologies, both rapidly evolving, will complement each other and transform the way the processes of tomorrow are designed, improved and maintained.</p><p>Which ones do you find most impactful to your industry?</p>]]></content:encoded></item><item><title><![CDATA[Its quarter century (g)old]]></title><description><![CDATA[Evolution of process mining]]></description><link>https://www.ramram.ai/p/its-quarter-century-gold</link><guid isPermaLink="false">https://www.ramram.ai/p/its-quarter-century-gold</guid><dc:creator><![CDATA[Ram]]></dc:creator><pubDate>Thu, 18 Apr 2024 22:14:40 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!74zJ!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff917c3e1-5b8d-4789-b5b7-ea0c71be1b52_512x512.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In 2022, ChatGPT made news for scaling to a million users in 5 days. In contrast, the first mention of neural networks dates back to 1943. After 80 years, a technology built on the underlying neural net concept scaled at a lightning pace. That leads us to ask - why do certain technologies scale faster while others don't? It's about the complex interplay between relevance, demand and a supporting ecosystem. And some luck. Yeah, it's about being at the right place at the right time. Going by that standard, process mining, for all its relevance and impact, had a pretty slow start. At this juncture, when process mining is celebrating its silver jubilee, let us delve into the origin of process mining, its evolution, the current state and the potential directions it could take from here.</p><p>It all started with this thought of Prof. Wil van der Aalst - <strong>"What if I could reconstruct the process from its execution logs?"</strong>. This is the idea he introduced in 1999 in his publication "Process Design by Discovery". For almost a decade after that, we could only find it in mentions of IEEE conference proceedings and in the ProM software. In parallel, this was the era when massive digitization was taking place across industries. From the 2010s, commercial software like Celonis, Disco and others gained prominence. It was also the time when I got introduced to this concept through the famous Coursera program - <strong>"Process Mining: Data science in action"</strong>. 
The commercial software vendors were able to integrate themselves into the larger ecosystem of big data analytics and machine learning. Thanks to live integrations with source systems like SAP and Salesforce, these firms were able to provide an execution layer to support process management. The segment started growing by leaps and bounds.</p><p>Fast forward to the present: there are over 30 process mining vendors, each approaching the market in different ways. Some of the broad themes that the vendors are trying to address are,</p><ul><li><p><strong>Discovery and analysis</strong> - This is one of the basic features in all process mining tools. It is about the ability to read process event logs and generate process maps. Using that model, almost all vendors provide capabilities to configure your KPIs and dashboards.</p></li><li><p><strong>Process Improvement</strong> - Some software vendors have tailored their capabilities to support frameworks of systematic process improvement like Six Sigma. That includes capabilities like automatic detection of opportunities, benchmarking, root cause analysis and monitoring of improvements</p></li><li><p><strong>Comparative process mining</strong> - This refers to the capability to compare a process against some standard, in terms of KPIs or execution. It could be useful in auditing for compliance against a standard process or across other process variants.</p></li><li><p><strong>Process Simulation</strong> - Using the digital twin of the process to predict the outcomes of process changes and do what-if analyses</p></li><li><p><strong>Automation enablement</strong> - Process mining and automation complement each other well. While process mining highlights the bottlenecks or inefficiencies, automation solves some of them. So many automation vendors have also acquired process mining capabilities to support the identification of areas for automation.</p></li><li><p><strong>Process Documentation</strong> - Some vendors have provided capabilities for end-to-end process management by including features to document and maintain BPMN models and compare them against the execution</p></li><li><p><strong>Task mining</strong> - This is an important feature that helps unpack the intelligence from the event data available in UI logs. These UI logs, generated from clicks, keystrokes and data entries, are useful in understanding the manual effort involved in performing the tasks</p></li><li><p><strong>Other advanced use-cases</strong> like a machine learning workbench, predictive process monitoring and prescriptive analyses are also provided by some of the vendors.</p></li></ul><p>I have just outlined the major themes; more detailed explanations can be found in research reports from Gartner and Forrester. Complimentary versions of these reports can be downloaded from many of the vendor websites.</p><p>Now we enter the treacherous zone of forecasting what the future evolution might look like. There are three major themes that could play out,</p><ul><li><p><strong>Process mining tools go 'invisible'</strong> - Don't get me wrong, they would continue to exist, but like electricity, they will be an invisible layer powering things. 
They could connect different legacy systems and be the layer that integrates process data, rules of execution, alert/action definitions and so on. Some of the analysis and visualization might be taken over by BI tools, and specialized simulators, process designers and automation systems - all powered by the 'invisible layer' - could evolve.</p></li><li><p><strong>Gen AI unlocks new capabilities in process intelligence</strong> - This is an evolution I am near certain about. Gen AI will become another layer between the user and the core process mining engine. This will help in realizing many Gen AI use cases like contextualization, democratization and content creation. A more detailed article on this will follow.</p></li><li><p><strong>OCPM - the object-centric approach to process mining</strong> is a novel way that overcomes the limitations of traditional process mining. It captures the interactions between systems and objects so that processes can be followed more holistically across the entire business. The adoption curve might be steep, but I believe it will bring a lot of value to many complex processes.</p></li></ul><p>It's a journey that's just getting started. This was meant to give a broad overview and outline without getting bogged down in the details. I will follow this up with detailed articles on some of these topics. Meanwhile, please share any of the broad use-cases or trends that you find most relevant to your industry.</p>]]></content:encoded></item><item><title><![CDATA[Six Sigma is dead, long live Six Sigma]]></title><description><![CDATA[Has Six Sigma lost its relevance?]]></description><link>https://www.ramram.ai/p/six-sigma-is-dead-long-live-six-sigma</link><guid isPermaLink="false">https://www.ramram.ai/p/six-sigma-is-dead-long-live-six-sigma</guid><dc:creator><![CDATA[Ram]]></dc:creator><pubDate>Thu, 18 Apr 2024 22:08:37 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!74zJ!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff917c3e1-5b8d-4789-b5b7-ea0c71be1b52_512x512.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>If I had to rank the list of management fads of this century, I wouldn&#8217;t be surprised if Six Sigma gets voted to the top. What was once seen as the foolproof solution to every business problem has now been relegated to a chapter or two in the operations textbooks. This is not something recent. Starting in the early 2000s, Six Sigma began losing its sheen. The decline has been steady since then. This is not an obituary to something that passed away long back. Far from it. 
In this short read, I will try to explain the fall and reason why the fundamentals of Six Sigma are more relevant than ever. Having practiced these techniques for many years, I can think of a few shortcomings.</p><p>First, <strong>Six Sigma was sold as a panacea</strong>, a cure-all framework for every management problem. Not every problem needs to go through the Six Sigma rigor.</p><p>Second, <strong>the paraphernalia of the &#8216;belt culture&#8217;</strong>. While it was an attempt to bring greater clarity to the overall scheme of things, trying to enforce stringent rules for project or role classification, at times, turned out to be borderline farcical. The belts also gave it an elitist flavor, which further alienated people.</p><p>Finally, a majority of practitioners <strong>did not embrace the technology developments</strong> in process analysis and transformation.</p><p>In recent years, technologies like ML/AI, automation, process mining, cloud computing, blockchains, etc., have all had a transformational impact on the way we do things. This led to multiple transformation initiatives being executed in parallel within the same organization - from the process team, the technology team, the customer experience team and so on. Understandably, this led to chaos and contributed to the irrelevance of Six Sigma to the transformation agenda. That was a lot of nitpicking!</p><p>Before we write it off, let us take a step back. In essence, <strong>what Six Sigma tries to do is to delineate and address the root cause of variation/errors in a well-defined problem using a data-driven approach</strong>. It trains you on the tools and techniques that help you achieve this. Essentially, it makes you focus on what matters the most for the objective that you seek to attain. Everything else is built on top of this premise. Pretty commonsensical, right? Seeing it at this level makes Six Sigma relevant to a lot of use-cases.</p><p>Let us take the case of <strong>digital transformation programs</strong>. In this article [1], McKinsey points out that the lack of a broad and holistic understanding is one of the primary impediments. Some view it as an IT program, some see it as a digital marketing or analytics program. What is often lost is a clear and tangible understanding of the impact that the digital solutions could have on the business processes, and their eventual impact on the ROI of the program.</p><p>Next, let us understand the reasons for failure in <strong>process automation programs</strong>. A Forbes article [2] points out that the primary reasons automation programs fail are the lack of a clear to-be state, a lack of change management and, not surprisingly, automating the wrong processes. An incorrect understanding of the interplay between business processes, people, and the important key process indicators (KPIs) could lead to investing effort in automating things that might not move the needle.</p><p>Finally, a few learnings from the <strong>analytics programs</strong> that I have watched from proximity. Overzealous business intelligence developers, in an attempt to give a comprehensive view of what&#8217;s happening, end up creating very complex and crowded dashboards. This often leads to fatigue and inertia from the user and results in the classical case of the signal getting lost in a noisy dashboard. There have also been instances where jazzy visualizations get used for the sake of it. In all these cases, adopting the Six Sigma approach could have avoided some of these pitfalls. 
</p><p>The big-picture orientation, having clarity on the right set of metrics, ensuring the robustness of your measurement systems, addressing the right set of levers that would bring about the change, and ways to sustain the change are all foundational principles of Six Sigma. <strong>Someone recently encapsulated the essence of Six Sigma as &#8216;scientific common sense&#8217;.</strong> I couldn&#8217;t agree more. Once you start seeing it as a philosophy, an approach, and an expanding toolset to solve process problems, things start becoming relevant. Going forward, there might not be fancy belt titles or DMAIC milestone reviews, but Six Sigma will have its legacy through the commonsensical and methodical approach to problem solving.</p><p>[1] <a href="https://www.mckinsey.com/industries/retail/our-insights/the-how-of-transformation">https://www.mckinsey.com/industries/retail/our-insights/the-how-of-transformation</a></p><p>[2] <a href="https://www.forbes.com/sites/forbestechcouncil/2021/10/12/why-process-automation-initiatives-fail-and-how-yours-can-succeed/?sh=55d539683b62">https://www.forbes.com/sites/forbestechcouncil/2021/10/12/why-process-automation-initiatives-fail-and-how-yours-can-succeed/?sh=55d539683b62</a></p>]]></content:encoded></item><item><title><![CDATA[Transformation ^ 2]]></title><description><![CDATA[Transformation of ( Process Improvement X My career )]]></description><link>https://www.ramram.ai/p/transformation-2</link><guid isPermaLink="false">https://www.ramram.ai/p/transformation-2</guid><dc:creator><![CDATA[Ram]]></dc:creator><pubDate>Thu, 18 Apr 2024 22:00:53 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!74zJ!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff917c3e1-5b8d-4789-b5b7-ea0c71be1b52_512x512.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>Data warehousing - Six Sigma Green Belt - Black Belt - Master Black Belt - Business Analytics and Intelligence - Automation - Process Mining - Generative AI</strong></p><p>If you are wondering about these terms and their sequence, they happen to be the courses that I did over the last 18 years. In the next few articles, I invite you to accompany me as I recount my journey. More than my journey, it is the journey and evolution of process improvement and transformation as a discipline. Along the way, I will take you through some interesting applications of these methods with the objective of transforming processes.</p><p>After my B.Tech in information technology, I joined TCS from campus. I was put through a two-month learning program. Lots were drawn at the end of the program, and I was destined to join data warehousing 
- for a Fortune 5 customer who insisted that IT vendors were also Six Sigma compliant. Shh, we are not supposed to name them! That&#8217;s how I pivoted from data warehousing to Six Sigma.</p><p>Initially, Six Sigma programs in IT firms were primarily about compliance - ensuring that you have a certain number of &#8216;belts&#8217;. In the process of acquiring those belts, people did improvement projects. The Six Sigma teams had to enable them to execute those projects and &#8216;drive&#8217; the compliance and the savings targets. It was okay till it was not - more on this in the next article. After spending some time in corporate offices, consulting, I moved on to the world of hard-dollar benefits. This was a different experience, and I learnt more about persuasion and change management than anything else.</p><p>In parallel, data science was picking up momentum. There were HBR articles touting the sexiest job of the 21st century, books written and courses offered. Given my inclination to anything quantitative, I enrolled for a program with IIM Bangalore. It was worth it. More than venturing into standalone analytics engagements, I was able to do process transformations much better and faster. Be it classification, regression or clustering, every method had a rightful place in the transformation arsenal.</p><p>The biggest leap in my practice came in the form of process mining. For folks who have done lean action workouts, you can imagine the process discovery days, with their giant whiteboards, post-its and heated discussions on how the process is actually getting executed. All of that was gone. You had an objective, data-based view of your process. With the digital twin in place, the product firms began to offer a host of other features like conformance checking, automation, simulation, prediction and more. It's only a start, and it's no wonder that the market is expected to grow at 45% a year till 2030.</p><p>While all these methods helped you understand the as-is process and get a view of bottlenecks and other causes of variation - automation, data products and Gen-AI have helped us implement some quick fixes or band-aid solutions and make the process better. From the days of Excel macros for RPA, we have come a long way to intelligent automation. Gen AI, though in its initial stages, has offered us tremendous potential to improve operations. We have tested out some promising use-cases in the areas of agent enablement and customer servicing.</p><p>It's an exciting time to be alive. With all these tools in our arsenal, executing transformations has never been more rewarding. In the next few weeks, I will share my experiences of using these tools with a dose of gyaan that inevitably comes along with grey hairs. Stay tuned.</p>
]]></content:encoded></item><item><title><![CDATA[Hello world!]]></title><description><![CDATA[print("Hello, world!")]]></description><link>https://www.ramram.ai/p/hello-world</link><guid isPermaLink="false">https://www.ramram.ai/p/hello-world</guid><dc:creator><![CDATA[Ram]]></dc:creator><pubDate>Sun, 14 Apr 2024 15:51:31 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!74zJ!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff917c3e1-5b8d-4789-b5b7-ea0c71be1b52_512x512.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>It has been a long-cherished dream to have a blog with this evergreen poem as the first post. Every line of Tagore resonates with the values that I hold dear, and I invite you to join my journey.</p><p><strong>Where The Mind Is Without Fear</strong></p><p>Where the mind is without fear and the head is held high<br>Where knowledge is free<br>Where the world has not been broken up into fragments<br>By narrow domestic walls<br>Where words come out from the depth of truth<br>Where tireless striving stretches its arms towards perfection<br>Where the clear stream of reason has not lost its way<br>Into the dreary desert sand of dead habit<br>Where the mind is led forward by thee<br>Into ever-widening thought and action<br>Into that heaven of freedom, my Father, let my country awake.</p><div><hr></div>]]></content:encoded></item></channel></rss>