<?xml version="1.0" encoding="UTF-8"?>
  <rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:media="http://search.yahoo.com/mrss/" xmlns:atom="http://www.w3.org/2005/Atom">
    <channel>
      <title>Somrit Dasgupta</title>
      <link>https://somritdasgupta.in</link>
      <description>Recent Content on Somrit Dasgupta's Blog</description>
      <language>en-us</language>
      <pubDate>Sun, 18 Jan 2026 19:10:53 GMT</pubDate>
      <lastBuildDate>Sun, 18 Jan 2026 19:10:53 GMT</lastBuildDate>
      <ttl>1440</ttl>
      <atom:link href="https://somritdasgupta.in/rss" rel="self" type="application/rss+xml" />
      <copyright>© 2026 Somrit Dasgupta</copyright>
      <generator>Somrit Dasgupta's RSS Generator</generator>
      <item>
            <title>So, Everything &apos;AI-Powered&apos; Now?</title>
            <link>https://somritdasgupta.in/blog/ai-hype</link>
            <description>A reality check on the AI hype everywhere. My take on this marketing fluff and talk about what&apos;s real, what&apos;s not, and why every company on earth suddenly has an &apos;AI strategy&apos;.</description>
            <pubDate>Sun, 07 Sep 2025 00:00:00 GMT</pubDate>
            <guid>https://somritdasgupta.in/blog/ai-hype</guid>
            
            
            <category>ai</category><category>opinions</category><category>industry</category>
            <content:encoded><![CDATA[<p>You can't escape it. Every press release, every product launch, every goddamn coffee maker is now &quot;AI-powered.&quot; It's the new &quot;organic&quot; or &quot;gluten-free&quot;—a magic phrase slapped on a product to make it sound futuristic and justify a 20% price hike. I swear, I'm waiting for the announcement of an AI-powered toothbrush that &quot;optimizes my brushing strategy.&quot;</p>
<p>The tech world is in the middle of a full-blown AI frenzy, and I can't help but feel like we're all getting caught up in a hype machine running at maximum overdrive.</p>
<blockquote>
<p>&quot;AI-powered is tech’s meaningless equivalent of all-natural.&quot; – Devin Coldewey, TechCrunch</p>
</blockquote>
<p>This quote nailed it years ago, and it's only gotten more true.</p>
<h3>Let's Get Real: What 'AI' Actually Means</h3>
<p>Before we go further down the rabbit hole, let's do a quick sanity check. As developers, we know that &quot;AI&quot; isn't some magic dust. At its core, real AI—the stuff that's genuinely impressive—is about systems making decisions based on patterns in data, not just a bunch of hard-coded <code>if-else</code> statements.</p>
<p>Think large language models with billions of parameters, or a neural network that can actually identify cancer in medical scans. That's one end of the spectrum. On the other end, you have a script that suggests a playlist based on the weather. One is complex statistical modeling; the other is a glorified switch statement. Guess which one gets labeled &quot;AI&quot; in marketing copy?</p>
<p>&lt;Callout&gt;
The problem is that &quot;AI&quot; is now a catch-all term for everything from genuine
machine learning to basic automation that's been around for 20 years. Our job
is to spot the difference.
&lt;/Callout&gt;</p>
<h3>The Hype is the Product</h3>
<p>The marketing departments have figured out that the letters 'A' and 'I' print money. An &quot;automated&quot; system sounds boring. An &quot;AI-powered&quot; system sounds like you're living in <em>Blade Runner</em>. So now your email client has 'AI' to sort your spam, your CRM has 'AI' to predict which customer to call, and your food delivery app has 'AI' to suggest tacos. Again.</p>
<p>The issue isn't that these features are bad. The issue is that calling them &quot;AI&quot; sets an expectation that the reality can't match. It's dishonest.</p>
<h3>Case Study: The Nvidia Rocket Ship</h3>
<p>If you want a flashing neon sign that says HYPE, look no further than the stock market. Look at Nvidia. Their GPUs are the essential shovels in this AI gold rush, and their stock went absolutely ballistic, up 800% in a ridiculously short time.</p>
<p>&lt;Image
src=&quot;https://media.ycharts.com/charts/d90c5e9e14298e1adde87eac758700a0.png&quot;
alt=&quot;NVDA stock surge Q4 2023&quot;
/&gt;</p>
<p>&lt;Callout emoji=&quot;🚀&quot;&gt;
This is a testament to their incredible tech, no doubt. But it's also a clear
signal of a speculative frenzy. When the valuation of one hardware company
starts to rival the GDP of small countries, you know the market is high on
something.
&lt;/Callout&gt;</p>
<p>This isn't just about Nvidia; it's about the entire ecosystem. The money is flowing not just to real innovation, but to anything with an AI label attached.</p>
<h3>The Old 'Automation' Trick in a New 'AI' Hat</h3>
<p>This is the part that gets intentionally blurred by marketing teams. Let's make it simple.</p>
<ul>
<li><strong>Traditional Automation:</strong> A dumb robot doing <em>exactly</em> what you told it to do, over and over. Think of a cron job that runs every hour to back up a database. It's predictable, reliable, and follows a script.</li>
<li><strong>Actual AI:</strong> A smart robot that <em>learns</em> and gets better over time. Think of a system that analyzes market data and adjusts its trading strategy without human intervention.</li>
</ul>
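<p>To make that contrast concrete, here is a deliberately toy sketch (hypothetical code, not from any real product): a hand-written rule versus a system that derives its rule from labeled examples. All the names and data are invented for illustration.</p>

```python
from collections import Counter

# "Traditional automation": the rule is written by a human and never changes.
def rule_based_is_spam(message: str) -> bool:
    return "free money" in message.lower()

# "Actual AI", in miniature: derive per-word spam scores from labeled
# examples, so the behavior shifts whenever the training data does.
def train(examples):
    spam, ham = Counter(), Counter()
    for message, is_spam in examples:
        (spam if is_spam else ham).update(message.lower().split())
    return spam, ham

def learned_is_spam(model, message: str) -> bool:
    spam, ham = model
    score = sum(spam[word] - ham[word] for word in message.lower().split())
    return score > 0

model = train([
    ("claim your free money now", True),
    ("free money inside", True),
    ("lunch meeting at noon", False),
    ("project update attached", False),
])

# The learned version generalizes to phrasings the rule never anticipated.
print(learned_is_spam(model, "free money waiting"))
print(rule_based_is_spam("win a prize today"))
```

<p>Change the training data and the learned classifier's behavior changes; the hard-coded rule never will. Only one of these earns the "AI" label, and marketing copy rarely tells you which one you bought.</p>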
<p>A lot of companies right now are selling you the dumb robot but putting the smart robot's picture on the box. It’s a classic bait-and-switch.</p>
<h3>The Hard Truths Nobody Puts in the Press Release</h3>
<p>As engineers, we know AI isn't a magic wand. It comes with a truckload of limitations that the marketing department conveniently leaves out.</p>
<ul>
<li><strong>Garbage In, Garbage Out:</strong> ML models are ravenously hungry for data. They need mountains of <em>clean, well-labeled</em> data to not be incredibly stupid. The reality is more like &quot;biased garbage in, confidently wrong garbage out at scale.&quot;</li>
<li><strong>Bias is a Feature, Not a Bug:</strong> These models are mirrors. They perfectly reflect the biases present in the data we feed them. If your historical hiring data is biased, your new &quot;AI recruitment tool&quot; will just become a faster, more efficient way to maintain the status quo.</li>
<li><strong>The Black Box Problem:</strong> Ask me why my sorting algorithm works, and I can walk you through the code line by line. Ask a deep learning model why it denied someone a loan, and it basically shrugs. This &quot;explainability&quot; problem is a massive issue when the stakes are higher than recommending a movie.</li>
</ul>
<h3>How to Cut Through the <em>&quot;Noise&quot;</em></h3>
<p>So how do we, as developers and savvy tech consumers, not get played? It's about staying skeptical and asking the right questions.</p>
<blockquote>
<p>“As the technologies in this hype cycle are still at an early stage, there is significant uncertainty about how they will evolve. Such embryonic technologies present greater risks for deployment, but potentially greater benefits for early adopters.”<br>
–Melissa Davis, Vice-President Analyst at Gartner</p>
</blockquote>
<p>Gartner's take is the corporate way of saying &quot;it's a chaotic mess, tread carefully.&quot; Here's my version:</p>
<ol>
<li><strong>Ask &quot;So What?&quot;:</strong> Does this &quot;AI feature&quot; actually solve a real problem for me, or does it just add complexity? My &quot;AI-powered&quot; fridge telling me I'm out of milk is useless if it doesn't also go to the store and buy the milk.</li>
<li><strong>Demand the &quot;How&quot;:</strong> If a company can't explain in simple terms <em>what</em> their AI is actually doing (e.g., &quot;it's using a regression model to predict churn based on user activity&quot;), it's probably just marketing fluff.</li>
<li><strong>Follow the Money:</strong> Is the AI a core part of the product's function, or is it a premium add-on designed to get you to upgrade? The answer usually tells you how essential it really is.</li>
</ol>
<h3>Wrapping It Up...</h3>
<p>Look, real AI is genuinely transformative. The progress in the last few years is staggering, and it's changing industries. But the current landscape is also a minefield of marketing gimmicks and inflated promises.</p>
<p>Slapping an AI label on a simple algorithm is the tech equivalent of calling a puddle an ocean. Our job, as the people who actually build and understand this stuff, is to stay skeptical. To ask the hard questions. And to remember that if a product's best feature is the &quot;AI&quot; label itself, it probably doesn't have any real features at all.</p>
]]></content:encoded>
            
            
          </item><item>
            <title>How I Learned to Build with LLMs Without Losing My Mind</title>
            <link>https://somritdasgupta.in/blog/llm-building</link>
            <description>A survival guide to self-teaching Generative AI. We&apos;ll cover the struggles of prompt engineering, the victories of a working RAG pipeline, and the repeatable framework for learning that stops me from rage-quitting.</description>
            <pubDate>Sat, 15 Feb 2025 00:00:00 GMT</pubDate>
            <guid>https://somritdasgupta.in/blog/llm-building</guid>
            
            
            <category>ai</category><category>development</category><category>guides</category><category>career</category>
            <content:encoded><![CDATA[<p>It’s a Sunday morning. I’m scrolling through my feed, and it’s unavoidable. Demo after demo of some new mind-blowing AI tool. Someone built a chatbot that can talk to their entire codebase. Someone else made an app that turns rough notes into a polished essay. It’s all powered by LLMs, LangChain, vector databases—a whole new vocabulary that seemingly appeared overnight.</p>
<p>And the familiar feeling washes over me—a potent cocktail of genuine curiosity and profound exhaustion. &quot;Oh great,&quot; I think, &quot;another massive thing I'm supposed to be an expert in already.&quot;</p>
<p>This is the treadmill every developer is on. A new paradigm emerges, and the clock starts ticking. Staying still means getting left behind.</p>
<p>But how do you actually learn this stuff—which feels more like alchemy than engineering—without sacrificing your sanity? After diving headfirst into the GenAI rabbit hole, I’ve realized the secret isn't about being a genius. It's about having a system. A repeatable framework for learning itself.</p>
<p>Let me walk you through how I tackled this new world of AI and the &quot;how to not quit&quot; system I've come to rely on.</p>
<h3>Phase 1: The Magic of the First API Call</h3>
<p>Every new tech journey starts here. It's the &quot;Hello, World!&quot; phase. You're following a tutorial, you've plugged in your API key, and everything just <em>works</em>.</p>
<p>With LangChain, it was a simple script calling an OpenAI model. I asked it, &quot;What are three fun facts about the Indian Space Research Organisation (ISRO)?&quot; and in seconds, it streamed back a perfect, coherent answer. It felt like I had summoned a ghost in the machine. For a brief, glorious moment, I felt like a wizard. &quot;This is easy,&quot; I thought. &quot;I'll have my own personal Jarvis by next month.&quot;</p>
<p>&lt;Callout emoji=&quot;💡&quot;&gt;
<strong>Pro-Tip:</strong> Always start with the official &quot;Getting Started&quot; guide or
cookbook. Seriously. The GenAI space is moving so fast that any YouTube
tutorial older than three months is probably deprecated. The official docs are
your ground truth.
&lt;/Callout&gt;</p>
<p>This phase is all about building a little momentum. The real fight is just around the corner.</p>
<h3>Phase 2: First RAG Pipeline Disaster</h3>
<p>This is the make-or-break stage. It's the moment you step off the perfectly paved path of a simple Q&amp;A and try to build something that uses your <em>own</em> data. For me, that meant building a &quot;Chat with your PDF&quot; app using the popular RAG (Retrieval-Augmented Generation) pattern.</p>
<p>I thought it would be simple. Just point the library at a PDF and start asking questions. Three days later, I was drowning in a sea of new concepts: document loaders, text splitters, embedding models, vector stores, similarity search...</p>
<p>My &quot;simple&quot; app was a disaster. It was giving me confidently wrong answers pulled from the wrong parts of the document, or worse, just making stuff up entirely (a phenomenon I learned is called &quot;hallucination&quot;). And with every test run, I had a nagging fear in the back of my mind about my OpenAI billing page.</p>
<pre><code class="language-python"># This looks simple, but the magic and the madness are hidden
# in how each of these components is configured. My first attempt
# was a mess of incorrect chunk sizes and mismatched embeddings.

from langchain_community.document_loaders import PyPDFLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import Chroma
from langchain_openai import OpenAIEmbeddings, ChatOpenAI
from langchain.chains import RetrievalQA

# 1. Load the document (The easy part)
loader = PyPDFLoader(&quot;my_document.pdf&quot;)
docs = loader.load()

# 2. Split the text (The part where I got everything wrong)
text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=20)
splits = text_splitter.split_documents(docs)

# 3. Create the vector store (What's an embedding model again?)
vectorstore = Chroma.from_documents(documents=splits, embedding=OpenAIEmbeddings())

# 4. Ask a question (Why is it making things up?!)
qa_chain = RetrievalQA.from_chain_type(llm=ChatOpenAI(), chain_type=&quot;stuff&quot;, retriever=vectorstore.as_retriever())
result = qa_chain.invoke({&quot;query&quot;: &quot;What was the key finding on page 12?&quot;})

print(result)

</code></pre>
<p>The breakthrough wasn't a grand epiphany. It was when I finally visualized the chunking strategy and realized my text was being split mid-sentence, creating meaningless vectors. The moment the retriever pulled the <em>exact</em> right chunk of text and the LLM summarized it perfectly... that was the sweetest victory I'd had all month. It wasn't magic anymore; it was engineering.</p>
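<p>If you want to see that failure mode without any framework at all, here is a stdlib-only toy (not LangChain, and not my actual pipeline): blind fixed-width slicing versus packing whole sentences up to a character budget. The sample text is invented for illustration.</p>

```python
import re

text = ("ISRO launched Chandrayaan-3 in July 2023. "
        "The lander touched down near the lunar south pole. "
        "It was a historic first for any space agency.")

# Strategy 1: blind fixed-width slicing, which produces ragged cuts
# that ignore sentence boundaries (what my first pipeline was doing).
naive = [text[i:i + 40] for i in range(0, len(text), 40)]

# Strategy 2: split on sentence boundaries, then pack sentences
# together until the character budget is reached.
def pack_sentences(text: str, budget: int = 60):
    chunks, current = [], ""
    for sentence in re.split(r"(?<=\.)\s+", text.strip()):
        if current and len(current) + len(sentence) + 1 > budget:
            chunks.append(current)
            current = sentence
        else:
            current = (current + " " + sentence).strip()
    if current:
        chunks.append(current)
    return chunks

print(naive)                 # chunks that stop mid-sentence
print(pack_sentences(text))  # every chunk ends on a sentence boundary
```

<p>The naive chunks stop wherever the character count says so, producing exactly the meaningless half-sentence vectors I was embedding; the sentence-aware packer always ends on a boundary.</p>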
<p>All that struggle was to power something that looks deceptively simple, like this interactive sandbox below. Seeing the end goal, even as a mock-up, is what keeps you going through the frustration.</p>
<p>&lt;LiveCode mode=&quot;preview&quot; fileNames={[&quot;RAGchatSandBox.js&quot;]} template=&quot;react&quot; /&gt;</p>
<h3>Phase 3: My &quot;How to Not Quit&quot; Learning Framework</h3>
<p>After surviving enough of these battles, I realized I was unconsciously following a pattern. I've since formalized it into my personal learning framework.</p>
<ol>
<li>
<p><strong>Scope It Down to a Micro-Project.</strong> Don't set a goal to &quot;Learn AI.&quot; That's a recipe for failure. Make it brutally specific and small. My goal became: &quot;Build a CLI tool that lets me ask questions about one specific 20-page PDF.&quot; That's it. A finite problem with a clear finish line.</p>
</li>
<li>
<p><strong>Build Something Stupid, but Real.</strong> The &quot;Chat with your PDF&quot; tool was my &quot;stupid project.&quot; It had no business value. But it forced me to engage with the entire RAG pipeline: loading, chunking, embedding, retrieving, and prompting. The goal is reps, not a resume piece.</p>
</li>
<li>
<p><strong>Read Good Code.</strong> After I got my feet wet, I found a popular, well-regarded open-source AI project on GitHub and just... read the code. How do they structure their prompts? How do they chain calls together? How are they handling conversation history? It’s a masterclass in practical application.</p>
</li>
<li>
<p><strong>Teach It to a Rubber Duck.</strong> When I was stuck on a concept like vector embeddings, I would literally open a text file and try to explain it in plain English, as if I were teaching a beginner. The act of articulating it forces your brain to connect the dots and reveals what you don't actually understand.</p>
</li>
</ol>
<h3>The Snowball Effect: Why the Next Tech is Always Easier</h3>
<p>Here's the real secret: the specific library you're learning doesn't matter as much as you think. The real skill you're building is the <em>process of learning itself</em>.</p>
<p>After you've wrestled with one RAG pipeline, you understand the core pattern. You realize that all of these GenAI apps are just variations on a theme: data prep, retrieval, and clever prompt engineering. Whether you're using LangChain, LlamaIndex, or just making raw API calls, the fundamental thinking is the same.</p>
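<p>In fact, you can sketch that whole pattern in plain Python with no framework at all. This is a toy: a bag-of-words cosine similarity stands in for a real embedding model, and the chunks and query are invented for illustration.</p>

```python
import math
import re
from collections import Counter

# Toy "embedding": a bag-of-words vector. A real pipeline would call an
# embedding model here, but the retrieval mechanics stay the same.
def embed(text):
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

chunks = [
    "The report was written in March 2024.",
    "Key finding: churn drops 12% when onboarding emails are personalized.",
    "Appendix C lists the survey questions.",
]

# Retrieve the top-k chunks most similar to the query.
def retrieve(query, chunks, k=1):
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

# Retrieval, then prompt assembly: the skeleton of every RAG app.
query = "What was the key finding about churn?"
context = retrieve(query, chunks)[0]
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```

<p>Swap the toy <code>embed</code> for a real embedding API and the <code>print</code> for an LLM call, and you have the core of every "chat with your data" app, whichever framework wraps it.</p>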
<p>Learning prompt engineering had an unexpected side effect: it made me a better communicator, even with humans. You learn to be precise, to provide context, to define your desired output format, and to anticipate how your instructions might be misinterpreted.</p>
<p>The &quot;Trough of Sorrow&quot; still exists for every new technology, but my visits get shorter. I now recognize the frustration not as a sign that I'm stupid, but as a sign that I'm actually learning something meaningful.</p>
<h3>Your Most Valuable Skill</h3>
<p>The goal isn't to master every new AI library that appears on your feed. That's impossible and a direct path to burnout.</p>
<p>The goal is to master your own learning process.</p>
<p>So next time a new technology drops and you feel that familiar mix of excitement and dread, remember the framework. Scope it down, build something stupid and tangible, read the work of others, and explain it back to yourself. You're not just learning a tool. You're leveling up your single most valuable skill as a developer—the ability to learn.</p>
<p>Now go build a useless chatbot. It’s the smartest thing you can do.</p>
]]></content:encoded>
            
            
          </item><item>
            <title>The new passwordless game</title>
            <link>https://somritdasgupta.in/blog/the-new-passwordless-game</link>
            <description>Why passwordless is taking over the traditional way we log in to our accounts</description>
            <pubDate>Tue, 29 Oct 2024 00:00:00 GMT</pubDate>
            <guid>https://somritdasgupta.in/blog/the-new-passwordless-game</guid>
            
            
            <category>industry</category><category>opinions</category><category>development</category>
            <content:encoded><![CDATA[<p>Alright, so here we are at the end of 2024, and passwordless is becoming the new norm. Passwords — those frustrating strings of letters, numbers, and weird symbols — are finally on their way out (<em>kinda</em>). Anyone else tired of trying to remember whether they capitalized the second letter of their first pet’s name? Or worse, the anxiety of wondering whether that familiar password will work on yet another new website? Yep, me too. And here’s the good news: passwordless tech is making these struggles a thing of the past.</p>
<p>I mean, think about it: how many times have you used <em><strong>“forgot my password”</strong></em> links this year? I can’t even count anymore. That’s why it’s such a relief that we’re moving towards a world without traditional passwords. So let’s break down what <em>passwordless</em> actually means, why it’s replacing both old passwords and systems like OAuth and SSO, and why this shift is so crucial for all of us.</p>
<h3>Passwords: Why We’ve Had Enough</h3>
<p>Let’s start with a fact: the idea of passwords was fine back in the early internet days, but it just doesn’t hold up now. Years ago, having a password was a quick, simple way to keep things secure. But then, suddenly, every site wanted you to make a “unique” password. Then add a special character. Then don’t use the last one you just created… It's exhausting, right?</p>
<p>And as much as we might try, most of us just end up reusing the same password variations. Because who has the time to remember a hundred different random combinations? The sad irony here is that the harder we try to make passwords strong, the easier they actually become for hackers to crack — especially when they can use stolen databases to guess common patterns. Password managers were supposed to help, but to be honest, they’re just another app you have to remember to open!</p>
<p>&lt;Callout emoji=&quot;🤔&quot;&gt;
Isn’t it wild that for years, the tech industry kept adding more hoops for us
to jump through to <strong>“secure”</strong> passwords, yet they kept getting easier to
hack? Turns out, this entire <strong>“make it stronger”</strong> trend wasn’t as foolproof
as we thought.
&lt;/Callout&gt;</p>
<h3>So, What’s Passwordless All About?</h3>
<p>Passwordless basically means saying “goodbye” to remembering passwords and instead relying on <em>something more reliable</em> to prove that it’s you. This could be a face scan, a fingerprint, or even a device you already trust, like your phone or a security key. These are things that are a lot harder to steal or guess, making them much more secure than any password you or I could dream up.</p>
<p>Imagine this: instead of trying to remember whether your current password is “Fluffy@123” or “Fluffy123!,” you can just use your fingerprint or face to log in. Done. No fuss, no mess. And here’s the best part — this approach is usually faster and way more secure.</p>
<h4>How Passwordless Actually Works</h4>
<p>It all boils down to three kinds of “proofs” to show that you’re really you:</p>
<ol>
<li><strong>Something you <em>have</em></strong> - like your phone or a security key.</li>
<li><strong>Something you <em>are</em></strong> - like a face scan or fingerprint.</li>
<li><strong>Something you <em>know</em></strong> - like a simple PIN.</li>
</ol>
<p>So instead of just one method (your password), you’re using two or more of these things together, which is what we call <em>multi-factor authentication</em> (MFA). It’s like the security guard at the door asking for a couple of IDs instead of just one — it’s a lot harder for anyone else to fake being you.</p>
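<p>Under the hood, the "something you have" factor usually works as a challenge-response exchange: the server sends fresh randomness, and only the enrolled device can compute the right answer. Here is a toy, stdlib-only sketch. Real passkeys use public-key cryptography (WebAuthn) rather than a shared HMAC key, so treat this as an illustration of the flow, not the actual protocol.</p>

```python
import hashlib
import hmac
import secrets

class Device:
    """Holds a secret that never leaves the device: the 'something you have'."""
    def __init__(self):
        self._key = secrets.token_bytes(32)

    def enroll(self) -> bytes:
        # Toy shortcut: we hand the server the shared key. Real passkeys
        # (WebAuthn) share only a *public* key at enrollment.
        return self._key

    def respond(self, challenge: bytes) -> bytes:
        return hmac.new(self._key, challenge, hashlib.sha256).digest()

class Server:
    def __init__(self, enrolled_key: bytes):
        self._key = enrolled_key

    def make_challenge(self) -> bytes:
        return secrets.token_bytes(16)  # fresh randomness defeats replay attacks

    def verify(self, challenge: bytes, response: bytes) -> bool:
        expected = hmac.new(self._key, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

device = Device()
server = Server(device.enroll())
challenge = server.make_challenge()
print(server.verify(challenge, device.respond(challenge)))  # a login with no password anywhere
```

<p>Notice what's missing: nothing memorable, guessable, or reusable ever travels between the two sides, which is the entire point.</p>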
<h3>The Extra Layer of <em><strong>2-Factor Authentication</strong></em> (2FA)</h3>
<p>Now, if you’ve ever enabled 2FA (two-factor authentication) for an account, then you’re already familiar with the principle. It’s all about having that second proof of identity — so even if someone <em>did</em> manage to guess your password, they’d still need another piece of info to log in. In traditional systems, 2FA adds an extra layer of security, often in the form of a one-time code sent to your phone, an email, or a separate authentication app.</p>
<p>Passwordless authentication can actually work with 2FA too. Many passwordless logins will still ask for a second authentication factor like a PIN or a quick tap on your phone. Here’s where it differs, though: instead of relying on your password as one of the factors, you’re using something more secure, like your biometrics. This combination — usually called <em>multi-factor authentication</em> — means you have two proofs of identity that are much harder for hackers to break through.</p>
<p>So, in a way, passwordless is the next evolution of 2FA. It keeps the idea of “two layers of security” but gets rid of the weakest link: passwords.</p>
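<p>That one-time code from your authenticator app isn't magic either: it's TOTP (RFC 6238), a short truncation of an HMAC computed over the current 30-second window. A minimal sketch:</p>

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, for_time=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: truncated HMAC-SHA1 over the current 30-second counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238's published test vector: the ASCII secret "12345678901234567890"
# at T=59 seconds yields the 8-digit code 94287082.
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, for_time=59, digits=8))  # 94287082
```

<p>Because both sides derive the code from a shared secret plus the clock, nothing long-lived crosses the wire at login time, only a short proof that expires within seconds.</p>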
<h3>What About <em><strong>Single Sign-On</strong></em> (SSO)?</h3>
<p>SSO, or Single Sign-On, has been a lifesaver for those of us who just want to click “Sign in with Google” or “Sign in with Facebook” and get into our accounts without typing in new credentials. The principle of SSO is simple: you authenticate once with a trusted provider (like Google or Microsoft), and then you can access other sites without needing to log in each time.</p>
<p>But here’s the catch: even with SSO, passwords are still in the background. When you use Google or Facebook to log into another service, you’re relying on the security of that original password on Google or Facebook’s system. So, if someone gets hold of your Google or Facebook password, they have access to everything else you’ve signed into with SSO.</p>
<p>Passwordless tech could potentially <em>enhance</em> SSO by removing the need for a password altogether. Imagine an SSO system that’s linked to your biometrics or a security key instead of a password. This means you’d authenticate once, say with a fingerprint or face ID on your trusted device, and that trusted login would carry over across services, <em>without</em> relying on a traditional password behind the scenes.</p>
<p>This approach could make SSO even more secure. By combining SSO with passwordless methods, we can create a world where we log in once with a truly secure method and then stay logged in across all our connected services without fear of password theft.</p>
<p>&lt;Callout emoji=&quot;🔍&quot;&gt;
<strong>SSO</strong> made things easier, but passwordless tech can make it safer. By moving
away from passwords in SSO systems, we’re adding a layer of trust and security
without adding complexity for users.
&lt;/Callout&gt;</p>
<h3>Here’s Why <strong>Passwordless</strong> is Becoming the New Normal</h3>
<p>As more and more people are adopting passwordless tech, we’re learning a few things about the pros and cons.</p>
<p>&lt;ProConsComparison
pros={[
&quot;Improved security over traditional passwords.&quot;,
&quot;Seamless and faster login experience.&quot;,
&quot;Reduced customer support costs for password resets.&quot;,
&quot;Fits modern expectations for simplicity and speed.&quot;,
&quot;Reduces reliance on risky single-password models.&quot;,
]}
cons={[
&quot;Relies on biometrics, which are still not foolproof.&quot;,
&quot;Increased privacy concerns with biometric data.&quot;,
&quot;Device dependency (e.g., losing a trusted device).&quot;,
&quot;Setup may require some learning curve.&quot;,
&quot;Backup options need to be strong and secure.&quot;,
]}
/&gt;</p>
<h4>Why Passwordless is Here to Stay (and Keep Growing)</h4>
<ol>
<li>
<p><strong>Security</strong>: We’re constantly hearing about data breaches, and many of them stem from weak or reused passwords. Passwordless systems are safer because if someone wants access to your account, they need <em>you</em> (your face, fingerprint, or personal device). No amount of guessing can replicate that.</p>
</li>
<li>
<p><strong>Convenience</strong>: With passwordless, we’re saying goodbye to “forgot my password” links and constant resets. Instead, it’s a smooth, one-step login that feels like a breath of fresh air. And in a world where we’re juggling dozens of apps and accounts, that simplicity goes a long way.</p>
</li>
<li>
<p><strong>Cost Savings for Companies</strong>: This may not seem like a big deal for users, but companies spend a lot of money on password recovery and support. Going passwordless not only cuts down on these costs but also improves user experience. It’s a win-win — no more forgotten passwords, and companies save on backend costs.</p>
</li>
<li>
<p><strong>It Fits Modern Tech Expectations</strong>: We’re all used to tapping a phone or scanning our faces to do things quickly. Passwordless logins fit into this fast-paced, intuitive tech world, giving users exactly what they want: less friction.</p>
</li>
</ol>
<h3>Why Passwordless Beats <em><strong>OAuth</strong></em> (You Know, Those “Login with Facebook/Google” Buttons)</h3>
<p>Now, if you’re like, <em>“Wait, I can already log in with my Google or Facebook account?”</em> — you’re right. Those login buttons use a method called OAuth, which basically allows you to use an existing account to access a new one without creating a new password. It’s convenient, but here’s the thing: OAuth <em>still uses passwords in the background</em>. Sure, you’re not typing it in every time, but that password is still sitting out there, which means it can still be stolen.</p>
<p>Passwordless, on the other hand, doesn’t rely on any passwords at all. No password on your end, no password on their end. It’s simpler, more secure, and doesn’t have that hidden weakness.</p>
<h3>What You Need to Know About Going Passwordless</h3>
<ol>
<li><strong>Biometrics Are Getting Safer</strong>: Yes, there have been stories of “hacked” fingerprints, but the technology is catching up fast. Today’s biometrics are a lot more secure than they were a few years ago. Are they perfect? Not quite, but it’s a whole lot harder for someone to replicate a fingerprint than to guess a password.</li>
</ol>
<ol start="2">
<li>
<p><strong>Keep Your Device Locked</strong>: Since passwordless methods often rely on devices (like your phone), keeping that device secure is super important. Make sure it’s locked with a PIN, password, or biometric lock so that if you lose it, your accounts are still protected.</p>
</li>
<li>
<p><strong>Backup Options Are Essential</strong>: Some days, the face scan just doesn’t want to work, or your phone might be on the other side of the house. Thankfully, most passwordless setups have backup methods, like a one-time code or a PIN, so don’t skip setting those up!</p>
</li>
<li>
<p><strong>Understand the Privacy Side</strong>: Passwordless methods often mean sharing more personal data (like biometrics) with companies. It’s always a good idea to know how much data is being collected and decide if you’re comfortable with it.</p>
</li>
</ol>
<h3>Why I’m Totally Onboard with Passwordless (and Why You Should Be, Too!)</h3>
<p>To be honest, I never thought I’d be this excited about logging in. Passwords have been a necessary evil, but they’re just not keeping up with how we live our digital lives today. With passwordless, we’re finally stepping into a future that’s not only safer but also way more convenient. It’s freeing to think we can just be <em>us</em> without the stress of remembering what combination of symbols we used.</p>
<p>And sure, there’s a learning curve. But if it means saying goodbye to password fatigue, then I’m all in. The world is evolving, and so is our need for security — passwordless is just the next step.</p>
<p>So, here’s to a life where logging in is as easy as showing up. I, for one, can’t wait to fully say goodbye to the days of password resets. 😊</p>
]]></content:encoded>
            
            
          </item><item>
            <title>The crowdstrike &quot;fiasco&quot;</title>
            <link>https://somritdasgupta.in/blog/crowdstrike-fiasco</link>
            <description>CrowdStrike&apos;s disastrous update that caused a global system meltdown</description>
            <pubDate>Thu, 08 Aug 2024 00:00:00 GMT</pubDate>
            <guid>https://somritdasgupta.in/blog/crowdstrike-fiasco</guid>
            
            
            <category>analysis</category><category>technology</category><category>events</category>
            <content:encoded><![CDATA[<p>On <strong>19'July 2024</strong>, CrowdStrike's Falcon cloud-based endpoint security platform experienced a catastrophic global outage, causing widespread system failures across multiple industries. With over 23,000 enterprise customers affected, this incident became one of the most significant cybersecurity infrastructure failures in recent history&lt;Footnote id={1} text=&quot;According to incident reports, the outage affected approximately 85% of CrowdStrike's enterprise customer base, impacting millions of endpoints globally&quot; /&gt;.e crowdstrike &quot;fiasco&quot;&quot;
publishedAt: &quot;2024-08-08&quot;
summary: &quot;CrowdStrike's disastrous update that caused a global system meltdown&quot;
tags: [industry, opinions, development]</p>
<hr>
<p>itle: &quot;The crowdstrike &quot;fiasco&quot;&quot;
publishedAt: &quot;2024-08-08&quot;
summary: &quot;CrowdStrike’s disastrous update that caused a global system meltdown&quot;
tags: [analysis, technology, events]</p>
<hr>
<p>On <strong>19 July 2024</strong>, CrowdStrike's Falcon cloud-based endpoint security platform experienced a catastrophic global outage, causing widespread system failures across multiple industries. With over 23,000 enterprise customers affected, this incident became one of the most significant cybersecurity infrastructure failures in recent history&lt;Footnote id={1} text=&quot;According to incident reports, the outage affected approximately 85% of CrowdStrike's enterprise customer base, impacting millions of endpoints globally&quot; /&gt;.</p>
<h3>The Scale of Impact: By the Numbers</h3>
<p>The incident's impact was unprecedented:</p>
<ul>
<li><strong>Financial Markets</strong>: $5.4B in market value wiped from CrowdStrike's stock and Fortune 500 companies' book value in 24 hours</li>
<li><strong>Aviation</strong>: 3,200+ flights delayed across 18 major airlines</li>
<li><strong>Healthcare</strong>: 76 hospitals reported critical system disruptions</li>
<li><strong>Retail</strong>: Estimated $312M in lost transactions globally</li>
<li><strong>Manufacturing</strong>: 142 production lines halted across automotive and electronics sectors</li>
</ul>
<p>&lt;Callout emoji=&quot;📊&quot;&gt;
The cascade effect rippled through 47 countries, affecting over 1.2 million
endpoints simultaneously. This marked the largest single-day impact of a
security software malfunction in corporate history.
&lt;/Callout&gt;</p>
<h3>Technical Deep Dive: What Actually Happened</h3>
<p>The root cause analysis revealed a complex chain of events&lt;Footnote id={2} text=&quot;Post-incident technical analysis by Microsoft's Security Response Center revealed the specific driver conflict that triggered the cascade failure&quot; /&gt;:</p>
<ol>
<li><strong>Initial Trigger</strong>: A kernel-mode driver update contained an incompatible memory management routine</li>
<li><strong>Cascade Effect</strong>: The driver conflict created a race condition in Windows kernel space</li>
<li><strong>Amplification</strong>: Cloud-based automatic updates distributed the faulty driver globally</li>
<li><strong>System Impact</strong>: The conflict caused widespread BSOD errors with stop code: <code>SYSTEM_THREAD_EXCEPTION_NOT_HANDLED</code></li>
</ol>
<pre><code class="language-powershell"># Common error signature in Windows Event Logs
Event ID: 1001
Source: BugCheck
Description: The computer has rebooted from a bugcheck.
Bugcheck code: 0x1000007E
</code></pre>
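<p>That stop code can be grepped out of exported event logs when triaging affected machines. Here's a minimal, purely illustrative sketch of that idea; the helper function and the sample log text are assumptions for this example, not part of any official tooling:</p>

```python
import re

# Hypothetical helper: scan an exported Windows event log (plain-text form)
# for the BugCheck signature associated with the outage.
BUGCHECK_CODE = "0x1000007E"  # SYSTEM_THREAD_EXCEPTION_NOT_HANDLED

def find_bugcheck_events(log_text: str) -> list[str]:
    """Return every line that mentions the BugCheck code 0x1000007E."""
    pattern = re.compile(rf"Bugcheck code:\s*{re.escape(BUGCHECK_CODE)}")
    return [line for line in log_text.splitlines() if pattern.search(line)]

# Sample (fabricated) export: one BugCheck entry among unrelated events.
sample = """Event ID: 1001
Source: BugCheck
Bugcheck code: 0x1000007E
Event ID: 7036
Source: Service Control Manager"""

print(find_bugcheck_events(sample))  # one matching line
```

<p>On a real fleet you would feed this exported logs from endpoint management tooling rather than a string literal, but the triage logic is the same: match on the stop code, then count affected hosts.</p>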
<h3>Industry-Wide Implications</h3>
<p>The incident exposed several critical vulnerabilities in modern cybersecurity infrastructure:</p>
<ul>
<li><strong>Cloud-Dependent Security</strong>: The reliance on cloud-based security updates created a single point of failure</li>
<li><strong>Automatic Update Risks</strong>: Rapid deployment capabilities became a liability</li>
<li><strong>Incident Response Gaps</strong>: Many organizations lacked offline recovery procedures</li>
</ul>
<p>&lt;ProConsComparison
pros={[
&quot;Incident led to improved testing protocols across the industry&quot;,
&quot;Sparked development of offline failsafe mechanisms&quot;,
&quot;Accelerated adoption of multi-vendor security strategies&quot;,
&quot;Enhanced focus on degraded-mode operations&quot;,
]}
cons={[
&quot;Exposed critical dependencies in security infrastructure&quot;,
&quot;Revealed inadequate testing of kernel-mode components&quot;,
&quot;Highlighted risks of centralized security platforms&quot;,
&quot;Demonstrated gaps in business continuity planning&quot;,
]}
/&gt;</p>
<h3>Technical Aftermath and Industry Response</h3>
<p>Major changes implemented across the cybersecurity industry:</p>
<pre><code class="language-python"># New industry standard for driver deployment
class DriverDeployment:
    def __init__(self):
        self.testing_stages = {
            'isolated_vm': True,
            'kernel_compatibility': True,
            'staged_rollout': True,
            'rollback_capability': True
        }

    def verify_deployment_safety(self):
        return all(self.testing_stages.values())
</code></pre>
<h3>Recovery and Resilience Strategies</h3>
<p>Organizations implemented new recovery protocols:</p>
<pre><code class="language-bash"># Example of new recovery procedure
#!/bin/bash

# 1. Detect CrowdStrike service failure
if ! systemctl is-active --quiet csfalcon; then
    # 2. Switch to fallback security mode
    systemctl start fallback-security

    # 3. Notify incident response team
    alert_ir_team &quot;CrowdStrike failure detected&quot;

    # 4. Enable enhanced monitoring
    systemctl start enhanced-monitoring
fi
</code></pre>
<h3>New Industry Standards Emerging</h3>
<p>Post-incident changes that reshaped the industry:</p>
<ol>
<li><strong>Staged Rollout Requirements</strong></li>
</ol>
<pre><code class="language-json">{
  &quot;deployment_policy&quot;: {
    &quot;initial_deployment&quot;: &quot;0.1%&quot;,
    &quot;monitoring_period&quot;: &quot;1h&quot;,
    &quot;error_threshold&quot;: &quot;0.001%&quot;,
    &quot;rollback_trigger&quot;: &quot;automatic&quot;,
    &quot;geographic_distribution&quot;: &quot;multi-region&quot;
  }
}
</code></pre>
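<p>A policy like this only matters if something enforces it. As a hedged sketch of how such enforcement might look (the <code>next_action</code> helper and its thresholds are illustrative assumptions, not a real CrowdStrike or industry API): expand the rollout only while the observed error rate stays under the threshold, otherwise trigger the automatic rollback.</p>

```python
# Illustrative staged-rollout gate, mirroring the JSON policy above.
POLICY = {
    "initial_deployment": 0.001,  # start with 0.1% of the fleet
    "error_threshold": 0.00001,   # 0.001% error rate ceiling
    "rollback_trigger": "automatic",
}

def next_action(current_fraction: float, observed_error_rate: float) -> str:
    """Decide what the deployment controller should do next."""
    if observed_error_rate > POLICY["error_threshold"]:
        return "rollback"   # automatic rollback on threshold breach
    if current_fraction >= 1.0:
        return "complete"   # full fleet already covered
    return "expand"         # healthy so far: widen the rollout

print(next_action(0.001, 0.0))   # healthy canary -> expand
print(next_action(0.001, 0.5))   # errors spiking -> rollback
```

<p>The key design point is that the rollback path requires no human decision, which is exactly what was missing when the faulty driver propagated globally.</p>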
<ol start="2">
<li><strong>Enhanced Testing Protocols</strong></li>
</ol>
<pre><code class="language-yaml">driver_validation:
  kernel_compatibility:
    - windows_versions: [&quot;10&quot;, &quot;11&quot;]
    - kernel_builds: [&quot;all_current&quot;]
    - memory_models: [&quot;standard&quot;, &quot;low-resource&quot;]
  testing_environment:
    - isolated_vm
    - production_simulation
    - stress_testing
  validation_metrics:
    - performance_impact: &quot;&lt;5%&quot;
    - memory_usage: &quot;&lt;100MB&quot;
    - cpu_utilization: &quot;&lt;3%&quot;
</code></pre>
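<p>Validation metrics like these are simple to check mechanically. The following sketch shows one way a CI gate might do it; the threshold names and the <code>passes_validation</code> helper are assumptions for illustration, not part of the YAML spec above:</p>

```python
# Illustrative gate: every observed measurement must stay under its ceiling.
THRESHOLDS = {
    "performance_impact_pct": 5.0,   # "<5%"
    "memory_usage_mb": 100.0,        # "<100MB"
    "cpu_utilization_pct": 3.0,      # "<3%"
}

def passes_validation(observed: dict) -> bool:
    """True only if every metric is strictly below its threshold."""
    return all(observed[key] < limit for key, limit in THRESHOLDS.items())

print(passes_validation({"performance_impact_pct": 2.1,
                         "memory_usage_mb": 64.0,
                         "cpu_utilization_pct": 1.5}))  # within all limits
```
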
<p>On <strong>19 July 2024</strong>, CrowdStrike, a major player in cybersecurity known for its Falcon cloud-based endpoint security, faced a monumental global outage.</p>
<p><strong>The result?</strong> A cascade of blue screens of death (BSOD) that crippled airlines, hotels, live broadcasts, medical equipment, and more. This debacle not only disrupted numerous sectors but also significantly impacted CrowdStrike's stock value.</p>
<h3>The CrowdStrike outage: when the digital world crashed</h3>
<p>What started as a routine software update quickly spiraled into a global catastrophe. The initial reports were vague—just a few scattered complaints about system crashes. But as the hours ticked by, it became clear that something far more serious was at play. Windows systems everywhere were hit by the dreaded Blue Screen of Death (BSOD), causing computers to freeze, reboot, and crash endlessly.</p>
<p>The situation escalated to the point where critical infrastructure was affected. Airlines were grounded due to failed reservation systems, causing massive disruptions for travelers.</p>
<p>Hospitals faced significant challenges as medical equipment and patient monitoring systems went offline, raising serious concerns about patient safety. Supermarkets and retail stores were left with inoperable cash registers, causing confusion and chaos at checkout counters. In short, everyday life was thrown into disarray.</p>
<blockquote>
<p>The issue arose from unvetted updates pushed to the Falcon software (EDR)&lt;Footnote id={1} text=&quot;CrowdStrike blog post about Falcon Sensor issue targeting CrowdStrike customers&quot; link=&quot;https://www.crowdstrike.com/blog/falcon-sensor-issue-use-to-target-crowdstrike-customers/&quot; /&gt;
, which wreaked havoc on Falcon agents across Microsoft Windows systems.</p>
</blockquote>
<p><em>In the past decade, the internet has surged dramatically, driven notably by the COVID-19 pandemic, which accelerated the shift to remote work, online education, and digital services. Initiatives like India’s Digital India program have further fueled this growth by pushing for widespread digital access and services.</em></p>
<p><em>Additionally, government efforts such as the EU's Digital Single Market and China's Digital Currency Electronic Payment (DCEP), along with advancements in cloud computing, the rise of digital currencies, and the expansion of IoT devices, have all contributed to this unprecedented internet expansion.</em></p>
<p>&lt;Iframe src=&quot;https://ourworldindata.org/grapher/number-of-internet-users?tab=map&quot; /&gt;</p>
<p>&lt;Callout&gt;
<strong>But what if it's down?</strong> July 19th, 2024 will forever be known as
&quot;International Blue Screen of Death Day&quot;—CrowdStrike’s disastrous update that
caused a global system meltdown and worldwide halt.
&lt;/Callout&gt;</p>
<h3>Then the experts jumped in</h3>
<p>As the scale of the disaster became apparent, cybersecurity experts and organizations like MITRE ATT&amp;CK sprang into action. They quickly identified the issue as a &quot;Cloud-based EDR Faulty Driver Update DoS&quot;—a new and alarming technique that used a faulty update to disrupt entire networks. The revelation left the cybersecurity community in shock.</p>
<p>The term &quot;Cloud-based EDR&quot; refers to endpoint detection and response systems that rely on cloud-based updates. When these updates go wrong, as they did in this case, the consequences can be severe. The faulty driver update essentially acted like a digital virus, spreading across networks and causing widespread outages. The situation was so severe that it made headlines around the world.</p>
<p>&lt;Callout emoji=&quot;🤯&quot;&gt;
In no time, people were scrambling for cash as stores struggled to process
transactions. It was a full-blown technological disaster.
&lt;/Callout&gt;</p>
<h3>Who’s to blame?</h3>
<p>The search for the culprit became a media frenzy. Speculation ran rampant: Was it sabotage? Was a rogue intern trying to make a name for themselves? Or was it a more complex and nefarious attack by a sophisticated hacker group? Theories abounded, but the true cause remained elusive.</p>
<p>The cybersecurity community engaged in heated debates about the implications of the incident. Many pointed to the need for a &quot;No-Fault Culture&quot; where mistakes are viewed as opportunities for learning rather than grounds for blame. Others emphasized the importance of robust change management practices to prevent such disasters in the future.</p>
<p><strong>But whatever it was, CrowdStrike's (CRWD) stock price got crushed</strong>&lt;Footnote id={2} text=&quot;LinkedIn post by Alistair Ross about wise-cracks and personal reflections&quot; link=&quot;https://www.linkedin.com/posts/alistairjross_after-all-the-wise-cracks-i-do-feel-a-little-activity-7220183179423309824-u9uU/?utm_source=share&amp;utm_medium=member_desktop&quot; /&gt;</p>
<p>&lt;Image
src=&quot;https://www.investopedia.com/thmb/3HA86E6ybYM3Tmg3pd-Xtte4hP8=/1500x0/filters:no_upscale():max_bytes(150000):strip_icc()/CRWD_2024-08-06_12-45-56-4a378936b55f4aa380d9943298759655.png&quot;
alt=&quot;Crowdstrike (CRWD) stock crash&quot;
/&gt;</p>
<h3>A day that was too blue</h3>
<p>So here’s to &quot;International Blue Screen of Death Day&quot;—a day that will live on in infamy. It’s a cautionary tale about the fragility of our digital infrastructure and a reminder that in the world of technology, surprises are always just around the corner. Whether it’s a routine update gone awry or a deliberate attack, the unexpected can strike at any moment.</p>
<blockquote>
<p>“In technology, the unexpected is always just a heartbeat away. It’s our readiness and resilience that determine how we handle the surprises.”</p>
</blockquote>
<h3>What the cybersecurity community is saying</h3>
<p>The CrowdStrike outage quickly became a hot topic across cybersecurity forums on LinkedIn, X, Reddit, and Slack. Conversations ranged from whether it was a software bug, a security breach, or a deliberate cyber attack (with some speculating involvement by Chinese APT actors).</p>
<p>One amusing yet revealing narrative was the idea that an intern might have been responsible for the outage. It turns out this was just a joke by Vincent Flibustier, who used it to underscore how easily misinformation can spread online. Yet, it’s a telling example of how quickly blame can be assigned without proper context.</p>
<blockquote>
<p>&quot;Mistakes are part of the landscape. No matter how robust a company’s systems are, errors will occur.&quot;</p>
</blockquote>
<p>On a more serious note, CrowdStrike’s denial of the intern theory via TeamBlind was an important correction. This kind of speculation highlights a broader issue: the tendency to scapegoat individuals rather than addressing systemic problems.
Rick C&lt;Footnote id={3} text=&quot; Rick C 'no-fault situation' LinkedIn Post&quot; link=&quot;https://www.linkedin.com/posts/rickatx_nofault-activity-7220113420430323712-RXZ0/?utm_source=share&amp;utm_medium=member_desktop&quot; /&gt;
shared a personal anecdote on LinkedIn about a similar incident from his early career, where a BSOD occurred during a company-wide update. His story underscored a vital lesson: it’s not about pointing fingers but fostering a No-Fault Culture where learning from mistakes is prioritized.
Gil Barak&lt;Footnote id={4} text=&quot;LinkedIn post by Gil Barak about CrowdStrike and Microsoft cybersecurity&quot; link=&quot;https://www.linkedin.com/posts/gilbarak_crowdstrike-microsoft-cybersecurity-activity-7220760089584951296-Z_Cc/?utm_source=share&amp;utm_medium=member_desktop&quot; /&gt;
also weighed in, emphasizing that the cybersecurity industry’s success hinges on community collaboration. Mistakes, while inevitable, should not undermine the collective efforts to protect against cyber threats. Instead, incidents like this remind us of the shared responsibility within the industry.</p>
<p>&lt;Callout type=&quot;warning&quot;&gt;
The incident raised significant concerns about the reliability of critical
security updates and their potential impact on global infrastructure.
&lt;/Callout&gt;</p>
<p>&lt;Image
src=&quot;https://www.thegadgetman.org.uk/wp-content/uploads/2024/07/crowdstrike.jpg&quot;
alt=&quot;CrowdStrike's update&quot;
/&gt;</p>
<h3>Reflecting on the CrowdStrike outage</h3>
<p>Reflecting on the CrowdStrike outage&lt;Footnote id={5} text=&quot;CrowdStrike blog post on likely e-crime actor capitalizing on Falcon Sensor issues&quot; link=&quot;https://www.crowdstrike.com/blog/likely-ecrime-actor-capitalizing-on-falcon-sensor-issues/&quot; /&gt;
, several thoughts come to mind. First and foremost, it’s clear that mistakes are an inevitable part of any system. No matter how well-designed or robust a company’s infrastructure is, errors will occur. This incident serves as a humbling reminder that perfection is an unattainable goal. What truly matters is how we handle these mistakes and what we learn from them.
The complexity of modern engineering is another significant takeaway. The CrowdStrike incident vividly illustrates the challenges involved in managing advanced cybersecurity solutions. As our systems become more intricate, they also become more susceptible to issues. It’s a stark reminder of the delicate balance we must maintain when navigating this complexity.</p>
<p>This outage also highlights the critical importance of planning and preparedness. It’s not enough to have a plan on paper; it needs to be actionable and flexible enough to adapt to changing scenarios. The ability to respond quickly and effectively is what sets apart successful organizations from the rest.</p>
<p>Furthermore, the aftermath of the outage underscores the value of professionalism. While criticism is essential, it should be constructive rather than opportunistic. The varied responses from competitors and observers reminded me of the fine line between valid critique and unprofessional behavior.</p>
<p>Finally, the CrowdStrike incident reinforced the power of community in the cybersecurity field. Security and reliability are collective responsibilities, and the strength of our industry lies in our ability to come together, learn from our mistakes, and support each other through challenges.</p>
<p>&lt;FootnoteList /&gt;</p>
]]></content:encoded>
            
            
          </item><item>
            <title>Right, So Why Am I Still Coding at 1 AM?</title>
            <link>https://somritdasgupta.in/blog/Right-So-Why-Am-I-Still-Coding-at-1-AM</link>
            <description>Let&apos;s be honest. The workday is over, but the CLI is still open. Why I code for fun, and why it&apos;s not as crazy as it sounds.</description>
            <pubDate>Sun, 28 Apr 2024 00:00:00 GMT</pubDate>
            <guid>https://somritdasgupta.in/blog/Right-So-Why-Am-I-Still-Coding-at-1-AM</guid>
            
            
            <category>career</category><category>development</category><category>opinions</category>
            <content:encoded><![CDATA[
<p>The clock hits 1 AM. I've just spent the last two hours wrestling with a legacy Java service that has more XML configuration than actual logic. The final PR is up, Slack is closed, and my brain feels like a wrung-out sponge. The sane, logical human response is to shut the lid on the work laptop, grab some non-oily food, watch the football highlights, and let my brain cells go into low-power mode.</p>
<p>But that's not what's happening. The work machine is off, but my personal Mac is on. The terminal is open, and I'm elbow-deep in a Rust script to scrape financial data because I got annoyed with an API's rate limits.</p>
<p>Why? It's the ultimate question. Are we just gluttons for punishment? Addicted to the screen's glow? Maybe. But the truth is, this isn't unpaid overtime. This is the antidote to my day job.</p>
<h3>My Sandbox, My Rules, My Mistakes</h3>
<p>Let's be brutally honest about the 9-to-5. Even in the best jobs, you're playing in someone else's sandbox. You're a highly-skilled professional working within a labyrinth of constraints: the tech debt from a decade ago, the architectural decisions made by people who have long since left the company, the coding style guide that insists on <code>kebab-case</code> for some reason. You're a cog. A vital, well-compensated cog, but a cog nonetheless.</p>
<p>My personal project folder is my escape from that. It's my own private universe where I am the benevolent, and occasionally reckless, dictator.</p>
<p><strong>It's a vacation from technical debt.</strong> My day job is often about carefully navigating and paying down debt. My side project starts with a beautiful, pristine <code>git init</code>. There is no legacy code. There are no existing users to disappoint. It's a clean slate—a feeling so rare and wonderful it's almost therapeutic.</p>
<p><strong>I have absolute, unquestioned control.</strong> Last month, I spent an entire weekend over-engineering a smart home dashboard using Docker, Grafana, and a Raspberry Pi, just to display the temperature in my room. The stock app does it in one tap. But my dashboard has a slick, custom font and updates in real-time via WebSockets. It was a ridiculous, pointless, and deeply satisfying endeavor.</p>
<p>&lt;Callout emoji=&quot;💡&quot;&gt;
This is the purest form of problem-solving. There's no bureaucracy. No
meetings to &quot;align on the go-forward strategy.&quot; It's just me, a problem I
invented, and the raw, uncut dopamine hit when the damn thing finally compiles
and runs. It's a video game where I write the code for the game <em>and</em> get the
high score.
&lt;/Callout&gt;</p>
<h3>Down the Open-Source Rabbit Hole</h3>
<p>This solitary obsession is often the gateway drug to something bigger: the global, chaotic, and wonderful world of open source. It never starts as some grand ambition. It starts with a tiny annoyance. A typo in the documentation. A misleading error message.</p>
<p>You think, &quot;Someone should fix that.&quot; Then you think, &quot;Wait, <em>I</em> can fix that.&quot;</p>
<p>I'll never forget the sheer terror of my first PR. I was convinced I was about to be flamed into oblivion for missing some unwritten rule. I clicked &quot;Create pull request&quot; and braced for impact, expecting a comment like, &quot;Did you even read the contribution guidelines, you amateur?&quot;</p>
<p>Instead, what I got was: &quot;Hey, thanks for catching this! Good spot. Could you add a quick test case for it?&quot; It was welcoming. It was constructive. And when that PR was finally merged, it was a bigger thrill than shipping a major feature at work. My code—my tiny, insignificant change—was now part of a tool used by thousands of people.</p>
<p>And it's not just about code. Sometimes the best contribution is triaging new issues, improving the documentation, or just helping someone else in a discussion forum. It's a community where your reputation is built on the quality of your ideas and your willingness to help.</p>
<p>This world operates on a different currency. The motivations are night and day compared to corporate life.</p>
<p>&lt;Table
data={{
headers: [&quot;Factor&quot;, &quot;Day Job Project&quot;, &quot;Open Source Contribution&quot;],
rows: [
[
&quot;The Goal&quot;,
&quot;Ship the feature on the roadmap, on time.&quot;,
&quot;Fix the one thing that's broken or annoying me.&quot;,
],
[
&quot;The Code&quot;,
&quot;Follows established patterns, is risk-averse.&quot;,
&quot;Must be clean, well-tested, and justify its existence.&quot;,
],
[
&quot;The Feedback&quot;,
&quot;Polite, structured, sometimes vague.&quot;,
&quot;Direct, brutally honest, but almost always educational.&quot;,
],
[
&quot;The Reward&quot;,
&quot;Paycheck and maybe a 'good job' on Slack.&quot;,
&quot;A 'thank you' from a stranger and your name in the <code>git log</code>.&quot;,
],
],
}}
/&gt;</p>
<h3>My Personal R&amp;D Department</h3>
<p>I'll be blunt: a huge part of this is about career self-preservation. The tech world moves at a terrifying speed. The framework that's hot today is legacy tomorrow. Your company isn't going to halt production for six months so everyone can learn the new &quot;Next.js killer.&quot;</p>
<p>So, you have two choices: become a dinosaur or run your own R&amp;D department on nights and weekends.</p>
<p>At work, if I want to introduce a new library, it involves a proposal, three meetings, a security review, and a presentation to an architectural committee. On my side project? It's <code>yarn add cool-new-lib</code> and I'm testing it 30 seconds later. I can figure out if it's genius or garbage on my own time.</p>
<p>This interactive component captures the essence of those late-night coding sessions - tracking commits, coffee consumption, and that unique productivity that comes with coding in the quiet hours:</p>
<p>&lt;LiveCode mode=&quot;preview&quot; fileNames={[&quot;LateNightCoding.js&quot;]} template=&quot;react&quot; /&gt;</p>
<p>This isn't just &quot;learning.&quot; It's hands-on, practical research. It's how I form opinions that are based on experience, not just on a Hacker News comment thread. It’s my hedge against stagnation.</p>
<h3>So, Is It Really Worth It?</h3>
<p>This all sounds great, but let's not romanticize it. This lifestyle is a double-edged sword. It demands real sacrifice.</p>
<p>&lt;ProConsComparison
pros={[
&quot;You learn new tech way faster than at any job.&quot;,
&quot;You have total creative and technical control.&quot;,
&quot;It builds a portfolio that proves you're passionate.&quot;,
&quot;It's genuinely fun if you're a puzzle-solver.&quot;,
&quot;You can connect with awesome people in open source.&quot;,
]}
cons={[
&quot;Burnout is a massive, ever-present risk.&quot;,
&quot;It eats into your free time for other hobbies and life.&quot;,
&quot;The 'always-on' mindset can be unhealthy as hell.&quot;,
&quot;Your projects folder becomes a graveyard of good intentions.&quot;,
&quot;Explaining 'yak shaving' to a non-tech friend is impossible.&quot;,
]}
/&gt;</p>
<p>That &quot;graveyard of unfinished projects&quot; isn't a failure, though. It's a library of lessons. Each one taught me something before I got distracted by the next shiny thing. That's a feature, not a bug.&lt;Footnote id={1} text=&quot;Or so I tell myself to sleep at night.&quot; /&gt;</p>
<p>So, why am I still sitting here at 11 PM, staring at a screen?</p>
<p>It’s not some grand, poetic calling. It’s simpler. The day job is where I apply my skills. My side projects are where I forge them. It’s how I stay curious, how I keep from getting cynical, and how I remind myself that I don't just sling code for a paycheck—I genuinely love this stuff.</p>
<p>Okay, this has been a solid bit of procrastination. But this Rust compiler isn't going to argue with itself. Back to it.</p>
<p>&lt;FootnoteList /&gt;</p>
]]></content:encoded>
            
            
          </item><item>
            <title>On mark&apos;s llama-verse</title>
            <link>https://somritdasgupta.in/blog/llama-verse</link>
            <description>Mark Zuckerberg&apos;s all about LLaMA now, what&apos;s up with that? What happened to his metaverse ambition?</description>
            <pubDate>Sun, 07 Jan 2024 00:00:00 GMT</pubDate>
            <guid>https://somritdasgupta.in/blog/llama-verse</guid>
            
            
            <category>ai</category><category>opinions</category><category>industry</category>
            <content:encoded><![CDATA[<p>So, LLaMA has become a favorite in the developer community as a top-choice LLM to work with. If not the favorite, it's definitely a top pick for many. But why are devs choosing it over the many popular LLMs already available? Well, for starters, it's open-source, turning it into a playground where developers can experiment and customize freely. Unlike other AI models like &lt;u&gt;&lt;a href=&quot;https://ai.plainenglish.io/anthropic-dominates-openai-a-side-by-side-comparison-of-claude-3-5-sonnet-and-gpt-4o-8cca145a466f&quot;&gt;<strong>Anthropic's Claude or OpenAI's GPT</strong>&lt;/a&gt;&lt;/u&gt;, which are often locked behind paywalls, LLaMA is open for anyone to try out and customize. You can even scale it up for your business. <strong>But...</strong> if your business grows to make more than $700 million, Meta, the company behind LLaMA, will require you to get a <strong>commercial license</strong>. And honestly, a technology that can help grow a $700 million business is no small opportunity, and worth a shot.</p>
<blockquote>
<p><strong>&quot;Open-source AI is the future. By making LLaMA accessible, Meta is not just advancing technology but also fostering a community of innovation.&quot;</strong><br>
— Tech Analyst</p>
</blockquote>
<h3>Open-source is a game-changer...</h3>
<p>I'm calling open-source a <strong>game-changer</strong>&lt;Footnote id={1} text=&quot;Meta’s new LLama model could be a game changer&quot; link=&quot;https://finance.yahoo.com/news/meta-llama-model-could-game-182156325.html&quot; /&gt; because it allows for incredible flexibility and creativity. Developers can tweak and enhance LLaMA in ways we might not even have imagined yet. This open approach encourages a vibrant community to push the boundaries of what’s possible, and there are plenty of independent talents eager to get hands-on with LLaMA.</p>
<p>However, there’s a catch. Running LLaMA can sometimes be more expensive than expected. For example, using an A100 GPU to run LLaMA costs about $4.20 per million tokens. In contrast, OpenAI’s GPT-3.5 Turbo is cheaper at $2 per million tokens.</p>
<ul>
<li>
<p><strong>Hardware Costs:</strong> LLaMA requires high-performance hardware, which can add up.</p>
</li>
<li>
<p><strong>Customization Complexity:</strong> While open-source customization is a huge advantage, it also adds layers of complexity and cost.</p>
</li>
</ul>
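<p>To make that cost comparison concrete, here's a back-of-the-envelope calculator using the per-million-token figures quoted above. The helper function and the 50M-token monthly workload are illustrative assumptions, not published pricing tools:</p>

```python
# Per-million-token prices quoted in this post (USD).
PRICE_PER_MILLION = {
    "llama_a100": 4.20,   # self-hosted LLaMA on an A100 GPU
    "gpt35_turbo": 2.00,  # OpenAI GPT-3.5 Turbo
}

def monthly_cost(model: str, tokens_per_month: int) -> float:
    """Rough monthly spend for a given token volume."""
    return PRICE_PER_MILLION[model] * tokens_per_month / 1_000_000

tokens = 50_000_000  # hypothetical 50M tokens per month
print(monthly_cost("llama_a100", tokens))   # LLaMA on A100: ~$210
print(monthly_cost("gpt35_turbo", tokens))  # GPT-3.5 Turbo: ~$100
```

<p>At this scale the hosted model is roughly half the price, which is the catch the post describes: open-source freedom doesn't automatically mean lower bills.</p>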
<p>&lt;Image
src=&quot;https://tii.imgix.net/production/articles/11630/504ef468-9520-4cf7-882f-9e5896f17068-s6kmbP.png?auto=compress&amp;fit=crop&amp;auto=format&quot;
alt=&quot;Open-source vs. Closed-source LLMs Pricing&quot;
/&gt;</p>
<p>&lt;Callout&gt;
Even though OpenAI’s GPT-4 model is priced at $60 per million tokens, running
LLaMA on different hardware setups can sometimes end up being more expensive
than using GPT-3.5 Turbo.
&lt;/Callout&gt;</p>
<h3>Where's metaverse?</h3>
<p>The reason &lt;u&gt;<strong>Facebook changed its name to Meta</strong>&lt;/u&gt;&lt;Footnote id={2} text=&quot;Why has facebook changed it’s name to meta&quot; link=&quot;https://www.newscientist.com/article/2295438-why-has-facebook-changed-its-name-to-meta-and-what-is-the-metaverse/&quot; /&gt; was its bet on being the metaverse's first mover; as it was described and portrayed, the metaverse was the next big thing. The idea was a virtual universe where we could live, work, and play through our virtual avatars. But, well, things haven’t quite panned out that way. Despite a hefty investment, the metaverse is still struggling to gain traction, compounded by falling stock prices.</p>
<blockquote>
<p><strong>&quot;The metaverse is still a niche market. Until VR becomes as commonplace as smartphones, it might remain a lofty dream.&quot;</strong><br>
— Nick Clegg, Medium Blogger</p>
</blockquote>
<p>Here’s a look at why the metaverse isn’t quite hitting the mark:</p>
<ul>
<li>
<p><strong>Gear Cost:</strong> VR headsets are expensive—around $1,000 each—which makes them a hard sell for many people.</p>
</li>
<li>
<p><strong>Technological limitations:</strong> Creating a seamless and immersive virtual world is no easy task. There are significant hurdles in both hardware and software.</p>
</li>
<li>
<p><strong>Limited Adoption:</strong> The metaverse hasn’t really caught on yet. Only about 10% of U.S. adults own a VR headset, according to Pew Research.</p>
</li>
</ul>
<blockquote>
<p><strong>&quot;Despite Meta’s massive investments, the metaverse is still a niche market. Until VR becomes as common as smartphones, it might remain a lofty dream.&quot;</strong><br>
— Pew Research Center</p>
</blockquote>
<h3>How AI Could Enhance the Metaverse</h3>
<p>Mark isn’t giving up on the metaverse; instead, he’s pairing it with LLaMA. Here’s how AI might boost the metaverse:</p>
<ul>
<li>
<p><strong>Better Experiences:</strong> AI could make virtual worlds feel incredibly real and immersive.</p>
</li>
<li>
<p><strong>Simplified Development:</strong> AI could help developers create richer virtual environments with less effort.</p>
</li>
</ul>
<h3>Bold move or recovery move?</h3>
<p>Meta’s strategy involves using AI to tackle some of the metaverse’s biggest challenges. If AI can improve the metaverse, it could become a much more engaging and practical reality.</p>
<p>Of course, there are risks involved:</p>
<ul>
<li>
<p><strong>Security Issues:</strong> Open-source AI can be misused for things like deepfakes and cyberattacks.</p>
</li>
<li>
<p><strong>Resource Allocation:</strong> Splitting resources between AI and the metaverse could stretch Meta’s budget too thin.</p>
</li>
</ul>
<p>The metaverse is still developing, and skepticism remains. Without a major breakthrough, it might stay a niche market. The reliance on VR technology, which isn’t widely adopted yet, adds to the uncertainty.</p>
<p>Mark’s approach is like playing a high-stakes game—investing in both LLaMA and the metaverse. If one of these bets doesn’t pan out, it could be tough. Imagine Mark at the end of a long day, looking at spreadsheets full of losses, and thinking, &quot;Looks like we need to cut back on some expenses.&quot;</p>
<p>The tech world is always changing, and that makes the bet worth taking! Personally, I think LLaMA has the potential to shake up AI, but the metaverse still has a long road ahead, mostly because it needs substantial buy-in from the consumer side as well.</p>
<p>If both LLaMA and the metaverse fail to succeed, Meta might face significant financial challenges. But we know that Mark's previous ventures have often been labeled madness today only to be called genius moves tomorrow. Taking measured risks has always been the differentiator between a good business and a great business.</p>
<p>&lt;FootnoteList /&gt;</p>
]]></content:encoded>
            
            
          </item><item>
            <title>The developer&apos;s way of life</title>
            <link>https://somritdasgupta.in/blog/The-developers-way-of-life</link>
            <description>We talk about salaries and tech stacks, but we ignore the real challenges: the crushing mental load, the constant imposter syndrome, and the slow creep of burnout. Let&apos;s get honest about the invisible price we pay.</description>
            <pubDate>Tue, 11 Jul 2023 00:00:00 GMT</pubDate>
            <guid>https://somritdasgupta.in/blog/The-developers-way-of-life</guid>
            
            
            <category>career</category><category>development</category><category>opinions</category>
            <content:encoded><![CDATA[<p>On the surface, we have it made, right? The salaries are great, the demand for our skills is off the charts, and we get to build the future from a laptop. From the outside, the life of a software engineer looks like a well-structured, logical, and highly rewarding career. It’s a clean, polished narrative.</p>
<p>But that's the surface. The reality of this job is messy, chaotic, and carries a significant invisible tax. It's the stuff we don't talk about at stand-up, the challenges that don't show up on a performance review, but affect every line of code we write.</p>
<p>I'm talking about the constant, crushing mental load; the little voice in your head that says you're a fraud; and the slow, creeping fog of burnout that can drain all the passion from the craft. This isn't a sign of weakness. This is an occupational hazard.</p>
<h3>The Mental Load: The 100 Browser Tabs in Your Brain</h3>
<p>Ever tried to explain to a non-dev what your day is like? It's almost impossible. Because the hardest part of our job isn't the typing; it's the thinking. And more specifically, the <em>holding</em>.</p>
<p>At any given moment, a developer is holding a staggering amount of context in their brain. It’s like having 100 browser tabs open, all at once:</p>
<ul>
<li>Tab 1: The legacy codebase you're working in, with all its quirks and undocumented features.</li>
<li>Tab 2: The new API you have to integrate with, and its slightly-wrong documentation.</li>
<li>Tab 3: The specific business logic for the ticket you're currently on.</li>
<li>Tab 4: The state of the CI/CD pipeline and why it failed this morning.</li>
<li>Tab 5: A vague memory of a Slack conversation from three weeks ago that's suddenly relevant.</li>
<li>Tab 6-100: Every other micro-decision, potential edge case, and architectural pattern you need to consider.</li>
</ul>
<p>Now, imagine you're deep in thought, juggling all of this, and someone pings you with, &quot;Hey, can you look at this for a sec?&quot; That's not a five-minute interruption. That's a catastrophic crash. You have to painstakingly save the state of all 100 tabs, open 100 new ones for their problem, solve it, and then try to reload your original context, knowing damn well you've lost half of it.</p>
<p>&lt;Callout emoji=&quot;🧠&quot;&gt;
This is why you'll find us staring blankly at the wall for 20 minutes. We're
not slacking off. We're garbage-collecting our own brain. The mental load is
relentless, and it doesn't clock out at 6 PM.
&lt;/Callout&gt;</p>
<h3>Imposter Syndrome: The &quot;Any Day Now&quot; Feeling</h3>
<p>Here’s a dirty little secret of software engineering: the more you know, the more you feel like a fraud. You'd think it would be the opposite. You'd think with experience comes confidence. And it does, in some ways. But with experience also comes a terrifyingly clear picture of the vast, infinite ocean of things you <em>don't</em> know.</p>
<p>Imposter syndrome is the persistent, nagging feeling that you don't belong here, that your successes are just luck, and that any day now, everyone's going to find out you're just faking it.</p>
<p>It's triggered by the simplest things:</p>
<ul>
<li>A junior dev solves a bug in 10 minutes that you've been stuck on for hours.</li>
<li>You're in a planning meeting, and everyone is throwing around an acronym you've never heard, so you just nod along while frantically Googling it under the table.</li>
<li>You look at the source code for a popular open-source library and think, &quot;I could never write something this clean.&quot;</li>
<li>A PR comment feels particularly harsh, and your brain translates &quot;This could be more efficient&quot; into &quot;You're a terrible programmer.&quot;</li>
</ul>
<p>This isn't a junior-level problem. I know staff engineers and architects at major tech companies who confess to feeling this way. In a field that changes this fast, we are all professional beginners. The most powerful phrase a senior dev can learn isn't some complex algorithm; it's, &quot;I don't know, but I can figure it out.&quot;</p>
<h3>Burnout: The Slow Fade to Gray</h3>
<p>People think burnout is about working too many 80-hour weeks. That's part of it, sure. But the more insidious, more common form of burnout isn't an explosive flameout. It's a slow, quiet fade to gray.</p>
<p>It’s the gradual erosion of your passion and energy. It’s caused by a thousand little cuts, not one big wound.</p>
<ul>
<li><strong>The Culture of &quot;Urgency&quot;:</strong> When every ticket is a P0 and every project is a &quot;fire,&quot; your adrenaline system is constantly maxed out. That's not sustainable. Eventually, your brain just goes numb to it.</li>
<li><strong>Lack of Agency:</strong> Being treated like a &quot;code monkey.&quot; You're handed fully-specced-out tickets with no input on the &quot;why&quot; or &quot;how.&quot; You're just a pair of hands, and your creative problem-solving brain starts to atrophy.</li>
<li><strong>The Sisyphean Task:</strong> Working on a legacy system where every bug you fix seems to create two new ones. You feel like you're pushing a boulder up a hill every day, only to have it roll back down. You're busy, but you're making no meaningful progress.</li>
</ul>
<p>The biggest red flag for burnout isn't feeling stressed. It's feeling <em>nothing</em>. It's when you open your IDE to work on a technically interesting problem, and you just feel... tired. A deep, soul-level tired. The passion is gone. That's when you know you're in trouble.</p>
<h3>So, What's the Fix? (Spoiler: It's Not &quot;More Yoga&quot;)</h3>
<p>Telling a developer with burnout to &quot;take a vacation&quot; is like putting a band-aid on a broken leg. It might help for a week, but you'll come back to the same broken environment.</p>
<p>The real fixes are less glamorous and much harder.</p>
<ol>
<li><strong>To Manage Mental Load:</strong> Be ruthless about protecting your focus. Block out &quot;no-meeting&quot; time on your calendar. Turn off notifications. Document your work so you can offload it from your brain. Learn to say &quot;no,&quot; or at least, &quot;not right now.&quot;</li>
<li><strong>To Fight Imposter Syndrome:</strong> Talk about it. Seriously. Find a trusted colleague and say, &quot;Hey, do you ever feel like you have no idea what you're doing?&quot; I guarantee you, their answer will be &quot;Yes, all the time.&quot; Also, keep a &quot;brag document&quot;—a private list of your accomplishments. When you feel like a fraud, read it.</li>
<li><strong>To Prevent Burnout:</strong> Set hard boundaries. Log off. Don't check Slack on your phone at 10 PM. Push back on unrealistic deadlines. And most importantly, recognize that if the problem is a toxic environment, the only real solution is to <em>change the environment</em>. Your health is more important than any job.</li>
</ol>
<p>These aren't weaknesses. They are the predictable outcomes of a profession that demands an incredible amount of invisible, high-stakes mental labor. You're not broken for feeling this way. You're a software engineer.</p>
<p>We're all in this together. Now go close some of those mental tabs for a bit. You've earned it.</p>
]]></content:encoded>
            
            
          </item><item>
            <title>Developers co-pilot</title>
            <link>https://somritdasgupta.in/blog/developers-copilot</link>
            <description>A look into how generative AI, especially when combined with Retrieval-Augmented Generation (RAG), is revolutionizing development tasks.</description>
            <pubDate>Thu, 11 May 2023 00:00:00 GMT</pubDate>
            <guid>https://somritdasgupta.in/blog/developers-copilot</guid>
            
            
            <category>ai</category><category>development</category><category>opinions</category>
<content:encoded><![CDATA[<p>Alright, let's talk about this one: the <strong>developer's copilot</strong>. Generative AI has come a long way from being a neat little tool that generates sentences or images. It's now something that can seriously assist developers in their day-to-day tasks—cutting through the mundane and helping with everything from writing boilerplate code to debugging complex systems. The future of coding is shifting, and I've seen it firsthand, both in my own work and in what others are experiencing.</p>
<p>Let’s think about how developers approach their tasks. In the past, so much time was spent doing repetitive tasks—setting up basic structures, building out templates, writing test cases, or even formatting code. Sure, it’s part of the job, but it doesn’t always add a ton of value to the final product. And that’s where AI steps in. Take GitHub Copilot, for example—it’s like having a colleague sitting next to you who can write code as you’re typing your thoughts out loud. Hmm, it’s not just that though, is it? It <em>understands</em> your context, at least to an extent. I’ve been in situations where I was struggling to refactor some code or implement a function, and Copilot just threw up suggestions that were not only functional but pretty close to what I had in mind.</p>
<p>&lt;Callout emoji=&quot;💡&quot;&gt;
Generative AI tools like Copilot are transforming how developers work, by
speeding up mundane tasks and helping us focus on higher-level design.{&quot; &quot;}
&lt;/Callout&gt;</p>
<p>And it’s not just me. Developers across the globe are finding that these AI copilots are freeing up their mental bandwidth. A lot of folks I’ve chatted with or read about have similar experiences—they use it to write documentation, test cases, or even suggest better ways to structure their projects. There’s a whole spectrum of possibilities here, and that’s what makes it so compelling.</p>
<h3>RAG is making the difference</h3>
<p>But you know what really takes this to another level? Retrieval-Augmented Generation, or RAG. Hmm, think of it like this: AI copilots, as they stand now, are incredibly helpful, but what if they could be even more precise? What if they could pull specific information or code examples based on <em>exactly</em> what you’re working on? That’s what RAG does. Instead of generating generic code or suggestions, it reaches into a database, a set of documents, or even APIs you’ve been working with and then spits out answers that are not just contextually relevant but almost tailor-made for your problem.</p>
<p>For instance, there was this one time I was working on a project involving a pretty niche API—one that didn’t have a ton of documentation out there. Normally, I would’ve spent hours digging through forums or hoping someone on Stack Overflow had encountered something similar. But with an AI tool leveraging RAG, it retrieved exactly what I needed—combining code snippets from the API’s sparse documentation with examples from other projects that used similar functions. It was like magic. Well, almost—I still had to tweak a few things here and there, but it cut my research time by at least half. &lt;Footnote
id={1}
text=&quot;To learn more about RAG, check out this detailed article by HuggingFace&quot;
link=&quot;https://huggingface.co/blog/ray-rag&quot;
/&gt;</p>
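<p>The retrieval step is simple to sketch. Below is a toy illustration (nothing here reflects any real RAG library, and the document snippets are made up): documents are scored by word overlap with the query, and the best match is stuffed into the prompt that would go to the model. Real systems swap the scoring function for vector-embedding similarity, but the shape of the pipeline is the same.</p>

```javascript
// Toy RAG retrieval: score documents by word overlap with the query,
// then build a context-augmented prompt for the model.
const docs = [
  "The /v2/search endpoint accepts a `q` parameter and returns JSON.",
  "Authentication uses a bearer token in the Authorization header.",
  "Rate limits are 100 requests per minute per API key.",
];

// Count how many words from the query appear in the document.
function score(query, doc) {
  const words = new Set(query.toLowerCase().split(/\W+/));
  return doc
    .toLowerCase()
    .split(/\W+/)
    .filter((w) => words.has(w)).length;
}

// Return the k highest-scoring documents for the query.
function retrieve(query, k = 1) {
  return [...docs]
    .sort((a, b) => score(query, b) - score(query, a))
    .slice(0, k);
}

// Prepend the retrieved context to the user's question.
function buildPrompt(query) {
  const context = retrieve(query).join("\n");
  return `Context:\n${context}\n\nQuestion: ${query}`;
}

console.log(buildPrompt("How does authentication work?"));
```

In a production setup the `docs` array would be an indexed corpus of your API docs and codebase, and `score` would be cosine similarity between embeddings, but the retrieve-then-prompt flow stays identical.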
<h3>Real-World Applications</h3>
<ol>
<li><strong>Debugging</strong>: Debugging is often one of the most time-consuming aspects of software development. AI copilots can streamline this process by analyzing error messages and known bugs from repositories like GitHub Issues or Stack Overflow. They can even suggest fixes based on historical data, allowing developers to address issues more swiftly.</li>
<li><strong>Writing Test Cases</strong>: Writing test cases is another area where AI can provide substantial assistance. By understanding the structure of the code, AI tools can automatically generate a significant portion of test cases, freeing developers to focus on higher-level design and architecture tasks.</li>
<li><strong>Documentation</strong>: Developers are also using AI copilots to assist in writing documentation, which is often neglected but crucial for maintaining project clarity and onboarding new team members.</li>
<li><strong>Code Review</strong>: Imagine an AI tool that not only helps you write code but also reviews it! Some advanced AI copilots are beginning to include features that analyze your code for potential bugs or improvements before you even run it. This proactive approach can lead to cleaner code and fewer bugs down the line.</li>
<li><strong>Learning New Technologies</strong>: For developers venturing into new frameworks or languages, AI copilots can serve as an interactive learning resource. They can provide real-time examples and explanations while you code—making the learning curve less steep and more engaging.</li>
</ol>
<p>&lt;ProConsComparison
pros={[
&quot;Boosts efficiency by automating repetitive tasks.&quot;,
&quot;Provides context-aware suggestions.&quot;,
&quot;Enables faster debugging with relevant fixes.&quot;,
&quot;Facilitates writing test cases and documentation.&quot;,
]}
cons={[
&quot;Suggestions are sometimes incorrect or irrelevant.&quot;,
&quot;There’s a risk of over-relying on AI assistance.&quot;,
&quot;Can introduce errors if not carefully reviewed.&quot;,
&quot;Generates generic code when context isn't provided.&quot;,
]}
/&gt;</p>
<h3>The Beauty of Automation</h3>
<p>The beauty of AI copilots is in their ability to assist in those in-between tasks too—the things that aren't hard but time-consuming. Writing test cases, for example. If you’re anything like me, you might find testing to be one of those “necessary evils.” AI tools can take your code, understand the structure, and write a good portion of the test cases for you. It's like someone swooping in to do the housekeeping while you focus on the high-level architecture or design.</p>
<p>There's even an article on <a href="https://remoteskills.io/blog/will-github-co-pilot-replace-developers-and-kill-their-jobs">remote skills</a> &lt;Footnote id={2} text=&quot;Will github co-pilot replace developers and kill their jobs&quot; link=&quot;https://remoteskills.io/blog/will-github-co-pilot-replace-developers-and-kill-their-jobs&quot; /&gt; that discusses how Copilot is cutting down the time developers spend on writing repetitive code by a huge margin. It's crazy to think how this is already shifting workflows!</p>
<h3>Of course, there are challenges</h3>
<p>It's not all roses. Generative AI copilots can get things wrong—sometimes hilariously so—and if you're not careful, you might miss that a suggestion doesn't actually apply to your specific case. It’s a tool after all—not a sentient co-developer. That said, the more these models evolve—especially with things like RAG—the better they’ll get at understanding context and specific technical requirements.</p>
<p>Moreover, there are ethical considerations surrounding data privacy and security when using these tools in sensitive projects or proprietary environments. Developers must remain vigilant about what data they share with these AI systems.</p>
<p>If you're curious about what other developers are saying about these tools and their impact on workflows and productivity, the <a href="https://remoteskills.io/blog/will-github-co-pilot-replace-developers-and-kill-their-jobs">RemoteSkills article</a> digs into that discussion.</p>
<h3>What I think</h3>
<p>No, generative AI or tools like Copilot aren’t here to replace developers or take over any professions. They’re not meant to do that. But if used the right way, they can definitely be a huge help. These tools let us focus on more important things by handling the repetitive, less critical tasks that often take up time.</p>
<p>The real point is, developers who use AI assistants are going to have an advantage over those who don’t. It’s not about AI taking jobs—it’s about making our work easier and more productive. Developers who learn how to work with these tools will likely outperform those who stick to old methods, and that’s where the future is heading.</p>
<p>&lt;FootnoteList /&gt;</p>
]]></content:encoded>
            
            
          </item><item>
            <title>Objects as-in OOP&apos;s</title>
            <link>https://somritdasgupta.in/blog/objects-programming-languages</link>
            <description>Know the key differences and similarities between Java objects and JavaScript objects</description>
            <pubDate>Sat, 15 Apr 2023 00:00:00 GMT</pubDate>
            <guid>https://somritdasgupta.in/blog/objects-programming-languages</guid>
            
            
            <category>development</category><category>guides</category><category>opinions</category>
<content:encoded><![CDATA[<p>I must say I've always found objects to be a key part of programming, especially in Object-Oriented Programming (OOP). Objects are essentially pieces of code that represent real-world things or concepts, making it easier to build software that works well and is easy to understand.
As I keep learning, I've realized that understanding how different programming languages handle objects is important for writing efficient and maintainable code. Java and JavaScript, for example, both use objects but in different ways. While they share some similarities, there are also several key differences that can affect how we write and organize our code.</p>
<p>Let me break down what I'm actually talking about. Let's start!</p>
<h3>Pretty much everything is an object...</h3>
<p>In <strong>Java</strong>, objects are defined using classes. A class serves as a blueprint or template for creating objects, specifying the properties (fields) and behaviors (methods) that each object of that class will possess. This approach enforces a clear structure and type safety, which can be beneficial for larger applications. Here’s a simple example:</p>
<pre><code class="language-java">public class Person {
    private String name;
    private int age;
    // Constructor to initialize the object
    public Person(String name, int age) {
        this.name = name;
        this.age = age;
    }
    // Method to introduce the person
    public void introduce() {
        System.out.println(&quot;Hi, I'm &quot; + name + &quot; and I'm &quot; + age + &quot; years old.&quot;);
    }
}
</code></pre>
<p>To create an object based on this class, you would use the <code>new</code> keyword:</p>
<pre><code class="language-java">Person john = new Person(&quot;John&quot;, 30);
john.introduce(); // Output: Hi, I'm John and I'm 30 years old.
</code></pre>
<p>In this example, the <code>Person</code> class has a constructor that initializes the <code>name</code> and <code>age</code> properties. The <code>introduce</code> method allows the object to display its information.</p>
<p>In <strong>JavaScript</strong>, the approach to defining objects is more flexible and dynamic. You can create objects using object literals, constructor functions, or the <code>class</code> syntax (introduced in ES6). Here’s an example using object literals:</p>
<pre><code class="language-javascript">const person = {
  name: &quot;John&quot;,
  age: 30,
  introduce: function () {
    console.log(`Hi, I'm ${this.name} and I'm ${this.age} years old.`);
  },
};
person.introduce(); // Output: Hi, I'm John and I'm 30 years old.
</code></pre>
<p>In this case, we define a <code>person</code> object directly using curly braces <code>{}</code>, specifying its properties and methods inline. This approach allows for quick object creation and modification, making JavaScript particularly suited for rapid development and prototyping.</p>
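For completeness, here's the same person expressed with the ES6 class syntax mentioned above. It looks much closer to the Java version, though under the hood it is still sugar over JavaScript's prototype system:

```javascript
// ES6 class syntax: structurally similar to the Java Person class,
// but still prototype-based underneath.
class Person {
  constructor(name, age) {
    this.name = name;
    this.age = age;
  }
  introduce() {
    console.log(`Hi, I'm ${this.name} and I'm ${this.age} years old.`);
  }
}

const john = new Person("John", 30);
john.introduce(); // Output: Hi, I'm John and I'm 30 years old.
```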
<h3>The <em>this</em> Keyword</h3>
<p>The <code>this</code> keyword is a fundamental concept in object-oriented programming, but its behavior can vary significantly between Java and JavaScript.</p>
<p>In <strong>Java</strong>, <code>this</code> is a reference to the current object instance. It is used to access the object's properties and methods from within the object itself. The usage of <code>this</code> is straightforward and predictable:</p>
<pre><code class="language-java">public class Car {
    private String model;
    public Car(String model) {
        this.model = model; // 'this' refers to the current Car instance
    }
    public void displayModel() {
        System.out.println(&quot;The model of this car is &quot; + this.model);
    }
}
</code></pre>
<p><code>this.model</code> clearly points to the <code>model</code> of the current <code>Car</code> object. This clarity is one of the strengths of Java's object-oriented design.</p>
<p>In <strong>JavaScript</strong>, however, the value of <code>this</code> is determined by how a function is called, not by where it is defined. This can lead to unexpected behavior if you're not careful. For instance:</p>
<pre><code class="language-javascript">function showVar() {
  console.log(this.globalVar);
}
const obj = {
  globalVar: &quot;I'm from the object!&quot;,
  showVar: showVar,
};
showVar(); // Output: undefined
obj.showVar(); // Output: I'm from the object!
</code></pre>
<p>In the first <code>showVar()</code> call, <code>this</code> refers to the global object (e.g., <code>window</code> in a browser environment), and since <code>globalVar</code> is not defined there, it outputs <code>undefined</code>. In the second call, <code>obj.showVar()</code>, <code>this</code> refers to the <code>obj</code> object because <code>showVar</code> is called as a method of <code>obj</code>.</p>
<p>Understanding the behavior of <code>this</code> is crucial when working with objects in JavaScript, as it can lead to confusion and bugs if not handled properly. To mitigate these issues, developers often use arrow functions, which have no <code>this</code> of their own and instead inherit it from the surrounding lexical context. That makes them a poor choice for defining methods (an arrow function used as a method would never see the object's properties), but a great fit for callbacks inside methods:</p>
<pre><code class="language-javascript">const obj = {
  globalVar: &quot;I'm from the object!&quot;,
  showVarLater: function () {
    // The arrow function inherits 'this' from showVarLater,
    // which is obj when called as obj.showVarLater().
    setTimeout(() =&gt; {
      console.log(this.globalVar); // Output: I'm from the object!
    }, 100);
  },
};
obj.showVarLater();
</code></pre>
<h3>Inheritance: Class-based vs. Prototypal</h3>
<p>Inheritance is a fundamental concept in object-oriented programming, allowing objects to inherit properties and methods from other objects. Java and JavaScript approach inheritance differently.</p>
<p>In <strong>Java</strong>, inheritance is achieved through classes. A subclass inherits from a superclass using the <code>extends</code> keyword. This structure allows for a clear hierarchy and promotes code reuse:</p>
<pre><code class="language-java">public class ElectricCar extends Car {
    private int batteryCapacity;
    public ElectricCar(String model, int batteryCapacity) {
        super(model); // Call the constructor of the parent class
        this.batteryCapacity = batteryCapacity;
    }
    public void chargeCar() {
        System.out.println(&quot;Charging the &quot; + super.getModel() + &quot; with a battery capacity of &quot; + batteryCapacity + &quot; kWh.&quot;);
    }
}
</code></pre>
<p>In this example, <code>ElectricCar</code> inherits from <code>Car</code>, gaining access to its properties and methods while also introducing its own unique features.</p>
<p>In <strong>JavaScript</strong>, inheritance works through prototypes. Every object in JavaScript has a prototype, which is another object. When you try to access a property or method on an object, JavaScript first looks for it on the object itself, and if not found, it follows the prototype chain until it finds the desired property or method or reaches the end of the chain. Here’s an example using the <code>class</code> syntax:</p>
<pre><code class="language-javascript">class ElectricCar extends Car {
  constructor(model, batteryCapacity) {
    super(model); // Call the parent class constructor
    this.batteryCapacity = batteryCapacity;
  }
  chargeCar() {
    console.log(
      `Charging the ${this.model} with a battery capacity of ${this.batteryCapacity} kWh.`
    );
  }
}
</code></pre>
<p>Both Java and JavaScript support inheritance, but the way they implement it is quite different. Understanding these differences can help you choose the right approach depending on the language you’re using.</p>
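The prototype chain described above can also be used directly, without any class syntax, via <code>Object.create()</code>. A minimal sketch of what the <code>class</code> keyword is sugar for:

```javascript
// Prototypal inheritance without classes: objects delegate directly
// to other objects through the prototype chain.
const car = {
  describe() {
    return `A ${this.model}`;
  },
};

// electricCar's prototype is car, so it inherits describe().
const electricCar = Object.create(car);
electricCar.model = "Model S";
electricCar.charge = function () {
  return `Charging the ${this.model}`;
};

console.log(electricCar.describe()); // found on car via the prototype chain
console.log(Object.getPrototypeOf(electricCar) === car); // true
```

When `electricCar.describe()` is called, JavaScript doesn't find `describe` on `electricCar` itself, so it walks up the chain to `car`, exactly the lookup process described above.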
<h3>Object mutability is a game-changer</h3>
<p>Another important aspect of objects is mutability. In <strong>Java</strong>, objects are generally mutable, meaning you can change their state after they’ve been created. However, you can design an immutable class by making its fields <code>final</code> and exposing no setters; marking the class itself <code>final</code> additionally prevents subclasses from undermining that guarantee:</p>
<pre><code class="language-java">public final class ImmutablePerson {
    private final String name;
    public ImmutablePerson(String name) {
        this.name = name;
    }
    public String getName() {
        return name;
    }
}
</code></pre>
<p>In this case, once you create an <code>ImmutablePerson</code>, you can’t change their name. This immutability can be beneficial in multi-threaded environments, where shared data can lead to inconsistencies.</p>
<p>In <strong>JavaScript</strong>, objects are mutable by default. You can easily add, remove, or change properties at any time:</p>
<pre><code class="language-javascript">const person = {
  name: &quot;John&quot;,
  age: 30,
};
person.age = 31; // Changing the age
console.log(person.age); // Outputs: 31
</code></pre>
<p>This flexibility is great for rapid development, but it also means you need to be careful about unintended changes to your objects. To create immutable objects in JavaScript, you can use methods like <code>Object.freeze()</code>:</p>
<pre><code class="language-javascript">const person = Object.freeze({
  name: &quot;John&quot;,
  age: 30,
});
person.age = 31; // This will not change the age
console.log(person.age); // Outputs: 30
</code></pre>
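One caveat worth keeping in mind: <code>Object.freeze()</code> is shallow. It locks only the top-level properties, so nested objects remain mutable unless you freeze them too:

```javascript
// Object.freeze() is shallow: nested objects are still mutable.
const person = Object.freeze({
  name: "John",
  address: { city: "Kolkata" },
});

// Writing to a frozen property is silently ignored
// (it throws a TypeError in strict mode).
try {
  person.name = "Jane";
} catch (e) {}

// The nested object was never frozen, so this succeeds.
person.address.city = "Delhi";

console.log(person.name); // Outputs: John
console.log(person.address.city); // Outputs: Delhi
```

For deep immutability you'd need to freeze every nested object recursively, which is why libraries and linters often treat frozen objects as a convention rather than a guarantee.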
<h3>The thing you should keep in mind...</h3>
<p>As a developer, you might find yourself working with both Java and JavaScript in different contexts. Both have their own unique approaches to object manipulation, each with its own strengths and use cases.</p>
<p>In <strong>Java</strong>, objects are created using classes, which define properties and methods. Key concepts like <strong>encapsulation</strong> help protect an object's data by restricting access to its internal state, while <strong>inheritance</strong> allows classes to inherit properties and methods from other classes, promoting code reuse. <strong>Interfaces</strong> provide a way to define a contract that classes can implement, ensuring consistency across different implementations. <strong>Polymorphism</strong> enables objects of different classes to be treated as objects of a common superclass, allowing for flexible and dynamic code.</p>
<p>In <strong>JavaScript</strong>, objects are more flexible and can be created using object literals or constructors. JavaScript objects can hold multiple values as key-value pairs, making them versatile for various applications. The principles of encapsulation and inheritance also apply, although JavaScript uses prototypes for inheritance rather than classes. Polymorphism is achieved through duck typing, where the type of an object is determined by its behavior rather than its class.</p>
<p>Java's class-based inheritance and static typing provide a structured and predictable approach, while JavaScript's prototypal inheritance and dynamic typing offer flexibility and expressiveness.</p>
]]></content:encoded>
            
            
          </item><item>
            <title>The markdownX guide</title>
            <link>https://somritdasgupta.in/blog/mdx</link>
            <description>A detailed demonstration of various MDX features including callouts, tables, live code examples, and more supported in the blog</description>
            <pubDate>Sun, 01 Jan 2023 00:00:00 GMT</pubDate>
            <guid>https://somritdasgupta.in/blog/mdx</guid>
            
            
            <category>guides</category><category>development</category>
<content:encoded><![CDATA[<p>This is certainly more fun than I had expected. In this post I'll give you a short guided tour of the amazing MDX <strong><em>(the X here stands for React components)</em></strong> features and the custom MDX components I can use to compose those interesting blog posts that I so rarely write 😂</p>
<h4><em>Alright Let's start with basics then~</em></h4>
<p>Here is a simple example of a footnote&lt;Footnote id={1} text=&quot;Reference to the resource that provides information about a topic&quot; /&gt; that you might use in your blog posts.
You can add cited text pointing to a source.&lt;Footnote id={2} text=&quot;Short summary about the source&quot; /&gt;
Also, you can add a link to the source you are referring to&lt;Footnote id={3} text=&quot;This reference is accompanied by a link pointing to the external source&quot; link=&quot;https://example.com&quot; /&gt; so readers can follow it.</p>
<h3>Formatting</h3>
<h4>Heading Level 4</h4>
<h3>Heading Level 3</h3>
<h2>Heading Level 2</h2>
<h1>Heading Level 1</h1>
<p>&lt;u&gt;This is how underlined text looks like&lt;/u&gt;</p>
<p><strong>This is how bold text looks like</strong></p>
<p><em>This is how italic text looks like</em></p>
<p><s>This is how strikethrough looks like</s></p>
<h4>Ordered List</h4>
<hr>
<ol>
<li>Item 1</li>
<li>Item 2</li>
<li>Item 3</li>
</ol>
<h4>Unordered List</h4>
<hr>
<ul>
<li>Item 1</li>
<li>Item 2</li>
<li>Item 3</li>
</ul>
<h3>BlockQuote</h3>
<blockquote>
<p>&quot;The best part of beauty is that which no picture can express&quot;– Francis Bacon</p>
</blockquote>
<p>&lt;Image
src=&quot;https://picsum.photos/900/600&quot;
alt=&quot;This is an auto-image example from picsum&quot;
/&gt;</p>
<h3>Table</h3>
<p>Tables are useful for presenting structured data in a clear, organized manner. Here's an example of a table I use to display various levels of experience, learning appetite, project sizes, and solutions:</p>
<p>&lt;Table
data={{
headers: [
&quot;Experience Level&quot;,
&quot;Learning Appetite&quot;,
&quot;Project/Team Size&quot;,
&quot;Recommended Solution&quot;,
],
rows: [
[&quot;Beginner&quot;, &quot;Low&quot;, &quot;Small&quot;, &quot;useState&quot;],
[&quot;Beginner&quot;, &quot;Medium&quot;, &quot;Small to Medium&quot;, &quot;useContext + useReducer&quot;],
[&quot;Intermediate&quot;, &quot;High&quot;, &quot;Large&quot;, &quot;Redux Toolkit&quot;],
[&quot;Advanced&quot;, &quot;High&quot;, &quot;Medium to Large&quot;, &quot;Jotai, Valtio&quot;],
[&quot;Advanced&quot;, &quot;High&quot;, &quot;Large&quot;, &quot;Recoil (or Relay for GraphQL)&quot;],
],
}}
/&gt;</p>
<h3>Pro-Con Cards</h3>
<p>&lt;ProConsComparison
pros={[
&quot;SSR boosts SEO and performance.&quot;,
&quot;SSG enables faster page loads.&quot;,
&quot;Built-in routing and API routes.&quot;,
&quot;Automatic code splitting.&quot;,
&quot;Strong TypeScript support.&quot;,
]}
cons={[
&quot;Complex setup for large apps.&quot;,
&quot;Frequent updates need maintenance.&quot;,
&quot;Steep learning curve for advanced features.&quot;,
&quot;SSR increases server load.&quot;,
&quot;Third-party tools dependency.&quot;,
]}
/&gt;</p>
<h3>Inline Code</h3>
<p><code>HashMap&lt;Integer, String&gt;</code></p>
<p><code>SELECT Count(*) FROM users</code></p>
<p><code>float x=9.99</code></p>
<p><code>const func=() =&gt; {}</code></p>
<h3>Code Block</h3>
<pre><code class="language-jsx">import React from &quot;react&quot;;
const Greeting = () =&gt; {
  React.useEffect(() =&gt; {
    console.log(&quot;Hello, world!&quot;);
  }, []);
  return &lt;div&gt;Check the console for a greeting!&lt;/div&gt;;
};
export default Greeting;
</code></pre>
<h3>Live Code</h3>
<p>Live code allows interactive code snippets and their outputs directly in my webpage. Here's an interactive MDX editor that demonstrates the power of mixing markdown with React components:</p>
<p>&lt;LiveCode
mode=&quot;preview&quot;
fileNames={[&quot;MDXDemo.js&quot;, &quot;MDXComponents.js&quot;]}
template=&quot;react&quot;
/&gt;</p>
<p>&lt;Callout emoji=&quot;💡&quot;&gt;
<strong>Callout:</strong> Alright this block is a callout block to highlight essential
information that readers shouldn't overlook. Below is how the tweet card looks
like.
&lt;/Callout&gt;</p>
<p>&lt;Tweet id=&quot;1505668279678824448&quot; /&gt;</p>
<p>&lt;FootnoteList /&gt;</p>
]]></content:encoded>
            
            
          </item>
    </channel>
  </rss>