<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Build with Talia]]></title><description><![CDATA[Hello World! I'm Talia, a Staff Developer Advocate at Postman and international keynote speaker.]]></description><link>https://buildwithtalia.com</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1705518421380/rKT-L2UwL.svg</url><title>Build with Talia</title><link>https://buildwithtalia.com</link></image><generator>RSS for Node</generator><lastBuildDate>Sun, 19 Apr 2026 07:42:53 GMT</lastBuildDate><atom:link href="https://buildwithtalia.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[8 Ways to Make Your APIs AI-Ready]]></title><description><![CDATA[If you don’t know where to start with AI, you’ve come to the right place.
In November 2022, OpenAI released ChatGPT. It was a shiny new toy that we were all so impressed with. Suddenly, we could ask a machine to write code, summarize documents, or an...]]></description><link>https://buildwithtalia.com/8-ways-to-make-your-apis-ai-ready</link><guid isPermaLink="true">https://buildwithtalia.com/8-ways-to-make-your-apis-ai-ready</guid><category><![CDATA[APIs]]></category><category><![CDATA[AI]]></category><category><![CDATA[Artificial Intelligence]]></category><category><![CDATA[Postman]]></category><dc:creator><![CDATA[Talia Kohan (Talia Nassi)]]></dc:creator><pubDate>Mon, 18 Aug 2025 16:32:50 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1755534593779/51ce5800-19c4-4da5-a25e-41f9161a9ce6.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>If you don’t know where to start with AI, you’ve come to the right place.</p>
<p>In November 2022, OpenAI released ChatGPT. It was a shiny new toy that we were all so impressed with. Suddenly, we could ask a machine to write code, summarize documents, or answer questions like a human. We were blown away by what it could do.</p>
<p>But here’s what most people don’t realize: AI agents don’t actually <em>do</em> anything on their own. They don’t fetch data, they don’t book meetings, they don’t trigger workflows.</p>
<h3 id="heading-they-call-apis">They call APIs.</h3>
<p>If I go into ChatGPT and ask it, <em>“What’s the weather like this weekend in NYC?”</em> the agent makes a call to a weather API, gets the data, and formats the response in natural language. It’s not “thinking” or “knowing.” It’s orchestrating 💡.</p>
<p>Every time you see an AI assistant schedule an event, generate a report, or spin up an environment, it’s not magic. It’s just well-structured API calls, executed by a reasoning engine instead of a human.</p>
<p>That means if you can build APIs, you can build AI.</p>
<p><img src="https://media2.giphy.com/media/v1.Y2lkPTc5MGI3NjExbWdwNmxoNGUzeTJlZDhheDE5dndvbDJqa29lb2piNnV0eXhpMTc3ZiZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/udmx3pgdiD7tm/giphy.gif" alt /></p>
<p>Here’s the catch: REST APIs power the web, but they weren’t designed with AI in mind. For APIs to be truly useful and usable by AI systems (especially agents and LLMs), they need eight critical characteristics. Let’s break them down.</p>
<hr />
<h2 id="heading-1-your-apis-need-to-have-machine-consumable-metadata">1. Your APIs need to have machine-consumable metadata</h2>
<p>Humans and AI process information fundamentally differently. When a human developer reads this:</p>
<p><img src="https://lh7-rt.googleusercontent.com/slidesz/AGV_vUck3IXCvBfaZjhM7O5PllGHqbl3T2zGybpjatm61g86RXiQkVkx3wX0ZvQYJBnujtXzPo5lpd3XkJvUerMlawbXbciYUEtBlwtqVE4THLoz5tmKb-21aeVybbQ0JCHQr2nkpjbZKw=s2048?key=HxzbfhhzJ7QklZDTFS0sTg" alt /></p>
<p>They fill in the gaps with contextual knowledge. They might think, “Oh, this must be a <code>GET</code> request that takes a UUID and probably returns a JSON object with preference key-value pairs.” But an AI agent has none of that context and has no idea what this means.</p>
<p>Humans can look at a poorly documented API, experiment a little, and eventually figure it out. AI agents can’t. They don’t “guess” in the way we do. They need clear, machine-consumable metadata that tells them exactly what’s possible.</p>
<p>That’s where specifications like OpenAPI come in. They describe endpoints, parameters, request/response types, and constraints in a way that machines can understand.</p>
<p>Think of metadata as the instruction manual, but instead of being written for developers, it’s written for machines. If your API doesn’t have it, AI agents are less likely to use it successfully.</p>
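<p>For example, a minimal OpenAPI description of a preferences endpoint might look like this. (This fragment is illustrative; the path and field names are assumptions, not from any real spec.)</p>

```yaml
# Hypothetical OpenAPI fragment for a user-preferences endpoint.
openapi: 3.0.3
info:
  title: Preferences API
  version: 1.0.0
paths:
  /users/{userId}/preferences:
    get:
      operationId: getUserPreferences
      summary: Retrieve a user's preference key-value pairs
      parameters:
        - name: userId
          in: path
          required: true
          schema:
            type: string
            format: uuid
      responses:
        "200":
          description: The user's preferences
          content:
            application/json:
              schema:
                type: object
                additionalProperties:
                  type: string
```

<p>With a description like this, an agent doesn’t have to guess the method, the parameter type, or the response shape; all three are stated explicitly.</p>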
<hr />
<h2 id="heading-2-your-apis-need-rich-error-semantics">2. Your APIs need rich error semantics</h2>
<p>The way we handle errors needs to change dramatically for AI-ready APIs. Consider this typical error response:</p>
<p><img src="https://lh7-rt.googleusercontent.com/slidesz/AGV_vUf1MYMx9x_48Un0jYxflktoQIFZ0wUhk2P_mmN1XrVn96YvJ7LE5PoHmPm6TchLfck8t5_x4C0GNnhxMhKICTlNM7ASBOSmCWOQy5a4BHB9BF7Lr7GbcfV7VH1mAPnRD1uFE7eWkg=s2048?key=HxzbfhhzJ7QklZDTFS0sTg" alt /></p>
<p>When a human developer sees this, they can draw on experience to decide what to do next. They might check whether the user ID exists, whether they have permission to access the resource, or whether they formatted the request correctly. They draw on implicit knowledge about common failure patterns.</p>
<p>But an AI system has none of this troubleshooting intuition. It can't "just know" that "something went wrong" might mean the user ID is in the wrong format or that the resource doesn't exist.</p>
<p>Here's what an AI-ready error response looks like:</p>
<p><img src="https://lh7-rt.googleusercontent.com/slidesz/AGV_vUeoTMC_bHCNVZH9JMtfVkbiqtZK5pPnMQ-nb_zcTEvT5Q9q5dBCCl9lMnjeToOAaxtt0ImnklTutcpF5BgwaqduzQUPbbeLzMelLfsp2fuMuBW69xBzWSeHLzvDEuIgAJV_KCt9SQ=s2048?key=HxzbfhhzJ7QklZDTFS0sTg" alt /></p>
<p>The AI-ready version not only explains exactly what went wrong, but provides explicit guidance on how to fix it. The “received” and “expected” fields are really valuable for AI systems because they essentially tell the AI, "Here's how you can solve this problem."</p>
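<p>As a rough sketch, an AI-ready error payload along these lines might look like the following. The exact field names here are illustrative, not a standard:</p>

```json
{
  "error": {
    "code": "INVALID_USER_ID",
    "message": "The userId path parameter is not a valid UUID.",
    "received": "12345",
    "expected": "A UUID string, e.g. '3f1d2c4e-8a9b-4c6d-9e0f-1a2b3c4d5e6f'",
    "suggestion": "Re-send the request with a valid UUID in the userId path parameter."
  }
}
```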
<p>Remember: The richer your error semantics, the smarter your AI integrations become.</p>
<hr />
<h2 id="heading-3-your-apis-need-introspection-capabilities">3. Your APIs need introspection capabilities</h2>
<p>AI agents should be able to query your API for complete schema definitions, available endpoints, parameters, and capabilities.</p>
<p>Humans can navigate incomplete schema definitions by making educated guesses, for example, assuming that a <code>GET /users/{id}</code> endpoint returns a user object, even if it’s not clearly documented. You rely on familiar patterns like RESTful conventions, common status codes, and naming conventions. If the docs are missing or ambiguous, a human can open Postman, send a few trial requests, look at the responses, ask teammates, or dive into the source code. Humans are flexible and can handle ambiguity, filling in gaps with logic and intuition.</p>
<p>An AI agent, however, can’t infer or guess. It must rely on structured data to understand your API. That includes a full OpenAPI schema with detailed operation IDs, parameter definitions, request/response formats, and error codes.</p>
<p>It needs clearly defined relationships between endpoints and examples of how data flows through the system. Without this, the AI is essentially guessing, which can lead to hallucinated behavior or broken workflows. For AI systems to successfully parse, plan, and reason over your API, you must expose complete introspection through specs, structured docs, and testable collections.</p>
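<p>To make introspection concrete, here is a minimal sketch of how an agent-side tool might enumerate what an API can do from its OpenAPI document. The spec fragment and helper function are illustrative, not any particular framework’s API:</p>

```python
def list_operations(spec):
    """Walk an OpenAPI spec dict and return (method, path, operationId) tuples.

    An agent can only plan calls that are described here; anything
    missing from the spec is effectively invisible to it.
    """
    http_methods = {"get", "post", "put", "patch", "delete", "head", "options"}
    operations = []
    for path, path_item in spec.get("paths", {}).items():
        for method, op in path_item.items():
            if method in http_methods:
                operations.append((method.upper(), path, op.get("operationId")))
    return operations

# Tiny illustrative spec fragment
spec = {
    "paths": {
        "/users/{id}": {
            "get": {"operationId": "getUser"},
            "put": {"operationId": "updateUser"},
        }
    }
}
print(list_operations(spec))
# [('GET', '/users/{id}', 'getUser'), ('PUT', '/users/{id}', 'updateUser')]
```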
<hr />
<h2 id="heading-4-your-apis-need-to-have-consistent-naming-patterns">4. Your APIs need to have consistent naming patterns</h2>
<p>AI systems are much better than humans at detecting and exploiting patterns. Look at these two endpoint styles:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755533011995/95443e82-80c5-443e-b772-91a336c2a757.png" alt class="image--center mx-auto" /></p>
<p>Which one follows consistent REST conventions? Take a closer look.</p>
<p><img src="https://media2.giphy.com/media/v1.Y2lkPTc5MGI3NjExZmZ3NW94cjJ5MnExM29vcWFraHI2aHMxbnluaXpuYnE3NTYwaHN0ZSZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/MZq4dIAU2b9h0WkSHm/giphy.gif" alt /></p>
<p>If you guessed the one on the right, you’re correct! The version on the right uses <code>PUT</code> to update a resource, following standard REST conventions and making it possible for AI to predict how your API works across endpoints it hasn't seen before.</p>
<p>Consistent naming conventions in APIs ensure clarity and predictability, which are essential for AI models to accurately understand and interact with endpoints. Additionally, choose a naming convention like <code>snake_case</code> or <code>camelCase</code> and use it consistently throughout your API. When names follow logical, structured patterns, AI systems can more easily infer relationships, purposes, and required parameters. This consistency reduces ambiguity, enabling more effective automation, reasoning, and integration in AI-driven workflows.</p>
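<p>As an illustration, a consistently named API might expose a resource like this (the paths are hypothetical). Once an agent has seen two or three of these, it can predict the rest:</p>

```plaintext
GET    /users           list users
POST   /users           create a user
GET    /users/{id}      fetch one user
PUT    /users/{id}      update a user
DELETE /users/{id}      delete a user
```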
<hr />
<h2 id="heading-5-your-apis-need-to-be-predictable">5. Your APIs need to be predictable</h2>
<p>AI agents expect APIs to return the same structure and format for the same inputs, every time.</p>
<p>When human developers get an error, they bring context to debugging. For example, if a <code>GET /users/123</code> call fails with a 404, a human might check if the ID exists, confirm they’re authenticated, or look at recent database changes. They bring prior knowledge, pattern recognition, and deductive reasoning to troubleshoot inconsistencies.</p>
<p>But AI agents don’t have that context. They can’t “investigate” or “assume”; they rely entirely on the structure and behavior they’ve seen before. If your API returns <code>user_id</code> in one response and <code>uuid</code> in another, or if an object sometimes comes nested and sometimes flattened, the AI won’t know which is correct. This unpredictability leads to hallucinated requests, broken flows, or faulty outputs.</p>
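<p>For instance, an agent that has only ever seen the first shape below has no reliable way to interpret the second, even though both describe the same user. The field names here are illustrative:</p>

```javascript
// Response shape the agent learned from earlier calls:
const firstResponse = { user_id: "123", profile: { name: "Ada" } };

// A surprise variant of the "same" data. Nothing tells the agent
// that "uuid" here means the same thing as "user_id" above:
const secondResponse = { uuid: "123", name: "Ada" };
```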
<p><strong>Inconsistency</strong> = <strong>unreliable agent behavior</strong></p>
<p>The more consistent your API is, the more reliable and accurate AI agents can be when integrating with it, which reduces the risk of unpredictable behavior.</p>
<hr />
<h2 id="heading-6-your-apis-need-to-be-well-documented">6. Your APIs need to be well-documented</h2>
<p>If a human developer doesn’t understand something, they can search the internet, browse various websites, or even ask a teammate for help.</p>
<p>However, an AI agent can’t use what it doesn’t understand. Your APIs need to be well-documented so that agents can learn everything about your system. If your API isn’t well-documented, an agent has no reliable way to discover what endpoints exist, what parameters to pass, what data to expect in return, or how to recover from an error.</p>
<hr />
<h2 id="heading-7-your-apis-need-to-be-reliable-and-fast">7. Your APIs need to be reliable and fast</h2>
<p>AI agents don’t operate in isolation. They function as orchestrators, making rapid, sequential, and sometimes parallel API calls to gather information, take action, and respond intelligently. In these real-time scenarios, your API’s reliability and speed directly determine the success of the agent’s task.</p>
<p>Human developers may wait for responses from an API, but AI agents will often time out and break down. For example, if a human developer makes a request and the API takes 5–10 seconds to respond, they might just wait it out, resend the request, or troubleshoot the latency manually. They have the patience, tools, and judgment to handle slowness.</p>
<p>AI agents don’t have that luxury. They operate in milliseconds, often chaining multiple API calls together in real time to complete a task. If just one request takes too long or fails to respond, the entire chain breaks. The agent may time out, retry ineffectively, or worse: attempt to "fill in the blanks" with a hallucinated response based on partial information. That creates serious reliability issues in production.</p>
<p>For APIs to be AI-ready, they must be not just available, but consistently <strong>fast and reliable</strong>. That means:</p>
<ul>
<li><p>Low, predictable latency</p>
</li>
<li><p>Clear timeout behavior</p>
</li>
<li><p>Graceful error messages when delays occur</p>
</li>
<li><p>Infrastructure that can handle parallel requests at scale</p>
</li>
</ul>
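<p>The same principle applies on the calling side: an agent framework typically wraps each request in an explicit timeout and a bounded retry budget instead of waiting indefinitely. Here is a minimal sketch of that pattern (illustrative, not any specific framework’s API):</p>

```python
import time

def call_with_retries(fn, retries=3, backoff_seconds=0.0):
    """Call fn(), retrying up to `retries` times on failure.

    `fn` is expected to raise on timeout or error; backoff_seconds
    spaces out attempts so a struggling API isn't hammered.
    """
    last_error = None
    for attempt in range(retries):
        try:
            return fn()
        except Exception as exc:
            last_error = exc
            if backoff_seconds:
                time.sleep(backoff_seconds * (attempt + 1))
    # Surface the failure instead of letting the agent "fill in the blanks".
    raise RuntimeError(f"API call failed after {retries} attempts") from last_error

# Example: a fake API that fails twice, then succeeds.
calls = {"n": 0}
def flaky_api():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("simulated slow response")
    return {"status": "ok"}

print(call_with_retries(flaky_api))  # {'status': 'ok'}
```

<p>Capping retries and surfacing the final error keeps the agent from acting on partial information when your API is struggling.</p>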
<h3 id="heading-ai-agents-are-only-as-good-as-the-apis-they-depend-on-if-your-api-cant-keep-up-your-ai-app-wont-either"><mark>AI agents are only as good as the APIs they depend on. If your API can't keep up, your AI app won’t either.</mark></h3>
<hr />
<h2 id="heading-8-your-apis-need-to-be-discoverable">8. Your APIs need to be discoverable</h2>
<p>Finally, even the best API won’t be used if nobody knows it exists.</p>
<p>Human developers can often find APIs through documentation portals, internal wikis, Slack threads, or by asking around.</p>
<p>AI agents can’t do that. If your API isn’t published with clear metadata, discoverable endpoints, and open access, it might as well not exist. Agents can’t ask teammates for links. They can’t dig through internal dashboards. They rely on structured, searchable, and standardized metadata to locate and integrate APIs on their own.</p>
<p>For your API to be truly AI-ready, it must be visible and accessible.</p>
<p>High visibility is the key to AI integration. Publishing your API on the <a target="_blank" href="https://learning.postman.com/docs/collaborating-in-postman/public-api-network/public-api-network-overview/">Postman API Network</a> ensures that agents and developers alike can find, understand, and start using your API instantly, no guesswork, no gatekeeping.</p>
<p>The Postman API Network is the largest public hub of APIs, with over 100,000 publicly available APIs that are purpose-built for visibility, discoverability, and machine-readiness:</p>
<ul>
<li><p>Global Discovery: Developers and AI systems alike can browse thousands of public APIs organized by category, publisher, and use case.</p>
</li>
<li><p>Verified Publishers: Trust and transparency are built in. APIs from brands like Stripe, Notion, Twilio, and PayPal are marked as verified. To get verified on PAN, follow <a target="_blank" href="https://learning.postman.com/docs/collaborating-in-postman/public-api-network/verify-your-team/">these instructions</a>.</p>
</li>
<li><p>Collections: Postman Collections let you define, group, and share API requests in a consistent, structured format. These serve as examples that teach AI agents how to use your API. Collections act like training data: the clearer and more complete they are, the better the agent performs.</p>
</li>
<li><p>Searchable Metadata: AI agents and devs can query APIs by tags, protocols (REST, GraphQL, etc.), and capabilities, increasing automation and integration potential.</p>
</li>
</ul>
<h2 id="heading-to-sum-up"><strong>To Sum Up</strong></h2>
<p>If you’re building or maintaining APIs, start making them AI-ready now. Try adding machine-consumable metadata, improving your error semantics, or tightening up your naming conventions. For more tips on making your APIs AI-ready, and for the full developer toolkit, head to <a target="_blank" href="https://www.postman.com/">Postman</a>.</p>
]]></content:encoded></item><item><title><![CDATA[Understanding Permissions for AWS Lambda functions]]></title><description><![CDATA[Picture this: you’ve just finished writing your Lambda function, and everything seems ready to go. You hit deploy, trigger the function... and nothing happens. Then it hits you—permissions. That nagging afterthought you were sure you’d figure out lat...]]></description><link>https://buildwithtalia.com/understanding-permissions-for-aws-lambda-functions</link><guid isPermaLink="true">https://buildwithtalia.com/understanding-permissions-for-aws-lambda-functions</guid><category><![CDATA[lambda]]></category><category><![CDATA[serverless]]></category><category><![CDATA[AWS]]></category><category><![CDATA[Security]]></category><dc:creator><![CDATA[Talia Kohan (Talia Nassi)]]></dc:creator><pubDate>Fri, 31 Jan 2025 18:24:27 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1738347563587/bbcf5fa2-8985-4010-b297-a1ec40477b78.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Picture this: you’ve just finished writing your Lambda function, and everything seems ready to go. You hit deploy, trigger the function... and nothing happens. Then it hits you—permissions. That nagging afterthought you were sure you’d figure out later is now the only thing standing between you and a working application. Did you configure that policy correctly? Does the event source have permission to invoke the function? What about the execution role—did you forget something? Without proper preparation, permission problems can turn a simple deployment into a debugging nightmare.</p>
<p>In this blog post, I'll guide you through setting up permissions for your AWS Lambda functions. We'll cover IAM policies in detail, and I'll share a handy shortcut using AWS SAM to streamline the process.</p>
<p>Before we dive in, let’s quickly review how Lambda functions work in an <a target="_blank" href="https://aws.amazon.com/blogs/compute/getting-started-with-event-driven-architecture/">event-driven application</a>. Lambda functions are invoked when a corresponding event source triggers them. However, before the event source can trigger the function, it needs the correct <a target="_blank" href="https://aws.amazon.com/iam/">AWS Identity and Access Management (IAM)</a> policy configured. This policy governs what happens <em>before</em> the function is invoked, ensuring that the event source is allowed to invoke it. Then, you'll need to ensure that the Lambda execution role is set up properly so the function can interact with other services <em>after</em> the function is invoked.</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXcUvVMD0o2l81q1w7LCUhaE6rWiNPXZ7Qunzf5gphLHKgb1Y6bu6Eg8O3naN2ieF5h75KTN7sAXKcQ33f6cH4flcYLwv2mVsVz50kFDn_MlJ5aVfsU-vF9hLbeJ8R0IK4QpAnjKIsA47qkRi6R8huM?key=XVtl1mYrL4iiG4Gfl3cZGg" alt /></p>
<h1 id="heading-identity-based-policies">Identity-based Policies</h1>
<p>There are two different types of IAM policies you can add to your Lambda function. The first is an <a target="_blank" href="https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#policies_id-based">identity-based policy</a>. Identity-based policies are used to grant users in your account access to Lambda. They can apply to users directly, or to groups and roles that are associated with a user. Think of identity-based policies as VIP passes you hand out to people who need to work with your Lambda function.</p>
<p>For example, you can attach the policy to the IAM user named John, stating that he is allowed to perform the Amazon EC2 <code>RunInstances</code> action. The policy could further state that John is allowed to get items from an Amazon DynamoDB table named <code>MyCompany</code>. You can also allow John to manage his own IAM security credentials.</p>
<p>The beauty of these policies is how flexible they are - you can make them as broad or specific as you need, depending on what your team members need to get their jobs done.</p>
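<p>A sketch of what such an identity-based policy for John might look like in JSON (the table ARN is a placeholder, and the EC2 permission is deliberately broad for illustration):</p>

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ec2:RunInstances",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "dynamodb:GetItem",
            "Resource": "arn:aws:dynamodb:*:*:table/MyCompany"
        }
    ]
}
```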
<h1 id="heading-resource-based-policies">Resource-based Policies</h1>
<p>The second type of IAM policy you can add to your Lambda function is a <a target="_blank" href="https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#policies_resource-based">resource-based policy</a>.</p>
<p>Instead of attaching these to users, you stick them directly onto AWS resources themselves. You specify who has access to the resource, and what actions they can perform on it.</p>
<p>For example, you can attach resource-based policies to Amazon S3 buckets, Amazon SQS queues, and VPC endpoints.</p>
<p>One feature that makes this process much easier is that whenever you create a new Lambda function, AWS automatically sets up an IAM role for it. To view or edit the role’s policy, navigate to the Lambda function you created, and choose the Configuration tab. Then choose Permissions from the left panel. You can see the policy for this Lambda function below. This role has permission to create an Amazon CloudWatch Log Group for the Lambda invocation, as well as create log streams and put events to those streams.</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXf0C2U-J02zAZAd8AKg0nvVRxY_NN96949upEkZ5gt99aNk0PsFFLLKmuRu7imFMlVkyPbfPSwz1RcQ0f8-zjGpmSfYKhbcisFtzDA94iZWHv0zGY9qgzLdnW1TqQvSuXGDbOdvU2E4OWPrGetaA_Jp9cEn8G-wOVZeruDsPpUsz_AmrTZGDDI?key=XVtl1mYrL4iiG4Gfl3cZGg" alt /></p>
<p>To give your Lambda function access to other services, choose the Role name. Choosing this link will navigate you to the IAM console.</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXc3wRwmGtVFAq_uKDn5XHeMIRz9GhnKJ6qePQSrb_MpvSZWZchybt80UY_l5eCqAbzVtob61FEkgXMosF2n7ldxo0oO3yGW7dqvnOtTrjp2LOwCx4grBV5NYDjV3FH0dxBzLdXIVFMtJGhYZPFCaZ33th6CYAbnbip-BAt2vXAgIir6HwOy8ac?key=XVtl1mYrL4iiG4Gfl3cZGg" alt /></p>
<p>Choose the policy name and then choose Edit Policy.</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXdZ6gMqpTrTkWcEESl9g20NF0hxqocBRLg4N0YbyYr2RpOccBWTUKXIvAxO51nWbbCNzW8bW3TmrcMxZfrQTHpj_SB4iYH3_epHrUG6HvN406p9bqe6Ac2kdQzT-5qFNIRl54Y6psTpWe-c1LowYZbVJw8-kOMBYP0kVpv6XjNiQkduI9qW?key=XVtl1mYrL4iiG4Gfl3cZGg" alt /></p>
<p>Here, you can add permissions that allow your Lambda function to access any AWS service. (For the full list of IAM actions, see the <a target="_blank" href="https://docs.aws.amazon.com/IAM/latest/APIReference/API_Operations.html">AWS Documentation</a>).</p>
<p>You can choose to configure permissions in the visual editor or write the configuration in JSON. The example policy below allows Lambda to write logs to Amazon CloudWatch, get an object from the S3 bucket named mybucket-image-processing, and put an object into an S3 bucket named mybucket-image-processing-resized.</p>
<pre><code class="lang-javascript">{
    <span class="hljs-string">"Version"</span>: <span class="hljs-string">"2012-10-17"</span>,
    <span class="hljs-string">"Statement"</span>: [
        {
            <span class="hljs-string">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
            <span class="hljs-string">"Action"</span>: [
                <span class="hljs-string">"logs:PutLogEvents"</span>,
                <span class="hljs-string">"logs:CreateLogGroup"</span>,
                <span class="hljs-string">"logs:CreateLogStream"</span>
            ],
            <span class="hljs-string">"Resource"</span>: <span class="hljs-string">"arn:aws:logs:*:*:*"</span>
        },
        {
            <span class="hljs-string">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
            <span class="hljs-string">"Action"</span>: [
                <span class="hljs-string">"s3:GetObject"</span>
            ],
            <span class="hljs-string">"Resource"</span>: <span class="hljs-string">"arn:aws:s3:::mybucket-image-processing/*"</span>
        },
        {
            <span class="hljs-string">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
            <span class="hljs-string">"Action"</span>: [
                <span class="hljs-string">"s3:PutObject"</span>
            ],
            <span class="hljs-string">"Resource"</span>: <span class="hljs-string">"arn:aws:s3:::mybucket-image-processing-resized/*"</span>
        }
    ]
}
</code></pre>
<h1 id="heading-aws-lambda-execution-role">AWS Lambda execution role</h1>
<p>Now that we’ve established security for our Lambda function before it’s invoked, let’s go through Lambda security <em>after</em> it’s invoked. When you create a Lambda function, you assign it an execution role, and whenever the function runs, AWS temporarily assumes that role to determine what actions it can perform.</p>
<p>For example, let’s say every time your Lambda function is triggered, it sends a notification via SNS. In this case, the Lambda execution role needs to have the correct permissions to publish to SNS. Without this permission, your function won’t be able to perform the intended action, even if the rest of the setup is correct. You can view and edit what resources your Lambda function has access to by looking at the <em>Resource summary</em> section in the Lambda console below.</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXfUNUGz5ioASkSziGHtjiCPm7BQj-3LZJmR2qjq-02z8_8aFWr3FIq2oiSOV4rkryqCyH2FCMXaGmDwjjoq4kDX14JAKR4_OuM_r6t8eJaqgIIsd3iUmDwHFDyttwqtDp38uAM3if9nVfcPMD_8ITq5axwZ5kwEnKb9MB8Uks6qRuUz6cL18QY?key=XVtl1mYrL4iiG4Gfl3cZGg" alt /></p>
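<p>For the SNS example above, the execution role would need a statement roughly like the following attached. The topic ARN is a placeholder:</p>

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sns:Publish",
            "Resource": "arn:aws:sns:us-east-1:123456789012:my-notification-topic"
        }
    ]
}
```

<p>Scoping the <code>Resource</code> to one topic, rather than <code>*</code>, is exactly the least-privilege practice discussed below.</p>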
<p>By carefully configuring the execution role and adhering to the <a target="_blank" href="https://docs.aws.amazon.com/IAM/latest/UserGuide/getting-started-reduce-permissions.html">principle of least privilege</a>, you minimize the risk of unauthorized access or unintended actions. Regularly reviewing and auditing these permissions, leveraging monitoring tools like AWS CloudWatch, and implementing additional security measures such as VPC restrictions or encryption can further enhance your Lambda security posture.</p>
<h1 id="heading-benefits-of-configuring-permissions-with-aws-serverless-application-model">Benefits of configuring permissions with AWS Serverless Application Model</h1>
<p>In the previous sections, we explored how to manually configure IAM policies and Lambda execution roles. However, if you prefer to simplify the process, you can use AWS SAM as a shortcut. With the <a target="_blank" href="https://aws.amazon.com/serverless/sam/">AWS Serverless Application Model (AWS SAM)</a>, permissions for your event source to invoke your Lambda function are automatically created when you deploy the SAM template.</p>
<p>For instance, the following SAM template sets up an Amazon API Gateway API, a Lambda function, and a DynamoDB table, with API Gateway serving as the event source for the Lambda function. Upon deployment, SAM automatically grants API Gateway the necessary permissions to invoke the function. This not only saves time but also reduces the risk of configuration errors!</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXcpCCkFT0MBEZq4ioPI9PMm0iwKxX-c1Gu4YxIC1zQVYL2Iu_Q5LIp_LPD62v1CwzcGRdN6etyavxYqpNzjPv3oBiM_n4kMmmPJ3LmwrgyXkYN_lnQsLFsMZwdn3usKOkS4nM2TkSALzj_TVIB4cz0-nBSZ9NDf7srkyEQfac0_KsxPcvQU7J0?key=XVtl1mYrL4iiG4Gfl3cZGg" alt /></p>
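<p>A pared-down SAM template along those lines might look roughly like this. The resource names, runtime, and handler are illustrative:</p>

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.12
      Events:
        ApiEvent:
          Type: Api          # SAM creates the API Gateway API and its invoke permission
          Properties:
            Path: /items
            Method: get
      Policies:
        - DynamoDBCrudPolicy:  # SAM policy template scoped to the table below
            TableName: !Ref ItemsTable
  ItemsTable:
    Type: AWS::Serverless::SimpleTable
```

<p>Notice that there is no hand-written resource-based policy anywhere: deploying the template generates the permission for API Gateway to invoke the function.</p>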
<p>This is another reason why I love using <a target="_blank" href="https://buildwithtalia.com/embracing-infrastructure-as-code-the-key-to-devops-excellence">Infrastructure as Code</a> for cloud deployments!</p>
<h3 id="heading-conclusion">Conclusion</h3>
<p>When you add a Lambda function to your serverless application, there are two important security principles that are critical to understand: IAM policies <em>before</em> your Lambda function is invoked, and Lambda execution roles <em>after</em> your Lambda function is invoked. It’s also a great idea to use SAM when you deploy Lambda functions because the permissions for your event source to invoke your Lambda function are created <strong>automatically</strong> for you when you deploy the SAM template.</p>
<p>Thanks for reading! For all things cloud, follow me by clicking the follow button at the top of this page, subscribe to my newsletter below, and follow me on <a target="_blank" href="https://twitter.com/talia_nassi">Twitter</a>!</p>
]]></content:encoded></item><item><title><![CDATA[Embracing Infrastructure as Code: The Key to DevOps Excellence]]></title><description><![CDATA[Over the years, I have given many versions of my talk on building applications with infrastructure as code (IaC). Building with IaC is a practice that I firmly believe every DevOps practitioner should embrace, and it's one that has led me to be a fre...]]></description><link>https://buildwithtalia.com/embracing-infrastructure-as-code-the-key-to-devops-excellence</link><guid isPermaLink="true">https://buildwithtalia.com/embracing-infrastructure-as-code-the-key-to-devops-excellence</guid><category><![CDATA[Infrastructure as code]]></category><category><![CDATA[Terraform]]></category><category><![CDATA[multicloud]]></category><category><![CDATA[AWS]]></category><category><![CDATA[GCP]]></category><dc:creator><![CDATA[Talia Kohan (Talia Nassi)]]></dc:creator><pubDate>Tue, 30 Apr 2024 16:41:45 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1714495236974/14b105e8-6171-48c9-9e51-2c158b00e984.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Over the years, I have given many versions of my talk on building applications with infrastructure as code (IaC). Building with IaC is a practice that I firmly believe every DevOps practitioner should embrace, and it's one that has led me to be a frequent speaker on the topic at developer events and conferences.</p>
<p><img src="https://lh7-us.googleusercontent.com/mQr06RmooQGSJNFw8ptPcW-uwI4uA4WCA5cfVWZzZh7XVTx0YTkneI0FOh1A263rD4GhpxRMhWE7-tZoDXj5cNEi7Ol4c4baQNvnS7k2c021HTZvsjtn05-XwQBmn66HjUwk1PoV3uuFf0grGAVSgto" alt /></p>
<p>There are so many reasons why I’m on #teamIaC. Let’s talk about some of them.</p>
<h2 id="heading-you-cant-rely-solely-on-a-cloud-providers-ui">You can’t rely solely on a cloud provider’s UI</h2>
<p>Tutorials that build from a cloud provider’s UI are often clear and easy to follow. There's something reassuring about following along with step-by-step instructions and clicking through the UI to achieve your desired outcome. It's like having a helpful guide holding your hand every step of the way. This is a great way for developers to learn.</p>
<p>However, there's a catch, and it's a big one: UIs are notorious for their propensity to change—frequently and sometimes drastically. With cloud providers constantly rolling out new features, enhancements, and interface redesigns, what once was a familiar landscape can quickly morph into uncharted territory. And if you rely solely on screenshots to illustrate the steps in a tutorial, well, you're playing a game of catch-up that you're bound to lose.</p>
<p>Think about it: a tutorial that showcases screenshots of a particular UI layout may be spot-on at the time of publication. But fast forward a few months, or even weeks, and those screenshots could be woefully outdated. Buttons might shift positions, menus might undergo a makeover, and new features might render old workflows obsolete. Suddenly, following the tutorial feels like trying to navigate with an outdated map—you're bound to get lost.</p>
<p>So, what's the solution? Infrastructure as code, of course.</p>
<h2 id="heading-one-source-of-truth-for-deployment">One Source of Truth for Deployment</h2>
<p>Before I was a developer advocate, I was a test engineer. I was in charge of QA, automation, and end-to-end testing for some pretty big companies. One of the things I had to do was open defect reports for things that weren’t working in the product, and then send them to our development team to fix. However, SO MANY times they would kick back the ticket to me and say, “it works on my machine.” This would cause a screenshot war between QA and dev, and would create an uncomfortable work environment.</p>
<p><img src="https://lh7-us.googleusercontent.com/DWrhlBecPHRcHAqu2F2g4u9MI8k-nOzBugV9c_fsQSZ53gWRINRJGwjX1q_ra4joDAJYS9Cf6WXHo7UAwIagJ_M8ad9bK4QyiBB9ZkRWDp61W2jM_Im4BL9CMEfFMCUaBguUfSnmRcLtwMO_Yc2aQhA" alt /></p>
<p>If they had built their applications with infrastructure as code, they would have put all of their infrastructure (networks, compute instances, databases, storage, etc.) in a configuration file that can be deployed repeatedly. By using the same configuration files, they could have ensured consistency across environments and accounts. That’s why I’m so vocal about IaC. It gets rid of the “it works on my machine” phenomenon by ensuring that there is one source of truth: the configuration file.</p>
<p>One IaC tool that I've used and recommend is Terraform. When you build templates with <a target="_blank" href="https://registry.terraform.io/browse/modules">Terraform modules</a>, you can manage your infrastructure efficiently and effectively. With modules, you encapsulate your infrastructure configurations into reusable components. This means you can define common infrastructure patterns once and reuse them across multiple projects or environments. It saves time and effort by avoiding the need to rewrite similar configurations repeatedly.</p>
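<p>For instance, a reusable module might be consumed like this (the module path and input variables below are hypothetical, for illustration only):</p>
<pre><code># Hypothetical sketch: one module definition, reused per environment.
module "web_service" {
  source = "./modules/web-service"   # the pattern is defined once here

  environment   = "staging"
  instance_type = "t3.micro"
}
</code></pre>
<p>A second module block with <code>environment = "production"</code> would stand up the same infrastructure pattern again, without rewriting any of it.</p>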
<p><img src="https://lh7-us.googleusercontent.com/oeRPBs4NY1Dj-3nKli2GKsce12EpRpJkCPlW4b4968qh6u8zgbDAgwIpDKJbKBVfIhbHMWNJ8EwaJlMIdYfWyxLgYkk04YXseobLIOxJV9ykmgsF8isOGrKrUHntAiuq0FTHse_y13bP6i-S2IwBass" alt /></p>
<p>Let’s go through an example. In the above configuration file, I am asking Terraform to create two resources: an AWS IAM role and an AWS Lambda function. Configuration files are declarative, meaning that they describe the end state of your infrastructure. You do not need to write step-by-step instructions to create resources because Terraform handles the underlying logic. You can see that, in a simple, human-readable format, you describe the overall topology. When you deploy the configuration file, these resources will be created or updated, depending on whether they already exist. In this example configuration file, we are saying: hello Terraform, please create this IAM role with these configurations, and please create this Lambda function with these configurations. Thank you, have a nice day. That’s it! I don’t need to spell out how to create these resources. Your cloud provider (in this case AWS) will handle the logic to create or update the resources in your config file.</p>
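<p>If the screenshot above is hard to read, here is a minimal sketch of what such a configuration could look like. The resource names, runtime, handler, and package filename are illustrative assumptions, not the exact contents of my file:</p>
<pre><code># Illustrative sketch only; names and values are assumptions.
# An IAM role that the Lambda function will assume at runtime.
resource "aws_iam_role" "lambda_role" {
  name = "example-lambda-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "lambda.amazonaws.com" }
    }]
  })
}

# A Lambda function that references the role above.
resource "aws_lambda_function" "example" {
  function_name = "example-function"
  role          = aws_iam_role.lambda_role.arn
  runtime       = "python3.12"
  handler       = "index.handler"
  filename      = "function.zip"   # placeholder deployment package
}
</code></pre>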
<p>Unlike imperative languages that dictate the step-by-step execution of tasks, declarative languages allow users to specify the desired state of their infrastructure without getting bogged down in implementation details. This declarative approach not only simplifies the process of defining infrastructure configurations but also promotes consistency and repeatability across environments.</p>
<h2 id="heading-empowering-developers-to-self-service-their-infrastructure">Empowering Developers to Self-Service their Infrastructure</h2>
<p>Another reason IaC is important is that it empowers developers to take control of their infrastructure and self-service their own cloud resources. When you use IaC, you have one place with all the steps and configuration for your application that you can reuse whenever you need. This allows you to treat your infrastructure as code. And what do you do with code? You version it, you put it in some kind of repo – GitHub, CodeCommit, or something like that. So you can deploy a version, then work on it, then merge a new version, and you can roll back if you need to.</p>
<p>When you move to infrastructure as code, you can do the same thing with your infrastructure as you do with your code. If something breaks and your infrastructure isn’t working, you have the ability to roll back that change, or try a new version, or move it to another environment. With IaC, you can treat infrastructure as you do application code—versioned, tested, and deployed using familiar development tools and workflows.</p>
<p>It’s important to me that developers understand how IaC can enable them in this way to build faster, scalable solutions because this process facilitates collaboration, scalability, and reproducibility of infrastructure changes over time. You can track all of the changes to your infrastructure, roll back to previous versions if needed, and review the history of infrastructure modifications.</p>
<h2 id="heading-leveraging-iac-to-collaborate-across-teams">Leveraging IaC to Collaborate Across Teams</h2>
<p>One of the things I love most about IaC is its ability to foster collaboration between development and operations teams. Gone are the days of siloed workflows and finger-pointing—thanks to IaC, we can work together more effectively, sharing ownership of infrastructure and breaking down traditional barriers. IaC is all about shared ownership of infrastructure. By treating infrastructure as code—something that developers and operations folks can both understand and manipulate—we're leveling the playing field and empowering everyone to contribute to the infrastructure's evolution.</p>
<p>So, when I talk about the benefits of IaC, it's not just about the technical advantages—it's also about the impact it has on the way teams work together. By fostering collaboration, breaking down barriers, and promoting a culture of shared ownership, IaC is not just revolutionizing infrastructure management—it's revolutionizing the way we work. And that's something worth celebrating!</p>
<h2 id="heading-iac-is-cloud-agnostic">IaC is cloud-agnostic</h2>
<p>You can use IaC with any cloud provider. However, there are some things to consider. Many cloud providers have their own IaC tool, but those tools can only manage resources from that specific provider. For example, if your entire stack is on AWS, you should use one of the AWS tools (CloudFormation for non-serverless resources or SAM for serverless resources). If your entire stack is on GCP, then you should use GCP Deployment Manager. If you want to build something using more than one cloud provider, then you should use Terraform, because it supports multicloud deployments.</p>
<p>Terraform configuration files are written in HCL (HashiCorp Configuration Language). But don’t worry - you don’t need to learn a new language to implement infrastructure as code. Terraform has a registry that you can use to copy and paste resources, and many of them are from the cloud providers themselves. In the <a target="_blank" href="https://registry.terraform.io/">Terraform Registry</a>, you have access to thousands of resources you can add to your applications. You just search for the resources, and copy and paste the code into your configuration file.</p>
<p>TL;DR - I don’t care which IaC tool you use, I just care that you’re utilizing a process that is easily repeatable and can be automated in a way that works for you.</p>
<h2 id="heading-multicloud-deployments">Multicloud Deployments</h2>
<p>Companies often leverage multiple cloud providers to meet their diverse requirements, and managing infrastructure can quickly become a complex task. However, with Terraform and HCL, you can navigate multi-cloud environments with ease. By abstracting away the nuances of each cloud provider's API behind a unified interface, Terraform enables you to define infrastructure configurations once and deploy them seamlessly across different clouds. Whether provisioning virtual machines in AWS, spinning up Kubernetes clusters in Google Cloud Platform, or configuring databases in Microsoft Azure, DevOps engineers can leverage the power of infrastructure as code to orchestrate their entire infrastructure stack.</p>
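<p>As a rough sketch, a single Terraform configuration can declare providers for more than one cloud side by side. The regions, project ID, and resource arguments here are placeholders:</p>
<pre><code>provider "aws" {
  region = "us-west-2"
}

provider "google" {
  project = "my-example-project"   # hypothetical GCP project ID
  region  = "us-central1"
}

# A virtual machine in AWS...
resource "aws_instance" "app" {
  ami           = "ami-0123456789abcdef0"   # placeholder AMI ID
  instance_type = "t3.micro"
}

# ...and a storage bucket in Google Cloud, managed from the same config.
resource "google_storage_bucket" "assets" {
  name     = "my-example-assets-bucket"
  location = "US"
}
</code></pre>
<p>A single <code>terraform apply</code> then reconciles both clouds against the declared state.</p>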
<h2 id="heading-iac-integrations">IaC Integrations</h2>
<p>With infrastructure as code, you have a holistic view of your application. You can add your observability tools, your CI/CD tools, and basically anything with an API to Terraform. This will give you a holistic view of what’s going on in your application and you’ll have a deeper understanding of your ecosystem.</p>
<p>You can seamlessly incorporate observability tools such as Prometheus, Grafana, Datadog, or New Relic into your infrastructure configurations. For example, you can add a <a target="_blank" href="https://registry.terraform.io/providers/DataDog/datadog/latest/docs/resources/monitor">Datadog monitor</a> to your Terraform configuration file that will alert you when certain thresholds are met. By defining monitoring dashboards, alerting rules, and data collection mechanisms within Terraform, you can establish a robust observability framework that spans the entire stack. This integration ensures that every aspect of the application, from resource utilization to error rates, is meticulously monitored and analyzed in real-time, facilitating proactive troubleshooting and performance optimization.</p>
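<p>As a hedged example, a monitor along these lines could live right next to the infrastructure it watches. The query, threshold, and notification handle below are illustrative assumptions, not a recommendation:</p>
<pre><code>resource "datadog_monitor" "high_cpu" {
  name    = "High CPU on app hosts"
  type    = "metric alert"
  message = "CPU usage is above threshold. Notify @ops-team"   # hypothetical handle
  query   = "avg(last_5m):avg:system.cpu.user{env:prod} > 80"

  monitor_thresholds {
    critical = 80
  }
}
</code></pre>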
<p>You can also integrate CI/CD tools like Jenkins, GitLab CI, or CircleCI into Terraform configurations, which enables you to codify your entire deployment pipeline. By defining pipeline stages, triggers, and dependencies within Terraform modules, you can establish a unified CI/CD orchestration mechanism that streamlines the software delivery process. This approach ensures consistency and repeatability across environments while empowering teams to automate tedious tasks, such as environment provisioning, testing, and deployment, with ease.</p>
<p>By integrating observability tools, CI/CD pipelines, and other services into Terraform configurations, you’ll gain a holistic view of your application's behavior and performance. This comprehensive insight extends beyond individual components to encompass the entire application ecosystem, from infrastructure provisioning to code deployment to runtime monitoring. Armed with this holistic perspective, you can identify bottlenecks, optimize resource utilization, and mitigate risks more effectively, ultimately enhancing the reliability, scalability, and resilience of your applications.</p>
<h2 id="heading-so-why-should-i-keep-talking-about-infrastructure-as-code">So, why should I keep talking about infrastructure as code?</h2>
<p>My journey as a developer advocate is fueled by a deep-seated belief in the power of teaching best practices in the cloud, and developing and deploying applications with infrastructure as code is one of those practices. Infrastructure as code, to me, expedites and simplifies the way we build and deploy software. By continuing to speak about IaC at events, I hope to inspire you to embrace this approach and unlock new possibilities.</p>
<p>Thanks for reading! For all things cloud, follow me by clicking the follow button at the top of this page, subscribe to my newsletter below, and follow me on <a target="_blank" href="https://twitter.com/talia_nassi">Twitter</a>!</p>
]]></content:encoded></item><item><title><![CDATA[4 Myths about Building Multicloud Applications]]></title><description><![CDATA[Building multicloud applications is a good strategy for optimizing performance, enhancing resilience, and mitigating risks. However, despite its growing importance, there is so much confusion surrounding multicloud techniques and strategies. This gap...]]></description><link>https://buildwithtalia.com/4-myths-about-building-multicloud-applications</link><guid isPermaLink="true">https://buildwithtalia.com/4-myths-about-building-multicloud-applications</guid><category><![CDATA[multicloud]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[Jenkins]]></category><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Talia Kohan (Talia Nassi)]]></dc:creator><pubDate>Thu, 25 Apr 2024 15:06:13 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1714009948587/f5c2ed1a-b1fe-4801-80b6-bf02b52e1588.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Building multicloud applications is a good strategy for optimizing performance, enhancing resilience, and mitigating risks. However, despite its growing importance, there is so much confusion surrounding multicloud techniques and strategies. This gap in knowledge can lead to confusion for engineering teams ready to take the next steps. In this blog, let’s debunk four common myths about building <a target="_blank" href="https://www.akamai.com/glossary/what-is-multicloud">multicloud</a> applications, shedding light on the realities and benefits of this approach.</p>
<h2 id="heading-myth-multicloud-is-only-for-redundancy">Myth: Multicloud is Only for Redundancy</h2>
<p><img src="https://lh7-us.googleusercontent.com/rmSeHx1nt0I5bdGFZmirUgfgdj3JZ8webeHofTdHYnlqE8ppRdRO74DebOqVnh3utRCgnBzmFXOFp6LgXcXnPBX6ytUsSw_RNMpYEIvtda2_TbhXUIzsthnMex5W23ZHqL_r6OXBjfSCz3M_5vpJ9LE" alt /></p>
<p>Multicloud architecture offers more than redundancy; it also provides flexibility and optimization. While redundancy is undoubtedly a crucial aspect of multicloud (you can distribute your workloads across multiple clouds for higher availability and disaster recovery), it's not the sole reason for its adoption. Multicloud enables organizations to leverage the strengths of different cloud providers, optimizing performance, cost, and compliance. By strategically distributing workloads across multiple clouds, organizations can achieve greater agility, scalability, and geographic reach. </p>
<p>Let’s look at an example. Imagine a global e-commerce company that relies heavily on high-quality images to showcase its products and deliver a seamless shopping experience to its customers. This company, let's call it VivaShop, has a vast catalog of products ranging from clothing and accessories to electronics and home goods. To optimize both performance and scalability, VivaShop leverages Akamai for <a target="_blank" href="https://www.akamai.com/glossary/what-is-edge-computing">edge computing</a> while utilizing AWS for compute-intensive tasks like image processing. Here's how VivaShop strategically utilizes a multicloud solution using both Akamai and AWS: </p>
<p>VivaShop leverages <a target="_blank" href="https://www.akamai.com/products/serverless-computing-edgeworkers">Akamai's EdgeWorkers</a>, an edge computing platform that allows developers to execute lightweight JavaScript code at the edge of Akamai's <a target="_blank" href="https://www.akamai.com/glossary/what-is-a-cdn">content delivery network (CDN)</a>. By deploying EdgeWorkers at Akamai's edge locations worldwide, VivaShop can bring computing resources closer to its end-users, reducing latency and improving performance. VivaShop utilizes EdgeWorkers to dynamically resize and optimize images on the fly. When a user requests an image from VivaShop's website or mobile app, EdgeWorkers intercepts the request at the edge, retrieves the original image from the origin server, and applies optimizations such as resizing, compression, and format conversion based on the user's device and network conditions. By offloading image processing tasks to Akamai's edge servers, VivaShop significantly reduces the load on its origin servers and accelerates content delivery to end-users. This not only improves the user experience by delivering optimized images quickly but also reduces bandwidth costs and server load, leading to cost savings and scalability benefits. </p>
<p>For compute-intensive tasks like advanced image processing and analysis, VivaShop leverages the computational power and scalability of AWS. AWS offers a wide range of services and tools specifically designed for tasks such as image recognition, object detection, and content analysis. When VivaShop needs to perform complex image processing tasks, such as identifying product attributes, detecting objects within images, or generating product recommendations based on visual data, it utilizes AWS services like Amazon Rekognition and Amazon SageMaker. These AWS services enable VivaShop to process large volumes of images efficiently, extract valuable insights from visual data, and deliver personalized experiences to its customers. With AWS's elastic scaling capabilities, VivaShop can handle spikes in image processing demand during peak shopping seasons without worrying about provisioning and managing infrastructure.</p>
<p>By strategically leveraging Akamai for edge computing and AWS for compute-intensive tasks like image processing, VivaShop achieves the best of both worlds: enhanced performance, scalability, and cost efficiency. Akamai's EdgeWorkers bring computing resources closer to end-users, optimizing content delivery and reducing latency, while AWS provides the computational power and scalability needed for complex image processing tasks. Together, these two platforms enable VivaShop to deliver a seamless and personalized shopping experience to its global customer base.</p>
<h2 id="heading-myth-multicloud-increases-complexity">Myth: Multicloud Increases Complexity</h2>
<p><img src="https://lh7-us.googleusercontent.com/odd0101F7yz8b0bgKAE0Zt5PUYHgbI__SKihrhpF4OzMWDeOYbKFkFbmUUZpNa4pjyJfmIqVjiyEzv0IGprmYaF6DiCpFpwUdjZTzkyx1FVg9FLZp3-2p0wLypR4mRWYbLfqrJf623Crgc4SGw-tByo" alt /></p>
<p>While it's true that managing multiple cloud environments can pose challenges, modern tools and best practices can help mitigate complexity. Open-source technologies like <a target="_blank" href="https://kubernetes.io/">Kubernetes</a> and <a target="_blank" href="https://www.jenkins.io/">Jenkins</a> help reduce complexities. Kubernetes provides a unified orchestration layer, allowing organizations to manage workloads across diverse cloud environments seamlessly. Additionally, open source CI/CD automation tools like Jenkins, streamline deployment and operations, reducing the overhead associated with multicloud architectures. </p>
<p>For example, let’s say there is a company called CloudNova which is a rapidly growing software-as-a-service (SaaS) provider offering a range of cloud-based applications. To manage its expanding infrastructure efficiently and reduce complexity, CloudNova leverages Kubernetes for container orchestration and Jenkins for CI/CD automation. </p>
<p>CloudNova adopts Kubernetes as its <a target="_blank" href="https://www.akamai.com/glossary/what-is-a-container">container</a> orchestration platform to manage and scale its containerized workloads seamlessly. With Kubernetes, CloudNova can deploy microservices-based applications as Docker containers, ensuring consistency and portability across development, testing, and production environments. Kubernetes abstracts away the underlying infrastructure complexity, allowing developers to focus on application logic rather than infrastructure management. CloudNova also leverages Kubernetes features such as service discovery, load balancing, auto-scaling, and self-healing to ensure high availability and reliability of its applications. Kubernetes' declarative approach to configuration management simplifies deployment workflows and enables rapid iteration and experimentation.</p>
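<p>To make the declarative point concrete, here is a minimal Deployment sketch, expressed in HCL via Terraform's Kubernetes provider purely for illustration (a plain YAML manifest would express the same desired state; the names, replica count, and image are hypothetical):</p>
<pre><code>resource "kubernetes_deployment" "web" {
  metadata {
    name = "web"
  }

  spec {
    replicas = 3   # desired state: Kubernetes keeps three pods running

    selector {
      match_labels = { app = "web" }
    }

    template {
      metadata {
        labels = { app = "web" }
      }

      spec {
        container {
          name  = "web"
          image = "nginx:1.25"   # placeholder container image
        }
      }
    }
  }
}
</code></pre>
<p>You declare the end state (three replicas of this container) and Kubernetes continuously reconciles reality toward it, which is what makes rapid iteration safe.</p>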
<p>To streamline development and deployment processes, CloudNova uses Jenkins. Using Jenkins for CI/CD automation in a multicloud environment offers several benefits. Let’s go through them. Most importantly, Jenkins is highly customizable and supports a wide range of plugins, making it adaptable to different cloud environments and tools. In a multicloud setup where you might have different requirements or preferences for each cloud provider, Jenkins can be configured accordingly. Jenkins integrates with various cloud platforms, version control systems (like Git), and other tools commonly used in multicloud environments. This integration streamlines the CI/CD pipeline and allows for smooth interaction between different services and platforms. </p>
<p>Another benefit of Jenkins in this context: multicloud applications often require scalability to handle varying workloads and traffic across different cloud providers, and Jenkins can be configured to scale horizontally to accommodate increased demand, ensuring efficient CI/CD processes even in dynamic multicloud environments. Jenkins also provides detailed insights into the CI/CD pipeline, including build statuses, test results, and deployment progress. This visibility is crucial in a multicloud setup where resources are distributed across different platforms, enabling teams to monitor and manage the entire process effectively. With Jenkins, you can define consistent CI/CD workflows across multiple cloud environments, ensuring that development, testing, and deployment processes remain standardized regardless of the underlying infrastructure. This consistency improves collaboration and reduces the likelihood of discrepancies or compatibility issues between cloud providers.</p>
<p>By combining Kubernetes for container orchestration and Jenkins for CI/CD automation, CloudNova simplifies its operations, reduces manual effort, and enhances observability and scalability. Kubernetes abstracts away the complexities of managing containerized workloads, while Jenkins enables consistent CI/CD workflows across multiple cloud environments.</p>
<h2 id="heading-myth-multicloud-is-more-expensive">Myth: Multicloud is More Expensive</h2>
<p><img src="https://lh7-us.googleusercontent.com/s1-eNwSmv-EQhTGNENc9cx7dEUlR7KVVYITnZTWnhJpF9Ooxy4CK7lE_g1WU19S4mFxTkWACF0PFn4V4TMLApY4Kh57nlkQhHPlusJ7H_8fhcAt9D-_OXSz7bUjDnpKoJBTjPBwVsJ1P1rF7PcA9tXU" alt /></p>
<p>Multicloud strategies can lead to cost savings through workload optimization. For instance, a media streaming platform might utilize Akamai Cloud Computing for networking and Google Cloud's AI and ML services for content recommendation algorithms, optimizing costs while improving user experience. </p>
<p>One way that media streaming platforms like Hulu, Netflix, and Disney+ can ensure high availability, low latency, and reliable content delivery is by leveraging Akamai Cloud Computing’s robust networking infrastructure. Akamai Cloud Computing's global network of data centers enables streaming platforms to deploy edge servers in strategic locations worldwide, reducing the distance between users and content delivery points. Akamai Cloud Computing's advanced networking features, such as <a target="_blank" href="https://www.akamai.com/glossary/what-is-global-server-load-balancing">load balancing</a>, <a target="_blank" href="https://www.akamai.com/glossary/what-is-a-cdn">content caching</a>, and <a target="_blank" href="https://www.akamai.com/glossary/what-is-ddos">DDoS protection</a>, further enhance reliability and security. Streaming platforms can also take advantage of Akamai Cloud Computing's cost-effective pricing model and predictable billing structure to optimize networking costs while ensuring consistent performance and uptime. Akamai also offers transparent, pay-as-you-go pricing, so platforms can scale their networking infrastructure dynamically based on actual usage and demand patterns, avoiding over-provisioning and unnecessary expenses.</p>
<p>Food delivery applications can also use GCP’s AI and ML services for restaurant and food recommendation algorithms that suggest relevant and personalized food options to each user. By leveraging Google Cloud's AI and ML services, these applications can optimize costs by paying only for the resources consumed during model training and inference, without the overhead of maintaining on-premises hardware or infrastructure.</p>
<p>Through the strategic combination of Akamai Cloud Computing for networking and Google Cloud's AI and ML services for content recommendation algorithms, streaming platforms and food delivery applications can achieve cost optimization and enhance user experience simultaneously. By leveraging Akamai Cloud Computing's reliable and cost-effective networking infrastructure and Google Cloud's powerful AI and ML capabilities, streaming platforms can deliver high-quality streaming experiences and personalized content recommendations to their global audiences, without breaking the bank.</p>
<h2 id="heading-myth-security-is-harder-to-implement-in-multicloud-environments">Myth: Security is harder to implement in Multicloud Environments</h2>
<p><img src="https://lh7-us.googleusercontent.com/JTZASAkg7icZwQQ7sfFJGn6tCszPWXzUgz6-84Gc8KZpVBVYHZqou4t72xDphui0dCrqiZwJCC4NrBzfc2mrCJImFUJj4eyeoUUEfrnVfZbFUgKyi2kkqoh1F6-GmIdOJ4oMGoMN2pPpLkr3G3DUxxU" alt /></p>
<p>Let’s face it, implementing security for one cloud provider can be difficult enough, but now we’re adding another one. How do we enhance the security posture, and ensure no threats enter our application? The answer is that we can have a single control plane for adding security protections, allowing for a holistic observability approach for security events. This <a target="_blank" href="https://www.akamai.com/glossary/what-is-zero-trust">zero trust model</a> with consistent security policies will help ensure security of your multicloud application.</p>
<p>One of the ways to effectively manage your security in multicloud environments is through consistent policies. These security policies can be implemented with Akamai's global edge platform. Akamai's edge servers are distributed globally, enabling developers to deploy security controls closer to end-users (and closer to bad actors and threats). </p>
<p>Another security concern for multicloud applications is achieving high availability for your workloads. Akamai safeguards multicloud applications from both high demand and targeted attacks that seek to disrupt workload or application availability. Additionally, it implements security measures by concealing your origin cloud infrastructure, preventing direct access.</p>
<p>Akamai also offers a unified management platform and automation tooling that provides developers with granular visibility into security events, compliance status, and policy enforcement across all workloads. This centralized approach streamlines security management, simplifies compliance auditing, and enables consistent security enforcement, reducing operational overhead and complexity in a multicloud setup. There is no need to manage the individual security solutions (firewalls, <a target="_blank" href="https://www.akamai.com/glossary/what-is-ddos">DDoS protection</a>) for each cloud provider, which all work differently and require different configuration settings. You have one place with all of the security you need.</p>
<p>From global edge security to scalability, and centralized visibility, Akamai's security solutions offer comprehensive capabilities that are essential for securing multicloud architectures effectively. By incorporating Akamai into your multicloud strategy, you can strengthen your security posture, mitigate risks, and ensure the resilience and <a target="_blank" href="https://www.akamai.com/glossary/what-is-gdpr">compliance</a> of your multicloud applications.  </p>
<p><img src="https://lh7-us.googleusercontent.com/LNJcYQRQTLhVLpr9egI7cvQoXIcC2edhgJPG3tV1CA8WB7Fkr-DJ3lkzU2txjgganUlaOq7n02R-zeSaRZJtJyd8lqmDxc9aNpdYX2B1mDPBWNTziw1wee8GjJ4bKMEIuYTf3cG8r0De2UdUHAfTx2Y" alt /></p>
<p>The diagram above shows how Akamai security works in a multicloud setup. On the bottom, you see the cloud providers and their various components, along with a data center. In this hybrid cloud model, each component is made up of its own services and requires security. Instead of adding security to each component individually, you have one unified layer of edge security.</p>
<p>It’s a centrally managed, cloud-agnostic solution that makes it easier to manage security for multicloud applications. These security policies can be easily managed through a wide variety of DevSecOps-first solutions such as Infrastructure as Code tooling and SDKs.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>The journey to building multicloud applications is undoubtedly marked by challenges, but the rewards are unparalleled. Multicloud architecture empowers organizations to harness the best of multiple cloud platforms, optimizing performance, costs, and compliance. By strategically distributing workloads across diverse clouds and leveraging open-source technologies like Kubernetes and Jenkins, businesses can streamline operations and unlock new levels of observability and scalability.</p>
<p>Moreover, embracing a multicloud strategy isn't just about maximizing efficiency—it's about fortifying security and resilience. Integrating Akamai into your multicloud approach adds an extra layer of protection, ensuring that your applications remain robust, compliant, and protected against evolving threats.  </p>
<p>Thanks for reading! As you continue on your multicloud journey, please feel free to <a target="_blank" href="https://twitter.com/talia_nassi">reach out to me</a> with questions! For all things cloud, follow me by clicking the follow button at the top of this page, subscribe to my newsletter below, and follow me on <a target="_blank" href="https://twitter.com/talia_nassi">Twitter</a>!</p>
]]></content:encoded></item><item><title><![CDATA[Strengthening SaaS Security with Virtual Private Clouds (VPCs)]]></title><description><![CDATA[In this blog post, let’s explore how SaaS providers can leverage VPCs to create isolated network environments, safeguard customer data, and enhance compliance with data privacy regulations. 
Let’s say I developed a SaaS tool for healthcare providers....]]></description><link>https://buildwithtalia.com/strengthening-saas-security-with-virtual-private-clouds-vpcs</link><guid isPermaLink="true">https://buildwithtalia.com/strengthening-saas-security-with-virtual-private-clouds-vpcs</guid><category><![CDATA[vpc]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[Cloud Computing]]></category><dc:creator><![CDATA[Talia Kohan (Talia Nassi)]]></dc:creator><pubDate>Mon, 22 Apr 2024 18:54:27 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1713811932405/80b6567e-9c35-4ad8-b1e2-9427be8dd756.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this blog post, let’s explore how SaaS providers can leverage VPCs to create isolated network environments, safeguard customer data, and enhance compliance with data privacy regulations. </p>
<p>Let’s say I developed a <a target="_blank" href="https://www.akamai.com/glossary/what-is-software-as-a-service">SaaS</a> tool for <a target="_blank" href="https://www.akamai.com/solutions/industries/health-care-life-sciences">healthcare providers</a>. The tool is for managing administrative tasks, appointment scheduling, updating patient medical records, and patient billing - one place to streamline all of the patient management. I have thousands of customers globally using this tool. However, I need to make sure that patient data is secure, compliant with HIPAA laws, and only accessible to the applicable staff and doctors for that specific customer. For example, a chiropractor’s office in Los Angeles uses my SaaS tool, and a dermatology office in New York is also a customer of mine. These two healthcare providers should not be able to access each other’s data, nor should they even know about each other. Each healthcare provider trusts my SaaS tool to manage their day-to-day operations efficiently. However, with sensitive patient data at stake, it's imperative to safeguard this information and ensure that it remains accessible only to authorized personnel within each organization. To do this, let's dig into the capabilities of VPCs and illustrate how you can safeguard patient privacy, maintain regulatory compliance, and instill trust and confidence in your healthcare SaaS platform among both providers and patients.</p>
<h2 id="heading-why-using-a-vpc-is-better-than-a-vpn-or-vlan">Why using a VPC is better than a VPN or VLAN</h2>
<p>Let’s take a look at how we would secure our healthcare app. SaaS providers typically operate multi-tenant environments where multiple customers share the same underlying hardware. In the healthcare management example above, various healthcare providers, ranging from small clinics to large medical facilities, access the SaaS platform to streamline their administrative tasks, manage appointment schedules, and handle patient billing. While this shared model offers cost-efficiency and scalability benefits, it also introduces security challenges, particularly concerning data isolation and access control. </p>
<p>There's a <a target="_blank" href="https://buildwithtalia.com/private-ip-vs-vlan-vs-vpc">few options here for network security</a>. Let’s compare using a VPN, <a target="_blank" href="https://www.akamai.com/blog/security/comparing-the-benefits-of-microsegmentation-versus-vlans">VLAN</a>, and VPC with this architecture.</p>
<p>If we use a VPN for network security, we’d be able to provide secure remote access to healthcare resources. Whether it's accessing electronic health records (EHR), collaborating on patient care plans, or communicating with colleagues, healthcare professionals can rely on VPNs to ensure confidentiality and integrity in their interactions. However, VPNs extend the organization's network perimeter to external devices, including personal laptops, tablets, and smartphones. While this enables seamless connectivity for remote workers, it also introduces security risks associated with exposing internal resources to external threats. To mitigate these risks, organizations must implement robust security measures to protect VPN endpoints and safeguard sensitive data, including strong authentication, encryption, and access controls, most of which cost extra. And although VPNs can provide secure remote access to the healthcare app, most of the time the app users (doctors, nurses, staff) will be in the office and not remote, so VPNs may not be the best solution here.</p>
<p>If we use a VLAN, we can segment traffic based on departments or functional groups. For example, you might allocate VLAN 10 for administrative staff and VLAN 20 for nurses, doctors, and physician assistants. Administrative staff would have access to patient billing and appointments, but not to medication records or highly sensitive patient information. However, VLAN architecture proves challenging when it comes to scaling. Adding new VLANs or expanding existing ones may require additional switches, routers, and cabling, as well as careful planning to avoid network congestion and performance issues. As a result, scalability may be constrained. It’s also harder to meet compliance requirements when you use VLANs. Although you have this <a target="_blank" href="https://www.akamai.com/glossary/what-is-network-segmentation">network segmentation</a>, you may not be able to address all aspects of <a target="_blank" href="https://www.akamai.com/glossary/what-is-gdpr">GDPR compliance</a>, like data encryption, audit logging, or geographic restrictions. Consequently, you may struggle to demonstrate compliance to regulatory authorities or face potential penalties for non-compliance. In a healthcare environment, this just won’t work.</p>
<p>If we use a VPC, we’d place each customer’s VMs and resources inside their own VPC. VPCs enable us to define custom networking configurations, implement access controls, and establish private communication channels, safeguarding sensitive healthcare data from unauthorized access and external threats. Within each VPC, we’d add firewalls to control inbound and outbound traffic to our virtual machines and resources. <a target="_blank" href="https://www.linode.com/content/linode-cloud-firewall-explained-clear-and-intuitive-network-control-to-and-from-all-your-servers/">Firewalls</a> allow us to define and enforce granular security rules based on IP addresses, ports, and protocols, thereby mitigating the risk of unauthorized access and cyber threats. This is a great way to ensure that doctors’ offices don’t have access to each other’s patient records and to maintain compliance. By deploying these applications within separate VPCs, we ensure that each environment remains isolated from the others. You also get encryption at rest.</p>
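<p>To make the firewall idea concrete, here’s a minimal sketch of how per-VPC inbound rules might be evaluated, with a default-deny fallback. The rule names, ports, and address prefixes are illustrative assumptions, not any cloud provider’s actual rule syntax.</p>

```javascript
// Hypothetical inbound rule set for one tenant's VPC, evaluated first-match-wins.
const inboundRules = [
  // Allow HTTPS from anywhere (empty prefix matches all sources).
  { name: 'allow-https',       protocol: 'tcp', port: 443,  sourcePrefix: '',        action: 'ACCEPT' },
  // Allow database traffic only from the internal app subnet.
  { name: 'allow-db-internal', protocol: 'tcp', port: 5432, sourcePrefix: '10.0.1.', action: 'ACCEPT' },
];

function evaluate(packet) {
  for (const rule of inboundRules) {
    if (rule.protocol === packet.protocol &&
        rule.port === packet.port &&
        packet.source.startsWith(rule.sourcePrefix)) {
      return rule.action;
    }
  }
  return 'DROP'; // default-deny: anything unmatched is blocked
}

console.log(evaluate({ protocol: 'tcp', port: 443,  source: '203.0.113.7' })); // ACCEPT
console.log(evaluate({ protocol: 'tcp', port: 5432, source: '203.0.113.7' })); // DROP
```

<p>The default-deny fallback is the important part: a database port is reachable only from inside the tenant’s own address space, which is exactly the isolation guarantee we want between customers.</p>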
<p>VPCs also enable SaaS providers to create isolated network environments for each customer or group of customers. Within their dedicated VPC, each customer's data and resources are logically separated from those of other tenants, ensuring that sensitive information remains segregated and protected from unauthorized access or interference. </p>
<p>Here’s how I would secure my healthcare app with VPCs: I would create a dedicated VPC for each healthcare provider, ensuring that their data and resources are logically separated from other tenants. </p>
<p><img src="https://lh7-us.googleusercontent.com/wiYc_GUpjvsCeG6d7Y9kYO_mIYeuGHQlXr255INyg74IZCE0LbumFOjWFZLQeM-by4NTPAnQnpg2Ciuj50VxHYM0-szbsr6ydnrrhRIFlmAemjGp0dggdXZCjBErvU-UAl4VLF92h5Jyx7tI5LyLdfg" alt /></p>
<p>The chiropractor’s office in Los Angeles has their own VPC, and the dermatologist in New York has their own VPC. Within each VPC, I would <a target="_blank" href="https://buildwithtalia.com/crafting-a-resilient-vpc-landscape-using-terraform">configure custom network settings</a>, including subnets, tailored to the specific requirements of the healthcare provider. This network isolation ensures that patient data remains segregated and protected from unauthorized access, thereby minimizing the risk of data breaches or privacy violations. Access to patient records and sensitive healthcare information is restricted to authorized personnel within that specific healthcare provider's organization, based on predefined roles and permissions. That means that for my application, a doctor’s office in Los Angeles will not be able to see or access data from a doctor’s office in New York. They are two separate entities, two separate customers, each on their own VPC.</p>
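<p>As a rough sketch of that per-tenant layout, each customer could get a VPC with its own subnets carved out of a distinct address range. The labels and CIDR ranges below are assumptions for illustration, not a prescription:</p>

```javascript
// Illustrative per-tenant VPC layout: each tenant gets a non-overlapping 10.N.x.x space.
function tenantVpc(tenant, index) {
  return {
    label: `${tenant}-vpc`,
    subnets: [
      { label: 'app', ipv4: `10.${index}.1.0/24` }, // application servers
      { label: 'db',  ipv4: `10.${index}.2.0/24` }, // databases, no public route
    ],
  };
}

const laChiro = tenantVpc('la-chiropractor', 0);
const nyDerm  = tenantVpc('ny-dermatologist', 1);

console.log(laChiro.subnets[0].ipv4); // 10.0.1.0/24
console.log(nyDerm.subnets[0].ipv4);  // 10.1.1.0/24
```
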
<p>VPCs also enable SaaS companies to have granular access controls. You can define security groups, network access control lists (ACLs), and cloud firewall rules to regulate inbound and outbound traffic. By enforcing strict access policies within the VPC, SaaS providers can restrict access to customer data and services based on predefined rules, mitigating the risk of unauthorized access or data breaches. In my healthcare app example, I could make patient billing only accessible to office administrators, or vaccination records only accessible to parents or legal guardians. This adds another layer of security to my SaaS app.</p>
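<p>The role-based restrictions described above can be sketched as a simple resource-to-role policy check. The resource and role names here are hypothetical:</p>

```javascript
// Hypothetical policy mapping each resource to the roles allowed to access it.
const policy = {
  'patient-billing':     ['office-admin'],
  'vaccination-records': ['parent', 'legal-guardian'],
  'medical-history':     ['doctor', 'nurse'],
};

// Deny by default: unknown resources and unlisted roles get no access.
function canAccess(role, resource) {
  return (policy[resource] || []).includes(role);
}

console.log(canAccess('office-admin', 'patient-billing')); // true
console.log(canAccess('nurse', 'patient-billing'));        // false
```
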
<p><img src="https://lh7-us.googleusercontent.com/w9n933sTRZxSnh6z29VZsVBko-coIEP9qS1OOLRT_kEIv0qSB9H6vljtT1SJ_Y6w0ctqZ1oD6oP2N0UchIQeAhWLCxW8lKcFCWhXChXikDlvAgbLCTKQLrAuFdYwf3uCvqSLXOv6G2wjsGDES0c9MHg" alt /></p>
<h2 id="heading-encryption-and-compliance">Encryption and Compliance</h2>
<p>In addition to network isolation and access control, SaaS providers must implement encryption mechanisms within the VPC to protect data in transit and at rest. This additional layer of security ensures that sensitive information remains confidential and inaccessible to unauthorized parties, thereby bolstering data privacy and compliance with regulatory standards.</p>
<p>When data is transmitted between users and the SaaS platform, it traverses various network pathways, including public internet connections, which may be susceptible to interception or eavesdropping by malicious actors. By encrypting data streams within the VPC, SaaS providers can render intercepted data unreadable, thereby mitigating the risk of unauthorized access or disclosure. This encryption process involves encoding the data using cryptographic algorithms, making it indecipherable to anyone without the appropriate decryption key.</p>
<p>Similarly, when data is stored within the SaaS platform's storage infrastructure, it is vulnerable to unauthorized access or breaches if adequate security measures are not in place. By encrypting storage volumes at rest within the instances of the VPC, SaaS providers can ensure that data remains protected even if physical storage devices are compromised. Encrypted <a target="_blank" href="https://www.linode.com/docs/guides/server-side-encryption/">data stored within the VPC</a> is unintelligible without the corresponding decryption key, effectively safeguarding sensitive information from unauthorized disclosure or tampering.</p>
<p>For example, the customers using my healthcare SaaS app can rest easy knowing their patient information is encrypted. Within the VPC environment, sensitive patient data, such as current medications, billing information, and medical history, should be encrypted to ensure confidentiality and compliance with healthcare regulations. Each piece of sensitive information is encrypted before transmission or storage, thereby reducing the risk of data breaches and enhancing trust in the SaaS platform's security practices.</p>
<p>This proactive approach to security not only enhances data privacy and compliance but also strengthens the overall integrity and trustworthiness of the SaaS platform. </p>
<h2 id="heading-to-vpc-or-not-to-vpc-that-is-the-question">To VPC or not to VPC? That is the question.</h2>
<p>Now, let’s compare this architecture to the same healthcare app not using a VPC. Data security instantly becomes more challenging. How would we ensure that patient records from one doctor’s office are not accessible by staff at another doctor’s office? We would have to rely heavily on traditional security measures like firewalls and access control lists to protect data. This approach could work when you’re dealing with non-sensitive data. However, because we’re dealing with highly sensitive patient information, medical records, and billing information, this may prove challenging. Also, data transmission over the public internet introduces risks like interception and eavesdropping. Meeting stringent regulatory standards, such as HIPAA, becomes increasingly difficult in the absence of granular control over network traffic. Without a VPC, compliance efforts may be hindered by the lack of visibility and control inherent in traditional architectures.</p>
<p>In contrast, SaaS applications that use VPCs, especially ones dealing with sensitive information, benefit from a heightened level of security, control, and compliance capabilities. From enhanced isolation and granular control to advanced security measures and streamlined compliance efforts, the benefits of leveraging a VPC are indisputable.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>VPCs play a crucial role in enhancing security and compliance for SaaS providers and their customers. By leveraging VPCs to create isolated network environments, SaaS providers can safeguard customer data, enforce access controls, and demonstrate compliance with data privacy regulations. SaaS providers can also utilize VPCs for security to deliver secure and reliable services to customers worldwide.</p>
<h2 id="heading-more-resources">More Resources</h2>
<p>Connect with the Akamai team and fellow users in the <a target="_blank" href="https://discuss.akamai.com/c/beta-program/vpc-beta/57">Akamai VPC discussion group</a> dedicated to our VPC feature (click <a target="_blank" href="https://discuss.akamai.com/">here</a> to sign up if you’re not a member).</p>
<p>You can also check out our <a target="_blank" href="https://www.linode.com/docs/products/networking/vpc/">VPC documentation</a> for more information and help getting started.</p>
<p>Thanks for reading! For all things cloud, follow me by clicking the follow button at the top of this page, subscribe to my newsletter below, and follow me on <a target="_blank" href="https://twitter.com/talia_nassi">Twitter</a>!</p>
]]></content:encoded></item><item><title><![CDATA[When should I use a Private IP vs. VLAN vs. VPC?]]></title><description><![CDATA[When you build applications in the cloud, you could argue that the most important thing is security, especially when you’re using a third-party cloud provider and you don’t have access to every part of the backend. But, this can be a challenge becaus...]]></description><link>https://buildwithtalia.com/private-ip-vs-vlan-vs-vpc</link><guid isPermaLink="true">https://buildwithtalia.com/private-ip-vs-vlan-vs-vpc</guid><category><![CDATA[vpc]]></category><category><![CDATA[akamai]]></category><category><![CDATA[VLAN]]></category><category><![CDATA[Security]]></category><category><![CDATA[ip address]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[Linode]]></category><dc:creator><![CDATA[Talia Kohan (Talia Nassi)]]></dc:creator><pubDate>Fri, 08 Mar 2024 17:25:25 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1709918078271/d7f977d3-39aa-4641-8518-4d082a08fc12.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When you build applications in the cloud, you could argue that the most important thing is security, especially when you’re using a third-party cloud provider and you don’t have access to every part of the backend. But this can be a challenge because there are so many ways to secure your application. There's such a vast array of security tools and resources available to developers that it can be hard to digest and choose what you need. It’s like being in a candy store with an unlimited credit card! Seriously, it's overwhelming. You've got IAM stuff, encryption, networking things—where do you even start? And just when you think you've got a handle on things, bam, new tools pop up!</p>
<p>So, here's my take: focus on what matters most - the fundamentals: Private IP, VLANs (Virtual Local Area Networks), and VPCs (Virtual Private Clouds). These three foundational concepts form the backbone of modern network isolation, and understanding their nuances is crucial. Let’s go through these in a bit more detail based on my experiences.</p>
<h2 id="heading-private-ips">Private IPs</h2>
<p>The last time I used a private IP was when I was setting up my home network. I had to configure routers and assign private IP addresses to various devices. It felt empowering to establish a secure digital ecosystem where my devices could interact away from the prying eyes of the public internet. Private IP addresses are like the secret passageways of the internet. They're the unique identifiers assigned to devices within a private network, allowing them to communicate with each other securely. </p>
<p>However, as my networking endeavors expanded, I encountered the limitations of Private IP addresses. These are the standard RFC 1918 ranges (10.x.x.x, 172.16.x.x through 172.31.x.x, and 192.168.x.x) you'll use in home and enterprise-grade network gear. They provide no additional privacy aside from not being directly accessible from the public internet. While they were great for local communication, they couldn't provide the isolation and segmentation required for larger networks. </p>
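<p>Those private ranges are easy to check programmatically. Here’s a quick sketch of an RFC 1918 test for IPv4 addresses (no input validation, for illustration only):</p>

```javascript
// Returns true if the IPv4 address falls in one of the RFC 1918 private ranges.
function isPrivateIPv4(ip) {
  const [a, b] = ip.split('.').map(Number);
  if (a === 10) return true;                        // 10.0.0.0/8
  if (a === 172 && b >= 16 && b <= 31) return true; // 172.16.0.0/12
  if (a === 192 && b === 168) return true;          // 192.168.0.0/16
  return false;
}

console.log(isPrivateIPv4('192.168.1.20')); // true
console.log(isPrivateIPv4('172.20.0.5'));   // true
console.log(isPrivateIPv4('8.8.8.8'));      // false
```

<p>Note that only part of the 172.x.x.x space is private: 172.32.0.1, for example, is a public address.</p>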
<h2 id="heading-vlans">VLANs</h2>
<p>VLANs are a way to segment a physical network into multiple isolated virtual networks, each with its own set of devices and communication rules. I vividly remember a demo project I helped architect when I was in college. The assignment was to architect how the university’s network would be segmented among faculty, students, and administrators who are all clamoring for network bandwidth and resources. We were tasked with revamping the university's network infrastructure to improve performance, security, and manageability. VLANs were our secret weapon. We segmented the physical network into distinct virtual networks tailored to each group—students, faculty, and admin.</p>
<p>Setting up VLANs felt like drawing boundaries on a map, defining territories where each group could roam freely without stepping on each other's toes. Students had their own VLAN for gaming marathons and late-night study sessions. Faculty members enjoyed a secluded space for research collaborations and lecture streaming. And administrative staff? Well, they had their own VIP section for handling sensitive data and administrative tasks.</p>
<p>But VLANs did more than just divide the network into neat little parcels. They gave us granular control over access rights and traffic prioritization. Students couldn't waltz into faculty-exclusive areas, and admin data remained off-limits to everyone but authorized personnel. Plus, by segregating traffic, we reduced congestion and improved network performance for everyone.</p>
<p>And here's the best part: we achieved all of this without ripping up a single Ethernet cable. No need for costly infrastructure changes or disruptive downtime. VLANs worked their magic purely at the software level, redefining network boundaries on the fly. (VLANs operate at Layer 2 of the OSI model.)</p>
<p>Looking back, that project taught me the power of VLANs to revolutionize network architecture. They're not just about dividing networks; they're about empowering organizations to tailor their networks to fit their unique needs. Whether it's a university campus or a corporate headquarters, VLANs offer a versatile solution for optimizing performance, enhancing security, and streamlining network management.</p>
<h2 id="heading-vpcs">VPCs</h2>
<p>Anyone who knows me knows how much I hate testing/staging environments. They can be so problematic. No one cares if your features are working in staging; we care if they work in production. And the only way to know if a feature is working in production is to test it in production. I even went so far as to write a <a target="_blank" href="https://www.split.io/blog/staging-break-up-letter/">breakup letter to staging</a>. However, many companies still use multiple environments when building and deploying applications. If your company doesn’t want to test in production, or you don’t have enough automation in place to do so, then you should consider using a VPC to create isolated environments for development, testing, and production. </p>
<p>By creating separate VPCs for each environment, you establish clear boundaries between development, testing, and production. This isolation prevents interference and minimizes the risk of unintended consequences. What happens in the development environment stays in the development environment, reducing the likelihood of bugs or changes impacting critical production systems. VPC segmentation also allows for precise resource allocation and management across environments. Each environment can have its own dedicated resources, such as compute instances, storage, and networking resources. This ensures that development and testing activities don't compete with production workloads for resources, optimizing performance and stability.</p>
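<p>One way to picture this segmentation: derive a separate VPC, with its own address space and its own dedicated resources, for each environment. The labels, CIDR ranges, and instance sizes below are assumptions, not recommendations:</p>

```javascript
// Sketch: one isolated VPC per environment, each with dedicated resources.
const environments = ['development', 'testing', 'production'];

const vpcs = environments.map((env, i) => ({
  label: `${env}-vpc`,
  subnet: `10.${100 + i}.0.0/24`, // non-overlapping address space per environment
  // Production gets dedicated capacity; dev and test share cheaper instances.
  instanceType: env === 'production' ? 'dedicated-8gb' : 'shared-2gb',
}));

console.log(vpcs.map(v => v.label).join(', '));
// development-vpc, testing-vpc, production-vpc
```
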
<h2 id="heading-to-sum-up">To Sum Up</h2>
<p>Security tools and technologies have evolved and will continue to evolve. From the simplicity of private IP addresses to the complexity of <a target="_blank" href="https://www.linode.com/docs/products/networking/vpc/">cloud-based VPCs</a>, each step has broadened my understanding of network design and administration. So whether you're building a home network, managing a corporate infrastructure, or separating development environments, understanding the nuances of Private IP, VLANs, and VPCs is essential. These technologies form the building blocks of modern networking, empowering us to create secure, scalable, and efficient digital ecosystems. </p>
<p>Thanks for reading! For all things cloud, follow me by clicking the follow button at the top of this page, subscribe to my newsletter below, and follow me on <a target="_blank" href="https://twitter.com/talia_nassi">Twitter</a>!</p>
]]></content:encoded></item><item><title><![CDATA[Location, Location, Location: Maximizing Ad Impact with Akamai EdgeWorkers]]></title><description><![CDATA[I have family all over the world - San Francisco, Las Vegas, New York, Belgium, Portugal, and Israel - but I was born and raised in Los Angeles (West Coast: best coast!). Communicating with them is easy nowadays with tools like WhatsApp and Instagram...]]></description><link>https://buildwithtalia.com/maximizing-ad-impact-with-akamai-edgeworkers</link><guid isPermaLink="true">https://buildwithtalia.com/maximizing-ad-impact-with-akamai-edgeworkers</guid><category><![CDATA[edgeworkers]]></category><category><![CDATA[akamai]]></category><category><![CDATA[geolocation]]></category><category><![CDATA[geolocation api]]></category><category><![CDATA[Advertising]]></category><dc:creator><![CDATA[Talia Kohan (Talia Nassi)]]></dc:creator><pubDate>Mon, 04 Mar 2024 18:28:41 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1709574887883/2d96c9a3-69cf-45b2-9d50-8e23833a494c.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I have family all over the world - San Francisco, Las Vegas, New York, Belgium, Portugal, and Israel - but I was born and raised in Los Angeles (West Coast: best coast!). Communicating with them is easy nowadays with tools like WhatsApp and Instagram. But sometimes I think about what it would be like if we didn’t have edge servers. How long would my text messages take to send? Would my video calls to my cousins in Tel Aviv be blurry because of the latency? </p>
<p>Edge servers play a crucial role in facilitating communication from one country to another by optimizing content delivery, reducing latency, and enhancing reliability. Latency, or the delay in data transmission, can significantly impact the quality of real-time communication, such as voice calls and video calls. By deploying edge servers in proximity to users, communication latency can be minimized. When users in different countries communicate, like when I make a call from Los Angeles to Israel, their data can be routed through nearby edge servers, reducing the time it takes for data packets to travel between them. This ensures smoother, more responsive communication experiences across borders.</p>
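<p>You can get a feel for why proximity matters with a back-of-the-envelope calculation: light in fiber travels at roughly 200,000 km/s, so distance alone puts a floor under latency. The distance figure below is a rough great-circle estimate, not a measured route:</p>

```javascript
// Lower bound on one-way propagation delay in fiber (~200,000 km/s).
function oneWayDelayMs(km) {
  return (km / 200000) * 1000;
}

// Los Angeles to Tel Aviv is roughly 12,000 km as the crow flies.
console.log(oneWayDelayMs(12000)); // 60 ms one way, before any routing or processing
```

<p>Real paths are longer and add queuing and processing time, which is exactly the overhead that routing through a nearby edge server helps reduce.</p>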
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709573244828/b8d1b10b-2c7c-43ca-aeb8-34b63c25537e.jpeg" alt class="image--center mx-auto" /></p>
<h2 id="heading-tailoring-customer-experience-with-geolocation">Tailoring Customer Experience with Geolocation</h2>
<p>Let's consider an example where my family is planning a reunion and we are planning to meet in Hawaii. All of us, from our different locations around the world, start searching for flights. </p>
<p>As I browse through travel sites, advertisers can use geolocation data to determine my location (Los Angeles) and preferences (nonstop flight, Delta preferred). Based on this information, advertisers can deliver targeted advertisements to me, tailored specifically for travelers in Los Angeles interested in visiting Hawaii. These advertisements might include special flight deals from Los Angeles International Airport (LAX) to Honolulu, discounts from Delta, or local attractions and activities in Oahu. By personalizing the content to my location and specific interests, advertisers increase the relevance and effectiveness of their advertisements. Who would have thought ads could be a good thing?!</p>
<p>Meanwhile, my family in Portugal is also planning their trip to the family reunion in Hawaii. As they conduct their own online research and browse travel-related websites and apps, advertisers can use geolocation data to target them with tailored advertisements relevant to their location (Portugal) and their preferences (maximum one stop, preferred TAP Air Portugal, etc). These advertisements might include flight options from Lisbon Airport (LIS) to Honolulu, taxi services in Lisbon, vacation rental deals in Maui, or recommended activities and tours in Kauai. By personalizing the content to their location and interests, advertisers increase the likelihood that my family will engage with the advertisements.</p>
<p>In this example, geolocation technology enables advertisers to deliver highly targeted and relevant advertisements to me and my family based on our respective locations and preferences. By tailoring the content to each audience, advertisers are increasing the effectiveness of their advertising campaigns <em>and</em> providing valuable information and offers that resonate with each individual's interests and travel plans.</p>
<p>It's clear how tailored content can enhance the effectiveness of advertising campaigns. Now, the question arises: how can businesses implement similar strategies in their own campaigns? One approach is to leverage EdgeWorkers to develop microservice geolocation APIs. By integrating these APIs into their ad serving infrastructure, businesses can dynamically retrieve location information about users, allowing for the delivery of targeted and relevant advertisements based on their preferences and travel plans. </p>
<h2 id="heading-implementing-geolocation-using-edgeworkers">Implementing Geolocation using EdgeWorkers</h2>
<p>Now, how would we implement something like this from the example we just went through? Let’s use Akamai EdgeWorkers to implement a microservice geolocation API call that returns location information about the client.</p>
<p>Before you get started, you’ll need an <a target="_blank" href="https://www.akamai.com/products/serverless-computing-edgeworkers#free-trial">Akamai Account with access to Edgeworkers</a> or you can <a target="_blank" href="https://www.akamai.com/products/serverless-computing-edgeworkers#free-trial">sign up for our free EdgeWorkers trial</a> to follow the tutorial below. </p>
<p><strong>Step 1: Create an EdgeWorkers ID</strong></p>
<p>Log in to your Akamai account and <a target="_blank" href="https://control.akamai.com/apps/edgeworkers/">navigate to EdgeWorkers from the left panel</a>.</p>
<p><img src="https://lh7-us.googleusercontent.com/usxUOQdEO9OQiuw5QREpmfRmm-RdJvS5yykk24KqiOIt_-kI63GOpMO4pz2CoLoFYEBGJP9y97tMif8zlv7MyHHR3njLyrHhdR-3GOBg3jTBAVfPIoF9_YKoDlwaXXz8i9O6cIuwacE_RCedHAGGboo" alt /></p>
<p>Then, click on Create EdgeWorkers ID. Enter a name for your EdgeWorker, in this case, <code>familyreunion</code>. After that, select a group and a resource tier. EdgeWorkers resource tiers currently include dynamic compute and basic compute. Each tier has different limits for CPU time, wall time, and memory consumption which you can see listed at the bottom of the form.</p>
<p><img src="https://lh7-us.googleusercontent.com/Zdk0ID1EEuFq30uXXWKML1KrRZMYTqY5HHtU7u4SzmQfsNOgeizUucM_dYprh2uJt0tbs1xo74OuFBrZ2NgiwKw2M7INjTVfQ3-ItisoNQqbm_6WFUb1Mddzc7-PiUnFHRrZ8OmFGR2dKC-RA-G55Bg" alt /></p>
<p>After you click Create EdgeWorker ID, you’ll see this success message.</p>
<p><img src="https://lh7-us.googleusercontent.com/gLuFJfzhhVxo4intKZ3YQRdS4e-UQlUvDL09ahnlkSv7BNsU6sIGXNgxM5-GwIxAfRYi-lTH3GbaWAfUGglxJQm2iFxGTZyqnKv2M4TIc4N7yb2oafvWaTN6g_jq3hMqITqL9VZL6CB3-GeLDiyy5PA" alt /></p>
<p><strong>Step 2: Modify Property Behavior</strong> </p>
<p>Now, we need to update the property to execute the EdgeWorker ID (<code>familyreunion</code>) we created in step 1. We do this through the Property Manager which is located on the left panel. Once you’ve done that, simply click on ‘Properties,’ then click on <code>familyreunion</code>. </p>
<p><img src="https://lh7-us.googleusercontent.com/dTEwoGp79F6LeOxO2CKF5QiZloShik7_hvBbQxaHEVfWQiCvBIte3dAzXjHKg5Vm24shvSdMKFYzLkWHJBzmj7gG9wrW3Y9tJGYMyPlsRf5ANL28y2h2ZCeybFtv1DKKJZs1aq8-CSfvIrqtzLNM1Y8" alt /></p>
<p><strong>Step 3: Create Code Bundle</strong></p>
<p>To make this work, we need to create a new version of the property. Click on ‘Create version.’</p>
<p><img src="https://lh7-us.googleusercontent.com/sX9hNahyI9pVDpVyKa_0xVqrWDEB13W8XJd5FFnotxP20MI-GD5CzIn5_thhM_uyePXL_m4TjuIDruaCI6QGizhR5BHMtVr5YkAIBvmyWrjLUNnQ3W-OIoZ5fYegzeLBJARyJt_ueWzcwlSIn1LFg9U" alt /></p>
<p>Now, let’s open the code editor to add in our code. Click on ‘Open editor.’</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709574320310/3a5bfd38-a867-47ef-a73d-6de668ab3c75.png" alt class="image--center mx-auto" /></p>
<p>Next, you’ll see the two files we need to edit, <code>main.js</code> and <code>bundle.json</code>. </p>
<p>Let's start with the first file, <code>main.js</code>:</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">export</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">onClientRequest</span> (<span class="hljs-params">request</span>) </span>{
  <span class="hljs-keyword">var</span> info = {};

  info.continent = (request.userLocation.continent) ? request.userLocation.continent : <span class="hljs-string">'N/A'</span>;
  info.country = (request.userLocation.country) ? request.userLocation.country : <span class="hljs-string">'N/A'</span>;
  info.zip = (request.userLocation.zipCode) ? request.userLocation.zipCode : <span class="hljs-string">'N/A'</span>;
  info.region = (request.userLocation.region) ? request.userLocation.region : <span class="hljs-string">'N/A'</span>;
  info.city = (request.userLocation.city) ? request.userLocation.city : <span class="hljs-string">'N/A'</span>;

  info.source = <span class="hljs-string">'Akamai EdgeWorkers'</span>;

  request.respondWith(<span class="hljs-number">200</span>, {}, <span class="hljs-built_in">JSON</span>.stringify({ <span class="hljs-attr">geoInfo</span>: info }));
}
</code></pre>
<p>This code retrieves information about the client's location (e.g., continent, country, zip code, region, and city) and constructs a response containing this information. Let’s walk through what each part of the code does: </p>
<ul>
<li><p>First, the function named <code>onClientRequest</code>, is a standard Akamai EdgeWorkers function that runs when a client makes a request. </p>
</li>
<li><p>Second, the function extracts geolocation information from the <code>request.userLocation</code> object, which is provided by Akamai EdgeWorkers and contains information about the client's location based on their IP address. </p>
</li>
<li><p>Then, if certain geolocation information is not available (e.g., if the client's location cannot be determined), the function assigns default values ('N/A') to the corresponding fields in the info object. </p>
</li>
<li><p>Finally, the function constructs a response containing the extracted geolocation information (info object) and sets the response status code to 200 (OK). </p>
</li>
</ul>
<p>Now it’s time to move on to the second file we need to edit, <code>bundle.json</code>, which includes metadata for the EdgeWorkers script:</p>
<pre><code class="lang-javascript">{
  <span class="hljs-string">"edgeworker-version"</span>: <span class="hljs-string">"0.1"</span>,
  <span class="hljs-string">"description"</span> : <span class="hljs-string">"Reply instantly with a formatted JSON containing location information."</span>
}
</code></pre>
<p>The code in <code>bundle.json</code> provides a description of what the EdgeWorker script in <code>main.js</code> does. It states that the script is designed to respond immediately with a JSON-formatted message containing location information.</p>
<p>Now it’s time for us to create a new version by clicking on the ‘Create version’ button on the bottom of the popup screen, and then click ‘Create version’ again on the bottom of the main property page. This will save your code bundle.</p>
<p><img src="https://lh7-us.googleusercontent.com/bc3Hx0JJ14Bk_ZZ0yklz3EJ_3wsuwYvjBU_wv1BsHzOlW5pfWYlTC1d-_WjXmrKHQ9O1qIO5Cf6ZeaDcYoHKf544-bLwUSYnSL9htyYESqSEo7XLrcyC9ojEJOU4V-NBc-vSAa_KPZhzWDwrSLRery4" alt /></p>
<p><strong>Step 4: Deploy EdgeWorker</strong></p>
<p>At this point, we need to make this live in production by clicking ‘Activate version,’ and make sure you select ‘Production’ as your network. (If you’re testing something out or using a staging environment, feel free to use staging, but note that you’ll have to create another version when you’re ready to deploy to production).</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709574726861/b06f7b48-d428-4735-982b-c4074dfdf011.png" alt class="image--center mx-auto" /></p>
<p>We’re almost done, and at this point, you should see the status changed to ‘Complete’.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709574767810/690f19e7-fcf5-44d5-968b-04506b615714.png" alt class="image--center mx-auto" /></p>
<p>Now all we need to do is modify the property and add a rule that says if the path matches <code>/ew</code>, then it will activate the EdgeWorker. (For real-world applications, you would enter your URL where you need geolocation data here). To do this, you’ll need to navigate to ‘Properties’ in the left panel, search for your property, scroll down, and click on ‘Add Rule.’ Be sure to click ‘Save.’</p>
<p><img src="https://lh7-us.googleusercontent.com/2G-8pUSSjhUOtRkoK52yxW4NURg2SaUtBPUQJ1ATMc6d2gHcDwkEAYIHbZH-grUIr0bVC_a2FDB3ynGIMLRtlEtKVRmeDR7U0e3knjms2kkhC_QEL0AMKmltlddzFh-1YDAset3uUXn3uDLZCG5tGN8" alt /></p>
<p><strong>Step 5: Test EdgeWorker</strong></p>
<p>And that’s it! All that’s left is to test our geolocation EdgeWorker. Go to the URL we set up and append the path <code>/ew</code>.</p>
<p><img src="https://lh7-us.googleusercontent.com/6BOJUzqvoFvieYilO2ixMiDTxpWHr-t0m581TShOGihVZwXNHwquv5omkQuuzAZxtCR-JfJIKuzh6Upu4jdDAPBbPTXzP5IxkOWIB62MK7Nh4EdAUb7Hq_T2OXGS4tV1IEGqUafLOBKZEZ78vGdQ7bk" alt /></p>
<p>In this example, we used Akamai EdgeWorkers to retrieve location data. Now that it’s set up, I can use this data to customize the user experience on this website: I could show targeted ads for TAP Portugal flights to my family in Portugal, or advertise taxi and rideshare services in their area.</p>
<h2 id="heading-next-steps">Next Steps</h2>
<p>There are lots of ways to leverage this tool. For example, a business could target advertisements relevant to my family reunion. Geolocation could also be used to customize the user experience with targeted marketing and promotions in e-commerce. Here are a few more ideas to try if you’re looking for more ways to leverage geolocation using Akamai EdgeWorkers:</p>
<ol>
<li><p>If you’re making a weather app, you could use this geolocation code to display the weather for the user’s location</p>
</li>
<li><p>If you’re making a food delivery app, you could use this geolocation code to determine where to deliver orders</p>
</li>
<li><p>If you’re building an e-commerce app, you could use this geolocation code to advertise best-selling products in the user’s region</p>
</li>
</ol>
<p>For more EdgeWorkers code and tutorials, head to <a target="_blank" href="https://www.edgecompute.live/">edgecompute.live</a>. To find documentation, head to <a target="_blank" href="https://techdocs.akamai.com/edgeworkers/docs/welcome-to-edgeworkers">EdgeWorkers TechDocs</a>.</p>
<p>Thanks for reading! For all things cloud, follow me by clicking the follow button at the top of this page, subscribe to my newsletter below, and follow me on <a target="_blank" href="https://twitter.com/talia_nassi">Twitter</a>!</p>
]]></content:encoded></item><item><title><![CDATA[From Code to Cloud: Unpacking DeveloperWeek]]></title><description><![CDATA[Last week, I attended the DeveloperWeek conference in Oakland, CA.  You’ve likely heard of the industry's largest live virtual expo, but in case you haven’t, DeveloperWeek is a two day conference for developers, devops engineers, and product managers...]]></description><link>https://buildwithtalia.com/from-code-to-cloud-unpacking-developerweek</link><guid isPermaLink="true">https://buildwithtalia.com/from-code-to-cloud-unpacking-developerweek</guid><category><![CDATA[Cloud]]></category><category><![CDATA[conference]]></category><category><![CDATA[vpc]]></category><category><![CDATA[Terraform]]></category><dc:creator><![CDATA[Talia Kohan (Talia Nassi)]]></dc:creator><pubDate>Tue, 27 Feb 2024 17:23:22 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1709054398635/222dfa5c-e0f7-4d83-ab6c-66d4aff9ce9d.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Last week, I attended the DeveloperWeek conference in Oakland, CA.  You’ve likely heard of the industry's largest live virtual expo, but in case you haven’t, DeveloperWeek is a two day conference for developers, devops engineers, and product managers. Last year, I gave a talk on Building Applications with Infrastructure as Code. This year, I was asked to do a demo at the Akamai booth that showcases our <a target="_blank" href="https://buildwithtalia.com/introducing-akamais-virtual-private-cloud">new VPC product</a>. So I thought, why not combine my IaC talk with this new product? So that’s what we did. </p>
<p>The DevWeek conference is different from a lot of other conferences in the industry because it’s focused on tooling and showcases a broad range of vendors. If you’re looking to learn about new tools and upcoming tech, this is the place to go. </p>
<p>The folks who stopped by our booth were a mixed bag. We had a lot of students looking to speak to recruiters, people who had never heard of Akamai (this is common), and DevOps experts who were curious about server configurations and had detailed technical questions. There was a broad spectrum of knowledge among the crowd. </p>
<h2 id="heading-lessons-from-the-farmers-market-analogy">Lessons from the Farmer’s Market Analogy</h2>
<p>My favorite talk at DeveloperWeek was by Billy Thompson on avoiding vendor lock-in, something I have struggled with in the past. He started with an analogy of a farmer’s market. Billy loves the farmer’s market and the variety of vegetables sold there. He goes to all of the stalls and chooses his favorite in-season vegetables at the best cost. What would it be like if he only went to one stand and skipped over the other 30 farmers that are there every week? He would miss out on the diversity of the other farmers, miss out on things that one farmer didn’t offer, or possibly pay more for the same vegetables. </p>
<p>That’s similar to how cloud computing works nowadays. People have tunnel vision and think that cloud is spelled AWS, GCP, or Azure. However, there are more than 20 different cloud providers that offer many of the same services. We, as cloud engineers, need to make decisions that better serve ourselves and our business needs. </p>
<p><img src="https://lh7-us.googleusercontent.com/hnk3qObUBR2KVlLoLcUuEeHVA0KzVdMZD8eXmUjpl3WXqiYbsHcq1qnqniVpQkp6UMetv-ptVt9yr4MOy1-Q5YnPqSI6r19nx5s0mFRwYUc_WS1bdOvUBcu-i4En2qK-WNYqWtrUxjpqLUFYvF96JBk" alt /></p>
<p>Instead of choosing a cloud provider because of its name, there’s a huge benefit in choosing multiple cloud providers whose features suit each of your individual needs. You can pick the things you like from each one just like choosing the vegetables from each farmer that look the best and are in season, all at the right price.</p>
<p>Billy had the audience do an exercise called the Bare Bones Approach. Think of your application. Strip it down to the bare bones. Then, add one layer of supporting functionality at a time. In this approach, you start by identifying the essential, fundamental needs of your application without any preconceived notions or preferences for certain technologies or providers. By doing so, you ensure that each layer of functionality added is purposefully chosen to meet your needs, rather than being influenced by external factors that may not align with the application's objectives. This approach leads to a more tailored, efficient, and effective use of technology that is directly aligned with the application's requirements. This different design philosophy allows for portability. It’s not about fixing something that’s not broken. It’s about fine-tuning your needs and making sure your cloud services meet those exact requirements.</p>
<h2 id="heading-the-vpc-terraform-demo">The VPC Terraform Demo</h2>
<p>Alright, now back to our demo. I did a demo with <a target="_blank" href="https://austingil.com/">Austin Gil</a>, a fellow dev advocate on my team. </p>
<p>We started by explaining the concept of a Virtual Private Cloud (VPC), which is a secure, isolated section of the cloud where you can launch resources within a virtual network you define. This setup offers several benefits, including enhanced security by isolating your computing resources, greater control over your network environment, such as IP address ranges and network gateways, and the ability to create a hybrid environment that extends your on-premise network to the cloud. We also touched on the principles of Infrastructure as Code (IaC) using Terraform, highlighting how it enables the automation and efficient management of infrastructure through code.</p>
<p>To bring our discussion on Virtual Private Clouds (VPC) and Infrastructure as Code (IaC) to life, we demonstrated the deployment of two distinct setups involving two databases. </p>
<p>Our first setup functioned without the confines of a VPC, serving as a baseline to highlight the comparative advantages. Our second setup was deployed within the secure boundaries of a VPC, providing a clear, real-world illustration of the enhanced security and network isolation a VPC can provide. This setup was particularly effective in showcasing how a VPC can safeguard against unauthorized access and maintain the integrity of an internal network.</p>
<p>As I mentioned earlier, we used <a target="_blank" href="https://www.terraform.io/">Terraform</a> for defining and provisioning infrastructure. Terraform configuration files are written in HCL, the HashiCorp Configuration Language; other IaC tools, like AWS CloudFormation, commonly use JSON or YAML. If you’re not familiar with HCL, there are plenty of <a target="_blank" href="https://registry.terraform.io/providers/linode/linode/latest/docs">code registries</a> on the Terraform site to learn from. We deployed two databases with similar configurations, differing primarily in their network settings. The first database was deployed in a standard cloud environment without VPC protections, while the second was securely nestled within a VPC, showcasing the added layer of security and isolation. Both applications connected to these databases were designed to display a Pokémon database, providing a simple but effective way to demonstrate the functionality. </p>
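<p>The full demo code is linked at the end of this post; as a rough sketch of the difference between the two setups (resource names and values here are illustrative, and this assumes the Linode provider’s <code>interface</code> block for VPC attachment), the VPC-backed instance differs from the public one mainly in its network interface:</p>
<pre><code class="lang-plaintext"># Baseline: an instance reachable over the public internet
resource "linode_instance" "db_public" {
  label  = "pokemon-db-public"
  region = "us-iad"
  type   = "g6-standard-2"
  image  = "linode/ubuntu22.04"
}

# VPC-backed: the same instance, but attached to a VPC subnet,
# so the database it hosts is only reachable from inside that subnet
resource "linode_instance" "db_private" {
  label  = "pokemon-db-private"
  region = "us-iad"
  type   = "g6-standard-2"
  image  = "linode/ubuntu22.04"

  interface {
    purpose   = "vpc"
    # assumes a linode_vpc_subnet resource named "demo" elsewhere in the config
    subnet_id = linode_vpc_subnet.demo.id
  }
}
</code></pre>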
<p>This allowed us to show that access to the databases starkly differed; the database in the non-VPC environment was readily accessible, illustrating a potential security vulnerability. In contrast, the database within the VPC remained inaccessible from the outside, effectively demonstrating the VPC's role in network isolation and protection of sensitive data. </p>
<p>The functionality of both versions of the application is identical: each displays a Pokémon database. However, because we set up the second database inside a VPC, we can only see the full details in the application backed by the first database. </p>
<p><img src="https://lh7-us.googleusercontent.com/x7eIBSMWtM8U34N6qwqRdPiBzXa-EA2tiR_9n3VOcpK9GfVSI6kYQSk7X09tEpeyCbWRLKvpRNUx8GoKvgdbePIKChWFBwm4jCQHumLzW6i3U1vE8uXv1znIYRdrFuIheZeT66aFCyPs-NbJ09PnBkY" alt /></p>
<p>When we try to access the second app, we can’t see the database, instead getting an error. Good. We’ve successfully prevented a bad actor from accessing private information. </p>
<p><img src="https://lh7-us.googleusercontent.com/Mf_Irt_0nnBHRpxPhEOfsiefsqw00WyLOGTZCaudw0cqrcLp88r4qNb7Q1IV28L0aV-1hnMF2oJIbFjGagwJF0emx1lSObLKB67NSnzAvZriQ0voxvVwFekrrTfB4mbVc-CcoFlinC8bKj4BpJ6SOoQ" alt /></p>
<p>This demo was our simple way of demonstrating how implementing network isolation layers can protect your application data. In this example, we separated different parts of the network to prevent unauthorized access to the Pokémon database. But this can translate to larger scale operations, like DevOps environments, or compliance requirements. </p>
<p>In a DevOps setup, you may have multiple environments such as development, staging, and production. Each environment requires distinct network configurations and access controls. By leveraging VPCs, you can create separate environments within isolated network boundaries. This ensures that changes made in one environment do not affect others and allows for granular control over network policies.</p>
<p>With regards to compliance and regulatory requirements, industries such as healthcare, finance, and government are subject to stringent compliance regulations regarding data privacy and security. Using VPCs with isolation layers helps organizations adhere to these requirements by segregating sensitive data and workloads from the rest of the network. For instance, you might isolate personally identifiable information (PII) in dedicated subnets with encryption and access controls enforced at the network level. The concept of isolation layers can be used for applications across most sectors. </p>
<p>To try out our demo on your own, head to <a target="_blank" href="https://github.com/AustinGil/linode-vpc-demo">GitHub</a>.</p>
<p>We only had 15 minutes for the demo, and there’s so much more we could have talked about, like <a target="_blank" href="https://buildwithtalia.com/crafting-a-resilient-vpc-landscape-using-terraform">deploying a VPC and dynamically adding subnets using Terraform</a>. </p>
<p>Well, there you have it. It was another successful DeveloperWeek and I hope to be back next year to learn more from the rest of the tech community. </p>
<p>Thanks for reading! For all things cloud, follow me by clicking the follow button at the top of this page, subscribe to my newsletter below, and follow me on <a target="_blank" href="https://twitter.com/talia_nassi">Twitter</a>!</p>
]]></content:encoded></item><item><title><![CDATA[Crafting a Resilient VPC Landscape using Terraform]]></title><description><![CDATA[Last week, we released our Virtual Private Cloud (VPC). Akamai’s VPC offers an isolated network that lets cloud resources privately communicate with each other.
What excites me about this release is the flexibility you have in setting up your VPC. Yo...]]></description><link>https://buildwithtalia.com/crafting-a-resilient-vpc-landscape-using-terraform</link><guid isPermaLink="true">https://buildwithtalia.com/crafting-a-resilient-vpc-landscape-using-terraform</guid><category><![CDATA[vpc]]></category><category><![CDATA[akamai]]></category><category><![CDATA[Terraform]]></category><dc:creator><![CDATA[Talia Kohan (Talia Nassi)]]></dc:creator><pubDate>Tue, 30 Jan 2024 16:41:17 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1706550148690/0f3d6e00-d821-4010-bbef-02d4af13bb32.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Last week, <a target="_blank" href="https://buildwithtalia.com/introducing-akamais-virtual-private-cloud">we released our Virtual Private Cloud</a> (VPC). Akamai’s VPC offers an isolated network that lets cloud resources privately communicate with each other.</p>
<p>What excites me about this release is the flexibility you have in setting up your VPC. You can add compute instances via your cloud provider’s UI, Cloud Manager, developer tools like a CLI, or my personal favorite, infrastructure as code tools like Terraform. In this blog, I’ll take you through the journey of deploying a VPC and dynamically adding subnets using Terraform.</p>
<p>I chose Terraform because you can see all of your configurations for your application in one place, allowing you to replicate resources in another environment, move resources to different accounts, etc. It’s essentially your one source of truth, eliminating configuration drift. If you’re new to Terraform or infrastructure as code, <a target="_blank" href="https://www.youtube.com/watch?v=sF3iY74JpVI&amp;t=1s">this video</a> guides you through creating a compute instance with the Linode Terraform Provider. That should get you on the right track to continue with this tutorial.</p>
<h2 id="heading-prerequisites">Prerequisites</h2>
<p>Before we begin, make sure you have the following:</p>
<ol>
<li><p>A Linode account. If you don't have one, sign up for a <a target="_blank" href="https://www.linode.com/">free Linode account</a>.</p>
</li>
<li><p>A personal access token for Linode’s <a target="_blank" href="https://www.linode.com/docs/api/">v4 API</a>: Terraform uses this to interact with your Linode resources. Follow this guide to <a target="_blank" href="https://www.linode.com/docs/products/tools/api/get-started/">generate your token</a>.</p>
</li>
<li><p>Terraform installed on your local machine. If you don't have it, follow our <a target="_blank" href="https://www.linode.com/docs/guides/how-to-build-your-infrastructure-using-terraform-and-linode/">guide to installing and setting up your environment</a>.</p>
</li>
<li><p>Basic knowledge of how to use the command line.</p>
</li>
</ol>
<h2 id="heading-step-1-create-your-terraform-configuration-file">Step 1: Create your Terraform Configuration File</h2>
<p>Let’s start by setting up the Linode Provider for Terraform. I created a new directory for my Terraform project and made a file called <code>terraform.tf</code>. Instead of building the configuration file from scratch, I utilized the Terraform registry. If you’ve never used an infrastructure as code registry, watch <a target="_blank" href="https://www.youtube.com/watch?v=hEv-el6Pm5A">this video</a> to learn more.</p>
<p>First, find the <a target="_blank" href="https://registry.terraform.io/providers/linode/linode/latest/docs/resources/vpc">Linode VPC</a> resource in the Terraform Registry. On the right-hand side of the page, click ‘Use Provider’ and copy the code from the dropdown into your <code>terraform.tf</code> file. Then, copy and paste the code from the Example Usage section for the VPC.</p>
<p><img src="https://lh7-us.googleusercontent.com/BJwLeC1W2upI8CK1s6RdGoyicMv5_x7IBVdw6RIUmpgjRPNV6uYEXGfBlY8hYJNoY2z00FLQc0P9f-UoUVW6TEJ0tN88IdcZy1vX9oHAvZVpPP3FnoeUZKQV-CLsqz5FqEWrgOUu8g6MeJmEerDDrIk" alt /></p>
<p>Your configuration file should look like this:</p>
<p><em>Note that you’ll need to replace</em> <code>your_api_token</code> <em>with your own personal access token.</em></p>
<pre><code class="lang-plaintext">terraform {
  required_providers {
    linode = {
      source = "linode/linode"
      version = "2.13.0"
    }
  }
}

provider "linode" {
  token = "your_api_token"
}

resource "linode_vpc" "test" {
    label = "test-vpc"
    region = "us-iad"
    description = "My first VPC."
}
</code></pre>
<h2 id="heading-step-2-deploy-your-configuration-file">Step 2: Deploy your Configuration File</h2>
<p>To deploy your configuration file, first run <code>terraform init</code> to initialize the directory and download the Linode provider. Then run <code>terraform plan</code> to preview the changes.</p>
<p>Finally, run <code>terraform apply</code> to create the VPC.</p>
<p><img src="https://lh7-us.googleusercontent.com/cqDa7bHZW4APzKSlmsob4nHTj6Njg1a1zgh1r4nE5Uyf5Q0xvig7qIFEHQW-F-M900bDHYVa-nD8v0c0Kb1tCBAvNHeEKEx5BoW0YOa-vZa8xYAn7KbvzDe2AGfNc264CqvvDnUdu1l1j3C8JaNTv8g" alt /></p>
<p>Now, when you go to your cloud dashboard, you’ll see your VPC listed.</p>
<p><img src="https://lh7-us.googleusercontent.com/_BWuSnJ5cWHz2VOKxEOAuOH9ZB4i5WS2O2VoSGCYby3SqH12eodrx3PWfrNDS7rNRKbmddt5YlzjuyXKyQP8mIEYjnxkEgQNexeBa8mf10aMr5uhlBF6LuSfop9tImi1iaX2uusQ7gWQ4QQKqOwmu0w" alt /></p>
<p>Using Infrastructure as Code for deploying a VPC provides greater control, consistency, agility, and efficiency in managing your cloud infrastructure while reducing the likelihood of human error and enabling faster, more reliable deployments.</p>
<h2 id="heading-step-3-add-subnets-to-your-vpc">Step 3: Add Subnets to your VPC</h2>
<p>Adding subnets to my VPC is an essential aspect of architecting a well-organized, secure, and scalable cloud infrastructure that aligns with the specific needs of my applications and services. By adding subnets, I can logically segment my VPC into smaller, more manageable networks. This segmentation is beneficial for organizing resources based on functionality, security needs, or other considerations. For instance, I might have separate subnets for web servers, databases, and application components.</p>
<p>Subnets also allow me to efficiently manage IP addresses within my VPC. Each subnet operates within its designated IP address range, preventing conflicts and providing a structured approach to IP address allocation. This becomes increasingly important as my infrastructure scales and more resources are deployed.</p>
<p>Subnets act as security boundaries, enabling me to implement different security measures for each subnet based on the sensitivity of the resources they host. For example, I can apply stricter security rules to a database subnet compared to a public-facing web server subnet. This helps in implementing the principle of least privilege.</p>
<p>Subnets play a crucial role in routing and optimizing network traffic within the VPC. I can configure routing tables to direct traffic between different subnets based on specific requirements. This flexibility allows me to design the most efficient communication paths for my applications.</p>
<p>Now, let’s add a couple of subnets to the VPC. Edit your <code>terraform.tf</code> file and add a <code>vpc_subnet</code> resource block for each subnet.</p>
<p><em>Note that you’ll need to replace</em> <code>your_vpc_id</code><em>. You can find this in your cloud dashboard, shown below.</em></p>
<p><img src="https://lh7-us.googleusercontent.com/FSD-Rq80OgEI7NCt9vblWQtPieVwWc1XG9X3Skn8VVMFsoZbTj6xhJVAJqJQoT_2kmFMemy0-kjACE0mBiLixUEhM9s78YelZbUOgbscKIsvB4dK2iIYWchiwYvfrJHsp43jCmbLnGz2tzAuRrC8VJM" alt /></p>
<pre><code class="lang-plaintext">resource "linode_vpc_subnet" "vpc-subnet-terraform-subnet-01" {
    vpc_id = "your_vpc_id"
    label = "vpc-subnet-terraform-subnet-01"
    ipv4 = "192.168.1.0/24"
}

resource "linode_vpc_subnet" "vpc-subnet-terraform-subnet-02" {
    vpc_id = "your_vpc_id"
    label = "vpc-subnet-terraform-subnet-02"
    ipv4 = "10.0.0.0/24"
}
</code></pre>
<p>Run <code>terraform apply</code> to apply the changes.</p>
<p><img src="https://lh7-us.googleusercontent.com/fYId8pwdRfDx3NtxLmPPhdDmbB3JRIIZt-Atf_ggvdaNkurqpag-QPvToowuV7Yxk_2iFrZW092MB6fBX3twlFZYvbo9Z85cRk38Scja9kFH9RZnGmO1b595PO-NqC5blHKYUbuiD-TjYsdHW6mtn3I" alt /></p>
<p>Now your VPC has two subnets. Navigate to the Cloud Manager to see the changes:</p>
<p><img src="https://lh7-us.googleusercontent.com/zFeEyDQadhSx9FoD-ve3adz_pj1n8sI7wMC7qV5VMIyz2sUDryvHdaygqXAgkdDgJZZpygZgeobvhmlekDUchxut8vcFV0ZpWgVAFeZg6-EgdbQUMQ0pJ3c67pNElPkJM1IQGQvxk4y0y2rjMJzi7I4" alt /></p>
<p><em>Notes:</em></p>
<ol>
<li><p><em>You should have one</em> <code>vpc_subnet</code> <em>resource block per subnet</em></p>
</li>
<li><p><em>The ipv4 ranges of subnets within the same VPC must not overlap</em></p>
</li>
</ol>
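<p>One refinement worth noting: because the VPC from Step 1 lives in the same configuration, you can reference its ID directly instead of hardcoding it. This also lets Terraform infer the dependency between the two resources:</p>
<pre><code class="lang-plaintext">resource "linode_vpc_subnet" "vpc-subnet-terraform-subnet-01" {
    # references the linode_vpc resource named "test" from Step 1
    vpc_id = linode_vpc.test.id
    label = "vpc-subnet-terraform-subnet-01"
    ipv4 = "192.168.1.0/24"
}
</code></pre>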
<h2 id="heading-differentiating-between-akamais-vpc-and-the-hyperscalers">Differentiating between Akamai’s VPC and the Hyperscalers</h2>
<p>One of the standout features of Akamai's VPC that I find truly remarkable is its flexibility when it comes to defining subnets. In many traditional hyperscalers, such as AWS and Azure, there's a rigid requirement that all subnets within a VPC must share the same <a target="_blank" href="https://datatracker.ietf.org/doc/html/rfc1918#autoid-3">RFC1918</a> range or block. This essentially means that once you set a top-level CIDR range for your VPC, all subnets within it are confined to that particular address space.</p>
<p>However, Akamai takes a different and, in my opinion, more versatile approach. With Akamai’s VPC, each subnet is allowed to exist in its own RFC1918 range or block. This means that you have the freedom to design your network architecture in a way that best suits your specific use case. For instance, you can have one subnet operating in the 192.168/16 space and another in the 10/8 space, all within the same VPC.</p>
<p>This flexibility is a game-changer, especially as your application or business needs evolve. Unlike the constraints imposed by some hyperscalers, Akamai’s VPC enables dynamic addition of subnets with different ranges. So, if your requirements change or your application scales, you can integrate new subnets without being bound by a predetermined top-level CIDR range. It’s a level of adaptability that caters to the diverse networking demands of modern cloud architectures, and it sets Akamai’s VPC apart.</p>
<h2 id="heading-more-resources">More Resources</h2>
<p>Connect with the Akamai team and fellow users in the <a target="_blank" href="https://discuss.akamai.com/c/beta-program/vpc-beta/57">Akamai VPC discussion group</a> dedicated to our VPC feature (click <a target="_blank" href="https://discuss.akamai.com/">here</a> to sign up if you’re not a member).</p>
<p>You can also check out our <a target="_blank" href="https://www.linode.com/docs/products/networking/vpc/">VPC documentation</a> for more information and help getting started.</p>
<p>Thanks for reading! For all things cloud, follow me by clicking the follow button at the top of this page, subscribe to my newsletter below, and follow me on <a target="_blank" href="https://twitter.com/talia_nassi">Twitter</a>!</p>
]]></content:encoded></item><item><title><![CDATA[Introducing Akamai's Virtual Private Cloud]]></title><description><![CDATA[We are excited to announce the release of our Virtual Private Cloud (VPC). This is a significant addition to the Akamai Cloud platform and underscores our commitment to providing developers and customers with advanced, secure, and flexible solutions....]]></description><link>https://buildwithtalia.com/introducing-akamais-virtual-private-cloud</link><guid isPermaLink="true">https://buildwithtalia.com/introducing-akamais-virtual-private-cloud</guid><category><![CDATA[akamai]]></category><category><![CDATA[vpc]]></category><dc:creator><![CDATA[Talia Kohan (Talia Nassi)]]></dc:creator><pubDate>Tue, 23 Jan 2024 17:21:46 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1705691565660/6c9f8c5e-24c1-40d8-9dec-206d12352d92.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We are excited to announce the release of our <a target="_blank" href="https://www.linode.com/docs/products/networking/vpc/">Virtual Private Cloud (VPC)</a>. This is a significant addition to the Akamai Cloud platform and underscores our commitment to providing developers and customers with advanced, secure, and flexible solutions.</p>
<p>If you’re an Akamai customer, you can now create and manage VPCs via <a target="_blank" href="https://cloud.linode.com/">Cloud Manager</a>, CLI, API, Terraform, Ansible, Python/Golang SDKs, Packer, and Salt, all at no additional cost.</p>
<h2 id="heading-what-is-the-akamai-vpc">What is the Akamai VPC?</h2>
<p>A VPC is an isolated network within the Akamai Cloud. It’s designed to enable cloud resources to communicate privately, manage access to the public internet, and connect to other private networks. It's a cornerstone feature for developers, allowing them to segment traffic and build distributed, multitier web applications with enhanced security and efficiency.</p>
<p>Previously, to achieve network isolation on Akamai Connected Cloud, customers had to rely on VLAN technology, which was limited in functionality compared to a true VPC, or use another cloud provider. Now, you don’t have to leave Akamai to achieve true layer 3 network isolation.</p>
<h2 id="heading-how-does-a-virtual-private-cloud-work">How does a Virtual Private Cloud work?</h2>
<p>A virtual private cloud (VPC) is like having your own secure, isolated section of the internet where you can store data, run programs, and connect different parts of your online systems. Imagine a VPC as a private neighborhood within a city. In this city (which represents the internet), there are many houses and streets (which represent various servers and data centers). When you create a VPC, it's like building a fence around a specific area of this city just for yourself. Inside this fenced area, you can set up your own houses (servers), streets (networks), and even security guards (firewalls) to protect your neighborhood.</p>
<p>The VPC allows you to control who can enter (access) your neighborhood and how they move around inside it. You can decide which houses (servers) can talk to each other and which ones are off-limits to outsiders. This setup ensures that your data and operations are kept separate and secure from the rest of the city (internet), providing you with a safe and private space to run your online activities without interference from others.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1705691700120/c2f05f8d-8c54-4b9d-a308-8174722e6eb9.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-why-use-a-vpc">Why use a VPC?</h2>
<p>We know many of you need to secure sensitive information while allowing your team to collaborate, ensure certain traffic only flows internally between VMs to avoid transfer charges, or build an environment for a multi-tiered web application. Our VPC lets you manage all of these scenarios on our secure and reliable platform. Let’s take a look at some of the benefits.</p>
<p>Some benefits include:</p>
<ul>
<li><p><strong>Easy Access and Management</strong>: As an Akamai customer, you can create and manage VPCs through various tools like Cloud Manager, CLI, API, Terraform, Ansible, Python/Golang SDKs, Packer, and Salt at no additional cost. This flexibility ensures that you can integrate the VPC into your workflow effortlessly.</p>
</li>
<li><p><strong>Security and Isolation</strong>: Akamai is known for security, and keeping your cloud resources in our secure, isolated environment significantly reduces the risk of unauthorized access and data breaches. You control who accesses your data and applications, protecting against ransomware attacks.</p>
</li>
<li><p><strong>Tailored Network Performance</strong>: In a VPC, you can optimize traffic flow for better performance, ensuring that your applications run smoothly and efficiently. You can set up the network configuration, choose which services and applications to run, and manage resources according to your specific needs. This flexibility allows for better customization and control over your online operations.</p>
</li>
<li><p><strong>Cost Effectiveness</strong>: A VPC is a cost-effective solution. It eliminates the need for physical hardware and lets you allocate resources more efficiently, trimming operational costs and improving your bottom line.</p>
</li>
<li><p><strong>Scalability</strong>: Akamai’s VPC offers scalability, meaning you can easily adjust resources like storage and computing power as your needs change. If your business grows, you can quickly scale up to accommodate increased demand. Our new <a target="_blank" href="https://www.linode.com/blog/compute/introducing-warm-migrations/">warm migrations</a> also enable you to resize VMs with downtime under one minute.</p>
</li>
</ul>
<p>VPCs provide businesses with a versatile and secure environment for a variety of critical functions, from safeguarding sensitive data to creating sophisticated multi-layered applications. By leveraging the capabilities of a VPC, businesses can significantly enhance the security, efficiency, and scalability of their digital operations.</p>
<h2 id="heading-getting-started-with-vpcs">Getting Started with VPCs</h2>
<p>VPCs are accessible through our API and Cloud Manager as well as <a target="_blank" href="https://github.com/linode/linode_api4-python/releases/tag/v5.10.0">Python</a>, <a target="_blank" href="https://github.com/linode/linodego/releases/tag/v1.25.0">Go</a>, <a target="_blank" href="https://github.com/linode/packer-plugin-linode/releases/tag/v1.2.0">Packer</a>, <a target="_blank" href="https://github.com/linode/ansible_linode/releases/tag/v0.22.0">Ansible</a>, and <a target="_blank" href="https://github.com/linode/terraform-provider-linode/releases/tag/v2.10.0">Terraform</a>. VPCs are available in a majority of our core compute regions (check our <a target="_blank" href="https://www.linode.com/docs/products/networking/vpc/">VPC documentation</a> for a full list).</p>
<p>In <a target="_blank" href="https://cloud.linode.com/">Cloud Manager</a>, you will see VPC as an available option in the left sidebar. You will also see the VPC option when creating a new compute instance.</p>
<p><img src="https://lh7-us.googleusercontent.com/weoJAMlB_ybBtM8bhYqRpEI20bJLDhJVaEF7hzA3eaZwFF1E1lzFBzJIf94fott1lBmm3656RFCA10EmiEgv0wolJ76ekrISXBfcdstpukwKI1_DYUiaM1JF9ZBJM_DCv0btZhIwmgPNX0csVhR13jY" alt="VPC Creation" /></p>
<p>VPCs can be used at no additional cost, and this will not change after the functionality enters general availability. The resources deployed within a VPC are, however, billed at the standard rate.</p>
<p>You can learn more about how an Akamai VPC can help your team securely store sensitive data, streamline your application development processes, and efficiently manage network traffic on our <a target="_blank" href="https://www.linode.com/docs/products/networking/vpc/">VPC documentation page.</a></p>
<p>Thanks for reading! For all things cloud, follow me by clicking the follow button at the top of this page, subscribe to my newsletter below, and follow me on <a target="_blank" href="https://twitter.com/talia_nassi">Twitter</a>!</p>
]]></content:encoded></item></channel></rss>