<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Benjamin's Blog]]></title><description><![CDATA[Professional geek. Genius in training.]]></description><link>https://ben.hutchins.co/</link><image><url>https://ben.hutchins.co/favicon.png</url><title>Benjamin&apos;s Blog</title><link>https://ben.hutchins.co/</link></image><generator>Ghost 4.1</generator><lastBuildDate>Tue, 21 Apr 2026 01:28:26 GMT</lastBuildDate><atom:link href="https://ben.hutchins.co/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[(Almost) Every technical decision I endorse or regret, scaling a health tech startup from bootstrap to enterprise]]></title><description><![CDATA[<p>I was the co-founder and Chief Technology Officer of a <a href="https://www.patientpal.com">startup</a>. I built and scaled a health-tech SaaS from bootstrap to its sale over four years, and then after joining the acquiring company I worked to expand the service to support enterprise clients over three years. I made some core</p>]]></description><link>https://ben.hutchins.co/almost-every-technical-decision-i-endorse-or-regret-scaling-a-health-tech-startup-from-bootstrap/</link><guid isPermaLink="false">67d20a1559ac8011c8fa554c</guid><category><![CDATA[Technical Thoughts & Notes]]></category><dc:creator><![CDATA[Benjamin Hutchins]]></dc:creator><pubDate>Wed, 12 Mar 2025 22:26:30 GMT</pubDate><content:encoded><![CDATA[<p>I was the co-founder and Chief Technology Officer of a <a href="https://www.patientpal.com">startup</a>. I built and scaled a health-tech SaaS from bootstrap to its sale over four years, and then after joining the acquiring company I worked to expand the service to support enterprise clients over three years. 
I made some core decisions that the company had to stick to, for better or worse, these past seven years. This post will outline the major technical decisions I made&#x2014;what worked, what didn&#x2019;t, and what I&#x2019;d do differently. This idea was inspired by <a href="https://cep.dev/posts/every-infrastructure-decision-i-endorse-or-regret-after-4-years-running-infrastructure-at-a-startup/">Jack Lindamood</a>; I read his post and thought it was a creative way to share some lessons learned.</p><h2 id="aws">AWS</h2><h3 id="picked-aws-over-other-cloud-services">Picked AWS over other cloud services</h3><p>&#x1F7E9; Endorse</p><p>When starting, I had significant experience using <a href="https://aws.amazon.com">AWS</a> from past roles, and I didn&apos;t really have the time to ramp up on another provider. AWS was the default choice. Since then, AWS has proven to be a great choice. The tooling is superb. Support is reliable. Hiring engineers with experience is easy. AWS provided great stability. In seven years, we only had two downtime incidents, both of which could have been avoided with better multi-AZ replication.</p><h3 id="dynamodb-as-our-primary-database">DynamoDB as our primary database</h3><p>&#x1F7E5; Regret</p><p>I wanted to design a completely serverless service. Serverless SQL databases were not available seven years ago. After dealing with <a href="https://ben.hutchins.co/databases-are-like-a-delivery-service/">MongoDB&#x2019;s manual server management, sharding, and scaling issues,</a> I decided to give <a href="https://aws.amazon.com/dynamodb/">DynamoDB</a> a chance. I had never used DynamoDB, but I built a POC and saw performance was good. This was before adding the use of <a href="https://aws.amazon.com/dynamodbaccelerator/">DynamoDB Accelerator (DAX)</a>, which I knew was available to further accelerate the database if it became necessary. DynamoDB it was.</p><p>DynamoDB has proven to be a powerful and performant database. 
When optimized with the correct indexes, DynamoDB is lightning fast, and for most of our application it has worked well. It completely fails in use cases where you cannot make predetermined, optimized indexes for queries. Supporting data tables with advanced filtering and searching is nearly impossible at scale with DynamoDB. Any use of a scan in production is impossible to scale. To support more complex searching, we added an Elasticsearch service and hooked into DynamoDB streams to ensure we kept Elasticsearch up-to-date.</p><p>DynamoDB&#x2019;s official SDK from AWS is also limited. It is similar to most direct database libraries and does not attempt to enter the realm of an ORM. We originally used <a href="https://github.com/dynamoose/dynamoose">Dynamoose</a> as our ORM, but its TypeScript support was limited, so I eventually created and open-sourced a more powerful DynamoDB library designed for TypeScript, called <a href="https://github.com/benhutchins/dyngoose/">Dyngoose</a>.</p><p>With a useful ORM and Elasticsearch alongside DynamoDB, we were able to handle almost every use case. The main issue became development time: many features that would be a simple JOIN in SQL became difficult with DynamoDB. It often required loading a record to get a pointer to another record you needed to load. This can drive down DynamoDB&#x2019;s performance significantly, and the benefits evaporate. One use case that completely failed with DynamoDB is reporting.</p><p>Reporting isn&#x2019;t always the highest priority feature for a startup; delivering features is typically considered more vital. Businesses want to see the value your product and service delivers, and without the appropriate reporting capabilities you cannot truly show your clients that value. DynamoDB and Elasticsearch are both terrible for reporting. 
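</p><p>As a rough sketch of the streams-based sync described above (not our production code): the event shape follows DynamoDB Streams records, while <code>indexDocument</code> stands in for the real search-client call and the <code>id</code> key is a hypothetical attribute name.</p>

```typescript
// Minimal subset of the DynamoDB Streams attribute format.
type AttributeValue = { S?: string; N?: string; BOOL?: boolean };

// Convert a DynamoDB-typed image into a plain document for indexing.
function unmarshal(image: Record<string, AttributeValue>): Record<string, unknown> {
  const doc: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(image)) {
    if (value.S !== undefined) doc[key] = value.S;
    else if (value.N !== undefined) doc[key] = Number(value.N); // numbers arrive as strings
    else if (value.BOOL !== undefined) doc[key] = value.BOOL;
  }
  return doc;
}

interface StreamRecord {
  eventName: "INSERT" | "MODIFY" | "REMOVE";
  dynamodb: {
    Keys: Record<string, AttributeValue>;
    NewImage?: Record<string, AttributeValue>;
  };
}

// Stream-triggered Lambda: mirror every table change into the search index.
async function syncHandler(
  event: { Records: StreamRecord[] },
  indexDocument: (id: string, doc: Record<string, unknown> | null) => Promise<void>,
): Promise<void> {
  for (const record of event.Records) {
    const id = record.dynamodb.Keys.id?.S ?? "";
    // A REMOVE event deletes the document; INSERT/MODIFY upsert it.
    const doc = record.eventName === "REMOVE" ? null : unmarshal(record.dynamodb.NewImage ?? {});
    await indexDocument(id, doc);
  }
}
```

<p>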
Eventually, you may need to build in reporting capabilities.</p><p>DynamoDB does not position itself to be the primary database for large SaaS products. This decision came down to my preference for a serverless database. Today, weighing it all, I do regret the decision. DynamoDB continues to perform admirably for us, but I believe PostgreSQL would have been the better choice, and today there are great services like <a href="https://neon.tech/">Neon</a>, <a href="https://supabase.com/">Supabase</a>, and <a href="https://aws.amazon.com/rds/aurora/serverless/">AWS Aurora</a> which make scaling a database easier than ever. I believe an optimized Postgres database with the benefit of appropriate caching in something like DynamoDB would have performed as well and allowed us to build a more useful product for our customers.</p><h3 id="using-lambdas-for-all-of-our-apis">Using Lambdas for all of our APIs</h3><p>&#x1F7E9; Endorse</p><p>Although the <a href="https://serverless.com">Serverless framework</a> did not work (more below), the serverless approach of using <a href="https://aws.amazon.com/lambda/">Lambda</a> for all of our API endpoints did. Lambda is AWS&#x2019;s Function-as-a-Service offering. Every one of our APIs is processed via a Lambda behind an API Gateway. Our service does not run many CPU-intensive processes (our biggest being our ETL process), and most of the service was automated via events (e.g., webhooks and schedules). It was a perfect use case for Lambdas, and to this day, even with supporting hundreds of thousands of patients every month, the operating cost for the service is minimal and performance is great.</p><p>Using Lambdas provided several key benefits beyond reduced operating cost:</p><ol><li>Significantly lower infrastructure management costs. I didn&#x2019;t need to spend time managing servers or optimizing load balancers. I never had to deal with memory leaks or runaway processes bringing down a server.</li><li>Consistent performance. 
Our application performed reliably, for all users, under almost every workload we ever encountered. The service scaled instantly; it was never down for 10 minutes while waiting for another server to come up and enter the load balancer&#x2019;s group of healthy targets.</li></ol><p>There are certain technical considerations and limitations that are imposed upon you when creating an API backed entirely by API Gateway and Lambda. Limits such as:</p><ol><li>Maximum of 30 seconds to respond. API Gateway restricts your Lambdas to responding in 30 seconds, known as the &#x201C;Maximum integration timeout&#x201D;. 30 seconds is plenty of time for 99% of requests; however, it means certain processes (e.g., exports) become difficult. Generally, this timeout forced us to design performant processes. We rarely had timeouts, but when we did, it was nearly always a mistake in how we performed a DynamoDB query that caused it to fall back to a table scan. This timeout prevented runaway processes and forced us to review performance issues early on. To handle tasks longer than 30 seconds, we designed asynchronous workflows&#x2014;either queueing jobs or using WebSockets for real-time communication.</li><li>Maximum of 15 minutes runtime. When you have a Lambda that runs from a non-API Gateway event, you typically have up to 900 seconds to run. This is plenty of time for typical workloads. 
To handle tasks longer than 15 minutes, we utilized <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definitions.html">ECS Tasks</a> on <a href="https://aws.amazon.com/fargate/">Fargate</a>.</li></ol><p>Designing our service to be entirely FaaS has proven to be a good call: the service continues to scale efficiently, run reliably, and remain easy to maintain.</p><h3 id="api-gateway-for-websockets">API Gateway for WebSockets</h3><p>&#x1F7E5; Regret</p><p>Several years into building the application, we kept finding use cases where we needed asynchronous requests that would take longer than API Gateway&#x2019;s 30-second HTTP timeout. These were processes where we could not directly control the duration, such as talking to a third-party API or generating export files.</p><p>To support these requests, we added the use of WebSockets via API Gateway. We had been using API Gateway for all our HTTP/REST API endpoints and thought API Gateway would be up to the challenge of supporting WebSockets. API Gateway implements WebSocket in a Request &#x2192; Response format, similar to a typical HTTP request. When you want to push a message to an open WebSocket connection, though, API Gateway falls short.</p><p>For simple request &#x2192; response uses of WebSockets, API Gateway can kick off a Lambda and you can respond to the request much as you would a regular HTTP request. This works well for requests where the response only goes to the user who made the request. Of course, WebSockets can be used for so much more. We wanted to push updates to users&#x2019; browsers in real time to support new features.</p><p>API Gateway maintains the WebSocket connections for you. To asynchronously send a message to an open connection, you use the API Gateway Management API. For every WebSocket message you want to send, you need to perform a request to an HTTP API. The API Gateway Management API also doesn&#x2019;t support sending a message to a batch of connections. 
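</p><p>Fanning a single event out therefore costs one HTTP call per connection; all you can do client-side is bound the concurrency. A minimal sketch (<code>post</code> stands in for a PostToConnection call; the concurrency limit of 50 is an arbitrary example):</p>

```typescript
// Split a list of connection ids into fixed-size batches.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// Push one message to every open connection, a batch at a time.
async function broadcast(
  connectionIds: string[],
  post: (connectionId: string) => Promise<void>,
  concurrency = 50,
): Promise<void> {
  for (const batch of chunk(connectionIds, concurrency)) {
    // Still one request per connection; failed (stale) connections are skipped.
    await Promise.all(batch.map((id) => post(id).catch(() => undefined)));
  }
}
```

<p>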
This makes for a very slow &#x201C;realtime&#x201D; messaging system: without batch support from API Gateway, notifying a few thousand users of a single event means thousands of individual requests.</p><p>WebSockets are great, and I will continue to utilize them in the future. However, I&#x2019;d most likely self-host a WebSocket server using <a href="https://github.com/websockets/ws">ws</a> as an <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_services.html">ECS Service</a> behind a <a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html">NLB</a>.</p><h3 id="elastic-container-service">Elastic Container Service</h3><p>&#x1F7E9; Endorse</p><p>Our serverless application eventually needed longer-running services that required more intensive processing. To support these processes, we looked at ECS. While Kubernetes is spectacular for deploying services, we wanted a simple solution for running jobs. The obvious choice for us was ECS.</p><p>We started running ECS on Fargate, which allowed us to scale easily and get an idea of our usage as we continued to deploy more functionality. Eventually, we moved to using EC2-backed ECS clusters, which offered several benefits for us.</p><p>One thing we did have to build was a custom solution for long-running ECS tasks. Tasks that fail to shut down properly could hang indefinitely. We introduced a scheduled Lambda which looks for tasks that have stalled, kills them, and raises an alarm.</p><p>ECS made deploying task-based Docker containers incredibly easy to manage and scale.</p><h3 id="a-single-aws-account">A single AWS Account</h3><p>&#x1F7E5; Regret</p><p>We remained on one AWS account for far too long. 
We always had strong IaC; even with the ability to automate your infrastructure, it was easier to simply have a shared AWS account for our various test, QA, and production environments.</p><p>I now strongly believe the benefits of isolating environments in separate AWS accounts are too significant to pass up.</p><h3 id="systems-manager">Systems Manager</h3><p>&#x1F7E9; Endorse</p><p><a href="https://aws.amazon.com/systems-manager/">AWS Systems Manager</a> allowed us, with incredible ease, to enable remote management and access to our servers. While our main API was entirely serverless, we eventually began to utilize ECS backed by an EC2 cluster. Systems Manager allows secure access to servers, which enabled significantly easier debugging when an ECS task failed.</p><h3 id="cloudformation">CloudFormation</h3><p>&#x1F7E5; Regret</p><p>At first, we utilized Serverless to deploy all our cloud resources, and under the hood it used <a href="https://aws.amazon.com/cloudformation/">CloudFormation</a>. Eventually, we ran into issues with the way Serverless handled deployments: occasionally it determines it needs to destroy a stack and recreate it, so we had to move our persistent resources to an independent CloudFormation stack. We never really looked into better options, and I have regretted it since, many times.</p><p>When we ported our resources out of our <code>serverless.yml</code> file, I found a library called <a href="https://github.com/bright/cloudform">cloudform</a> that allowed us to build resources with strict typing. This was before tools like <a href="https://aws.amazon.com/cdk/">CDK</a> were available, but Terraform was very much an option. We continue to live with restrictions imposed by CloudFormation.</p><p>CloudFormation is an unforgiving black box. 
<a href="https://sst.dev/blog/sst-v3/">I am not the only one to realize this.</a> With limited tools for testing or validating deployments locally, you often rely heavily on deploying to a staging environment to test changes. Make a mistake, though, and your stack will have to roll back, a fun process that can take anywhere between five minutes and five hours with very little rationale as to why.</p><p>Today, I would utilize <a href="https://www.pulumi.com/">Pulumi</a>, which can leverage <a href="https://www.terraform.io/">Terraform</a>&#x2019;s provider ecosystem. I find Terraform to be a tedious endeavor; Pulumi, however, provides useful constructs for managing resources and intelligent defaults you do not get with CloudFormation or native Terraform. <a href="https://sst.dev">sst.dev</a> v3 is built on top of Pulumi and provides even more developer-friendly constructs that make it a great choice for managing applications.</p><h2 id="frameworks">Frameworks</h2><h3 id="angular">Angular</h3><p>&#x1F7E9; Endorse</p><p>I picked <a href="https://angular.io">Angular</a> over alternatives early on for very much the same reason I picked AWS&#x2014;I knew Angular. Seven years ago, Angular and React had similar popularity, and there wasn&#x2019;t an obvious choice like there is today. React was a great utility; however, it didn&#x2019;t come packaged with all the tooling and support Angular did, making React harder to build with when starting from scratch. React felt better suited to large enterprises who could put a lot of manpower into building applications (e.g., Facebook), while Angular felt better suited to smaller teams. That made my choice easier: as Angular came out of the box with great tooling for testing, building, deploying, routing, and internationalization, I stuck with Angular.</p><p>Angular proved to be a powerful tool and has gotten better with each subsequent release. 
I still believe it to be a powerful framework, and it is incredibly easy to get started with. While I would still very much like to say I&#x2019;d start my next project with Angular, the industry has changed. Angular has fallen in popularity and has seen many third-party extensions stagnate and lose support for newer versions of Angular. In the past seven years I&#x2019;ve seen the rise and fall of <a href="https://vuejs.org/">Vue.js</a>, and today <a href="https://nextjs.org/">Next.js</a> has come to dominate. Next.js provides similar tooling to what Angular comes packaged with, and Next.js and React enjoy widespread support.</p><h3 id="material">Material</h3><p>&#x1F7E7; Regret-ish</p><p>I am willing to admit, my skillset when it comes to design is limited. I opted for <a href="https://material.angular.io/">Material</a> because it worked well with Angular and looked fine to me. Material was a great choice for our patient-facing applications: it is familiar to users, allowing them to flow through the experiences smoothly. Material is a mobile-first responsive design, which does not work as well for our clinician-facing consoles, which are used on desktops and need an optimized experience too.</p><p>I would likely have regretted using multiple UI libraries for different applications; that would have complicated the experience for the developers, who were working across several Angular applications. Instead, I&#x2019;d look for a UI framework that optimizes well for both desktop and mobile.</p><h3 id="serverless">Serverless</h3><p>&#x1F7E5; Regret</p><p><a href="https://www.serverless.com/">Serverless</a> is a utility to help build entirely serverless applications and deploy them to AWS Lambdas. It has expanded significantly over the past seven years; however, almost everything it does has restrictions that we&apos;ve had to work around. 
This is partially due to the limitations of the technology Serverless relies on, specifically CloudFormation. Serverless attempts to help you define your API Gateway routes and connect them to Lambdas; it then tries to package your code into zip archives and manage deployments and updates for you.</p><p>Serverless doesn&apos;t offer official local development support; a popular third-party plugin, <a href="https://github.com/dherault/serverless-offline">serverless-offline</a>, provides most of the essential functionality and became vital to our development process. It lacks file watch support, which we implemented as a <a href="https://gulpjs.com/">gulp</a> task. Early on, our watch process worked well; at some point, updates were made to Serverless that affected serverless-offline, and our watch process had to restart the serverless-offline process with every file change, significantly slowing the development experience.</p><p>Serverless doesn&apos;t use <a href="https://esbuild.github.io/">esbuild</a> or <a href="https://webpack.js.org/">webpack</a> to build your Lambdas. It zips up your entire project and tries to determine the appropriate dependencies to include, doing so per-function. This was too slow. We eventually had to build our own packaging system that would generate the packaged zips for functions to optimize build times and reduce deployment package sizes.</p><p>Serverless doesn&apos;t deploy custom resources well. To reduce the risk of Serverless attempting to destroy persistent resources (i.e., our database), we created our own CloudFormation templates and handled the deployments of custom resources entirely outside the purview of Serverless.</p><p>Today, I would utilize <a href="https://sst.dev">sst.dev</a> for developing and deploying Lambdas. 
It provides <a href="https://sst.dev/docs/live/">a practical solution for development</a>, it packages functions with <a href="https://esbuild.github.io/">esbuild</a>, and it manages resources with <a href="https://www.pulumi.com/">Pulumi</a>.</p><h3 id="typescript-for-our-apis">TypeScript for our APIs</h3><p>&#x1F7E9; Endorse</p><p>When starting, I was a strong JavaScript and Python developer, having utilized both for years working on machine learning R&amp;D contracts for the Marines and Navy. I thought hard about what to use for our APIs and decided to write them in TypeScript, compiled to run on Node.js. My biggest reason for this was that I knew this would be a frontend-heavy service: we&apos;d be designing patient portals and wanted superb patient experiences, so the first engineers I&apos;d be hiring would have to be strong frontend developers. I wanted to ensure anyone I hired who may have only frontend experience could still work as a full-stack engineer on the team. I was not significantly worried about application performance; I planned to design the application to be serverless and knew that even if Node.js was slower than Python it wouldn&apos;t make the application feel slow. TypeScript allowed us to have the same language for the frontend and backend of our application. My theory held true: several engineers who came in with heavy frontend experience found it easy to work on the full application stack, in large part because they were already extremely familiar with the language. While I still love Python, the benefits for a smaller team to empower everyone to work as a full-stack engineer are significant. TypeScript has grown in popularity and is now significantly more popular than vanilla JavaScript. 
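</p><p>A small illustration of that shared-language benefit: one interface can type an API response for both the Lambda that produces it and the frontend code that consumes it. All names here are hypothetical.</p>

```typescript
// Shared type, importable by both backend and frontend code in the monorepo.
interface PatientSummary {
  id: string;
  name: string;
  upcomingAppointments: number;
}

// Backend: a Lambda-style handler returning the shared type as JSON.
function getPatientSummary(id: string): { statusCode: number; body: string } {
  const summary: PatientSummary = { id, name: "Jane Doe", upcomingAppointments: 2 };
  return { statusCode: 200, body: JSON.stringify(summary) };
}

// Frontend: the same interface types the parsed response, so renaming a field
// breaks compilation on both sides at once instead of failing at runtime.
function renderSummary(json: string): string {
  const summary: PatientSummary = JSON.parse(json);
  return `${summary.name} (${summary.upcomingAppointments} upcoming)`;
}
```

<p>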
Performance of JavaScript has improved significantly and is getting better with runtimes like <a href="https://deno.com">Deno</a> and <a href="https://bun.sh/">Bun</a>, which <a href="https://medium.com/deno-the-complete-reference/hello-world-performance-bun-express-vs-python-fast-api-dc3c00960981">approach or beat Python performance</a> in many use cases.</p><p>There is significant value in a less complicated stack that empowers more engineers to grow into efficient full-stack contributors.</p><h2 id="process">Process</h2><h3 id="prioritizing-building-the-product">Prioritizing building the product</h3><p>&#x1F7E9; Endorse</p><p>Many of the decisions, both technical and business, focused on delivering value to our customers and onboarding new customers. Time spent on infrastructure maintenance and technical debt was a waste of resources. Many of the decisions, such as utilizing TypeScript and going for an entirely serverless application architecture, were made to reduce the overall time I&#x2019;d spend on onboarding new hires and managing the infrastructure. To this day, I reject most of the decisions where I did not prioritize reducing effort towards delivering value, such as using GitLab over GitHub. In a bootstrapped startup, you have very limited resources, and the most limited of all is your time.</p><p>My suggestion is to make decisions that will drive team efficiency.</p><h3 id="monorepo">Monorepo</h3><p>&#x1F7E9; Endorse</p><p>Early on, we used a monorepo; eventually we decided to split up the projects into independent repositories. The added complexity of managing releases and end-to-end testing across repositories, plus the additional developer overhead, was not worth it. After several years, we ported back to a monorepo using <a href="https://nx.dev/">Nx</a>. 
Today, I would use <a href="https://turbo.build/">Turborepo</a> over Nx; I prefer how it manages project dependencies independently.</p><h3 id="a-restful-api">A RESTful API</h3><p>&#x1F7E7; Regret-ish</p><p>As our Angular applications needed data, we needed an API. As we were using Lambdas behind API Gateway, the obvious option was a REST API. REST APIs are wonderful, but it becomes difficult to create useful endpoints that can fully bootstrap a user&#x2019;s session. As a result, we often found our applications making too many requests on startup. I believe there are times for REST APIs and there are times for alternatives.</p><p>Today, I would utilize <a href="https://graphql.org/">GraphQL</a> when appropriate, in addition to REST APIs, to allow for more flexibility.</p><h3 id="kanban">Kanban</h3><p>&#x1F7E9; Endorse</p><p>Early on, we spent time on sprint planning meetings; I came from a team of a dozen engineers and fell into the same routine. Agile didn&#x2019;t work well for us when we were one or two engineers, and as we grew to a whopping size of four engineers, sprint planning would still have been a lot of overhead for little value.</p><p>In a fast-paced startup, where client support, sales demos, and feature development compete for attention, priorities shift constantly. Unlike Scrum, which requires sprint planning, Kanban lets us adapt instantly, ensuring we could respond to changing needs without the overhead of excessive planning.</p><p>Kanban was the right approach, and I have come to believe it is a superior system for smaller teams. Maintain priorities diligently so you always know what to do next, and stay focused on your current assignment if at all possible. My primary goal when using Kanban is to minimize the times you need to pull engineers out of work they&#x2019;ve begun, and to ensure they always know what they should pick up next. 
Continue to run retrospectives and incorporate feedback from the team into your process and product.</p><h3 id="cost-tracking-and-resource-budgets">Cost tracking and resource budgets</h3><p>&#x1F7E5; Regret</p><p>Early on, our costs were minimal. We designed an entirely serverless application, and production costs remained low for the first two years. Then, as we rapidly expanded, our costs suddenly spiked to $6,000 a month, quickly eating into our limited financial resources&#x2014;something that could have been easily avoided. At one point, our biggest expense was <a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html">the SSM Parameter Store</a>, due to a poor implementation of how we loaded parameters. We should have been using <a href="https://aws.amazon.com/secrets-manager/">AWS Secrets Manager</a> instead.</p><p>Later, adding a NAT gateway within our VPC escalated costs again, as we hadn&#x2019;t implemented the proper VPC Endpoints.</p><p>We also incurred massive, unnecessary costs due to a poorly configured AWS Backup plan, which backed up our entire DynamoDB database daily&#x2014;even though we had point-in-time recovery (PITR) enabled and our only tested recovery process relied on DynamoDB&#x2019;s native restoration capabilities rather than AWS Backup.</p><p>The key lessons here:</p><ol><li>Implement a monthly budget review process for cloud expenses.</li><li>Investigate unexpected cost increases early.</li><li>Set up and use <a href="https://docs.aws.amazon.com/cost-management/latest/userguide/budgets-managing-costs.html">AWS Budgets</a> to track spending and prevent surprises.</li><li>Always analyze new infrastructure costs before scaling.</li></ol><h3 id="continuous-delivery">Continuous Delivery</h3><p>&#x1F7E9; Endorse</p><p>We invested early in a fully automated CI/CD pipeline. A good pipeline ensures deployments are consistently reliable. 
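</p><p>As a rough illustration, a deploy-on-merge pipeline of this kind might look like the following (GitHub Actions syntax; job and script names are purely illustrative, not our actual pipeline):</p>

```yaml
# Hypothetical sketch: test and deploy on every merge to main.
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm test
      - run: npm run deploy:production
```

<p>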
We hosted the CI/CD on GitLab and eventually refactored it to GitHub Actions when we migrated to GitHub. Our strategy was simple: deploy early, deploy often.</p><p>We wanted to deploy features the moment they were partially visible, usable, or functional. This allowed us to incorporate feedback as we built. Due to limited development resources, we usually delivered minimal viable features and moved on to other priorities for a time. We&#x2019;d return to enhance features after gathering additional feedback from several clients.</p><p>Feedback is a vital part of a good SDLC. Clients typically like being part of the process.</p><h3 id="life-without-qa">Life without QA</h3><p>&#x1F7E7; Endorse-ish</p><p>We didn&#x2019;t have a QA analyst for the first five years. We maintained continuous delivery, regularly with numerous deployments to production a day. While we managed to produce high-quality work and rarely had outages, it was due to the diligence of every developer testing their work. It was difficult and risky at times.</p><p>Unit and integration tests are essential; invest in automated testing early. Write an appropriate amount of test coverage. Set a high bar for developers to test their changes thoroughly. Review every pull request diligently. Hire a QA.</p><h3 id="project-management">Project Management</h3><p>&#x1F7E9; Endorse</p><p>We didn&#x2019;t have a Project Manager for the first six years. Yet, we built a product that delivered exceptional value to our clients. I typically filled the role of a project manager. In my opinion, engineers make good product managers, but they rarely want to do the work.</p><p>Clients and Sales rarely ask for a feature without a purpose, but they typically ask for a solution without explaining the reasoning behind it. Often, the request is what they believe to be an &#x201C;easy to implement&#x201D; feature. A project manager with the appropriate knowledge of the technology can help determine the right approach. 
It is vital to understand the issue actually being solved by the request.</p><p>Good project managers should be technical. Engineers who want to build products that delight users need to understand the reason behind the request.</p><h2 id="saas">SaaS</h2><h3 id="slack">Slack</h3><p>&#x1F7E9; Endorse</p><p>For development teams, in my opinion, Slack remains superior to Microsoft Teams. Slack has added many useful features over the past seven years and continued to prioritize its integrations and ease of use for collaborative communication.</p><h3 id="jira">Jira</h3><p>&#x1F7E5; Regret</p><p>We used <a href="https://www.atlassian.com/software/jira">Jira</a> for our development and support tickets. I have never liked it, and I didn&#x2019;t like it going in; it was simply what I was familiar with. Jira remains bloated and costly software, providing little value over many alternatives on the market today.</p><p>I would likely try out <a href="https://linear.app/">Linear</a> for our development tickets; although I&#x2019;ve not used it, I am impressed with the feedback I&#x2019;ve heard, the technical design, and the user experience they focus on delivering. Linear offers a more modern UI, faster performance, and a streamlined workflow compared to Jira.</p><p>For client support tickets, I would select an omnichannel support system. Clients would rather not make tickets in a ticketing system; they want their issues heard, and they want to be replied to. You need to respond to clients to tell them you are looking into the issue and will follow up, and then you need to actually follow up. Customers are everything to a SaaS, and through diligent customer service we grew our company. We used Jira and made it work, but it had significant overhead. 
A simpler tool to ensure all emails are received, issues reported during phone conversations are logged, and the issues get assigned to the appropriate team (e.g., implementation or development) is vital.</p><h3 id="confluence">Confluence</h3><p>&#x1F7E5; Regret</p><p>We used <a href="https://www.atlassian.com/software/confluence">Confluence</a> because we used Jira. Confluence, like Jira, is bloated. It provides a complex organization system when all you want is to keep information ready at your team&#x2019;s fingertips. It is vital to have a place to store company information, but it needs to be incredibly easy to add documents and information to.</p><h3 id="gitlab">GitLab</h3><p>&#x1F7E5; Regret</p><p>I opted for GitLab at the start because it was cheaper than GitHub (this was before GitHub lowered their prices to match GitLab) and because I could self-host it to improve security and privacy. I knew this application would be dealing with protected health information. The issue was managing GitLab. We were a small startup; for several years we had only two engineers, and taking on the management of GitLab servers and runners was more overhead than it was worth. When GitHub dropped their prices, I immediately said, &quot;yes please&quot; and we moved to GitHub. Overall, GitHub is great, its popularity is unquestionable, and they&apos;ve continued to expand GitHub Actions into a CI/CD platform that I now appreciate and find more powerful than most I&apos;ve worked with.</p><p>Today, I would pick GitHub from the start.</p><h2 id="software">Software</h2><h3 id="javascript-standard-style">JavaScript Standard Style</h3><p>&#x1F7E9; Endorse</p><p>Early on, I added strict linters to our codebase. My goal was not to be pedantic, but to improve the long-term maintainability of the codebase. I had little opinion as to the rules of the linter, so I picked <a href="https://standardjs.com/">JavaScript Standard Style</a> as our base set of rules. 
The one change I made was to strictly require trailing commas; I have found they make git diffs easier to read, since you see only what is being meaningfully changed.</p><p>I believe having appropriate tools in place to manage an ever-growing codebase is vital; doing it from day one avoids the need to clean up the code in the future. Pick some rules and enforce them.</p><h3 id="babeledit">BabelEdit</h3><p>&#x1F7E9; Endorse</p><p><a href="https://www.codeandweb.com/babeledit">BabelEdit</a> is a great internationalization tool and has worked well for us.</p><h3 id="dependabot">Dependabot</h3><p>&#x1F7E9; Endorse</p><p><a href="https://docs.github.com/en/code-security/getting-started/dependabot-quickstart-guide">Dependabot</a> is a tool to manage your dependencies and help keep them up to date. It is incredibly helpful to continuously keep dependencies current; if they stagnate, it becomes much harder to handle upgrades. Having an automated tool has become a must.</p><h3 id="snyk">Snyk</h3><p>&#x1F7E9; Endorse</p><p>Security was always important for us; my background was in military contracts, and I had expertise in information security. Working with PHI, we needed to take security seriously from day one. <a href="https://snyk.io/">Snyk</a> was a great tool to help us with that: it integrated into developer IDEs to help engineers avoid mistakes in the first place, integrated into our pull requests to catch mistakes before they were committed, and monitored dependencies for vulnerabilities. We eventually added its infrastructure and container scanning tools. 
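To illustrate the trailing-comma tweak from the linter section above, here is a minimal sketch (the package names are made up for illustration):

```javascript
// Style rule: always include a trailing comma in multi-line literals.
// When a new entry is appended, the git diff touches only the added line;
// without the trailing comma, the previously-last line would also change
// (it would gain a comma), cluttering the diff.
const dependencies = [
  'express',
  'aws-sdk', // trailing comma: adding below won't modify this line
]

dependencies.push('dynamodb-toolbox')
console.log(dependencies)
```

The diff for that addition is a single `+` line, which is exactly the benefit described above.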
Put these kinds of tools in place early on.</p><h2 id="hardware">Hardware</h2><h3 id="apple-macbooks">Apple MacBooks</h3><p>&#x1F7E9; Endorse</p><p>Apple MacBooks are fantastic products and great devices for developers.</p>]]></content:encoded></item><item><title><![CDATA[Hard Things in Computer Science]]></title><description><![CDATA[<p>Phil Karlton, a well-known computer programmer, is often quoted as saying:</p><blockquote>There are only two hard things in computer science: cache invalidation and naming things.</blockquote><p>After many years as a software engineer, I have a proposed revision to this statement:</p><blockquote>There are only three hard things in computer science: cache</blockquote>]]></description><link>https://ben.hutchins.co/hard-things-in-computer-science/</link><guid isPermaLink="false">65fb043259ac8011c8fa5440</guid><dc:creator><![CDATA[Benjamin Hutchins]]></dc:creator><pubDate>Wed, 20 Mar 2024 15:45:57 GMT</pubDate><content:encoded><![CDATA[<p>Phil Karlton, a well-known computer programmer, is often quoted as saying:</p><blockquote>There are only two hard things in computer science: cache invalidation and naming things.</blockquote><p>After many years as a software engineer, I have a proposed revision to this statement:</p><blockquote>There are only three hard things in computer science: cache invalidation, naming things, and timezones.</blockquote><p>See <a href="https://xkcd.com/2867/">xkcd #2867</a>.</p>]]></content:encoded></item><item><title><![CDATA[Inside-Out Grilled Cheese]]></title><description><![CDATA[<p>In my humble opinion, this crispy, crunchy, cheesy masterpiece is the ultimate grilled cheese sandwich. Make sure you follow some basic rules for this to work properly. 
Use a nice sharp cheddar and be sure to use a quality non-stick pan over medium to medium-low heat.</p><p>This is a cross</p>]]></description><link>https://ben.hutchins.co/ultimate-inside-out-grilled-cheese/</link><guid isPermaLink="false">6053bc0359ac8011c8fa5406</guid><category><![CDATA[Food & Recipes]]></category><dc:creator><![CDATA[Benjamin Hutchins]]></dc:creator><pubDate>Tue, 29 Sep 2020 15:37:32 GMT</pubDate><content:encoded><![CDATA[<p>In my humble opinion, this crispy, crunchy, cheesy masterpiece is the ultimate grilled cheese sandwich. Make sure you follow some basic rules for this to work properly. Use a nice sharp cheddar and be sure to use a quality non-stick pan over medium to medium-low heat.</p><p>This is a cross between two other grilled cheese recipes, one by <a href="https://altonbrown.com/recipes/grilled-grilled-cheese/">Alton Brown</a> and the other by <a href="https://foodwishes.blogspot.com/2010/05/inside-out-grilled-cheese-sandwich.html">Chef John</a>.</p><h2 id="ingredients">Ingredients</h2><ul><li>2 slices sourdough or hearty country bread</li><li>2 tablespoons unsalted butter, at room temperature</li><li>40 grams (~1.5 ounces) extra sharp cheddar cheese, grated</li><li>40 grams (~1.5 ounces) Gruyere cheese, grated</li><li>&#xBD; teaspoon dry mustard</li><li>&#xBC; teaspoon freshly ground black pepper</li><li>&#xBC; teaspoon smoked paprika (optional, depends on what you&#x2019;re pairing with the grilled cheese)</li></ul><h2 id="directions">Directions</h2><ol><li>Grate your cheeses; then combine the cheeses, mustard, paprika (if using) and pepper in a small bowl.</li><li>Melt 1 &#xBD; tablespoons of the butter in a nonstick skillet over medium-low heat. Place bread slices in the skillet on top of the melted butter.</li><li>Spread about 75% of the cheese mixture on one slice of bread; place the other slice of bread, butter-side up, on top of the cheese. 
Spread half of remaining cheese on top of the sandwich.</li><li>Melt remaining &#xBD; tablespoon butter in the skillet next to the sandwich. Flip the sandwich (carefully) onto the melted butter so that the cheese-side is facing down. Spread remaining cheese on top of the sandwich. Cook sandwich until cheese on the bottom is crispy and caramelized, 3 to 4 minutes. Flip sandwich once more and cook until cheese is crispy and caramelized on the other side, another 3 to 4 minutes.</li></ol>]]></content:encoded></item><item><title><![CDATA[Beef & Bacon Pie]]></title><description><![CDATA[<p>This is a dramatically improved version of the beef pie I originally found in &quot;<a href="https://www.amazon.com/dp/0345534492">A Feast of Ice and Fire: The Official Game of Thrones Companion Cookbook</a>&quot;.</p><p>The original recipe calls for a saffron-infused crust, and the filling comes out like soup, which doesn&apos;</p>]]></description><link>https://ben.hutchins.co/beef-bacon-pie/</link><guid isPermaLink="false">6053bc0359ac8011c8fa5403</guid><category><![CDATA[Food & Recipes]]></category><dc:creator><![CDATA[Benjamin Hutchins]]></dc:creator><pubDate>Wed, 09 Oct 2019 16:23:17 GMT</pubDate><content:encoded><![CDATA[<p>This is a dramatically improved version of the beef pie I originally found in &quot;<a href="https://www.amazon.com/dp/0345534492">A Feast of Ice and Fire: The Official Game of Thrones Companion Cookbook</a>&quot;.</p><p>The original recipe calls for a saffron-infused crust, and the filling comes out like soup, which doesn&apos;t work well for a pie filling. I&apos;ve removed the use of the saffron crust; while I enjoy saffron, using it in the crust is a waste and the flavor never comes out well. 
I have also changed most of the filling, starting it with a roux, so the result is something that only slightly resembles the original recipe.</p><p>So here&apos;s my version:</p><h2 id="ingredients">Ingredients</h2><ul><li>1 pie crust (you can make this, but honestly, ones from the grocer work)</li><li>1 package (12 strips) of bacon (avoid thick cut; it won&apos;t crisp in the same way)</li><li>1 stick of butter</li><li>2-4 tablespoons of flour</li><li>1 small onion, diced</li><li>1 large carrot, cubed</li><li>1 small potato, cubed</li><li>1 pound of meat (steak tips or lamb shoulder preferred, but this is a stew so you can use chuck, stew meat, or even regular ground beef), cut into 0.5-1&quot; cubes</li><li>1/2 cup low/reduced-sodium vegetable stock</li><li>1 tablespoon of rosemary</li><li>1 tablespoon of thyme</li><li>3 bulbs of garlic, minced</li></ul><h2 id="prepare-the-bacon">Prepare the bacon</h2><ol><li>Preheat oven to 400F</li><li>Form a lattice from the bacon, weaving the strips together into a square</li><li>Roast bacon on a cooling rack set inside a half-sheet pan for 15-20 minutes, until just starting to become crispy</li><li>Remove from oven, allow to cool</li></ol><h2 id="prepare-the-crust">Prepare the crust</h2><ol><li>Place pie crust into a pie dish. One with higher edges is preferred over the classic dessert pie dish. Ensure the pie crust goes up to the edge of the dish.</li><li>Blind-bake the pie crust per its directions; this usually means baking for 15 minutes at 425F.</li></ol><h2 id="prepare-the-filling">Prepare the filling</h2><ol><li>Reserve 1 tablespoon of the butter.</li><li>Melt the rest of the butter over low-medium heat.</li><li>Slowly add 2 tablespoons of the flour, mixing constantly, until the flour is fully incorporated. You may need the additional flour. We&apos;re making a roux; no lumps of flour should remain. Once all desired flour is added, continue to stir the roux and allow it to cook for 1-2 minutes. 
It&apos;ll darken slightly; this allows the flour to fully cook and is important for removing any floury taste.</li><li>Add the vegetable stock. Continue to mix to incorporate.</li><li>Turn temp down to low.</li><li>Allow mixture to cook, stirring occasionally, for 10 minutes. The mixture will thicken as the stock incorporates.</li><li>In a separate pan, add reserved butter and then cook the onions until they start to become translucent. </li><li>Add carrots and potatoes to onions, continue to cook until onions have browned and the potatoes and carrots are softened.</li><li>Once the onion, carrot, and potato mixture is cooked, add it to the gravy-like filling along with any juices and remaining butter from the pan.</li><li>Add some bacon grease from the tray you baked the bacon on to the pan.</li><li>Brown your meat in the pan, strain, then add meat back to the pan.</li><li>Sprinkle meat with herbs, toss.</li><li>Once meat is mostly cooked, reduce heat and add minced garlic to meat.</li><li>Allow meat to cook until just before desired doneness; do not allow the garlic to burn.</li><li>Add meat to the filling.</li><li>Allow the filling to cook on low, to fully incorporate flavors, for at least another 10 minutes and up to two hours, stirring occasionally. Using more time here allows the flavors to really develop. 
The mixture will continue to thicken and reduce as steam escapes.</li><li>Taste the filling, adding salt, pepper, or more seasoning as desired.</li></ol><h3 id="prepare-the-pie">Prepare the pie</h3><ol><li>Add filling to blind-baked pie crust.</li><li>Place the crispy bacon lattice atop the pie; this acts as the top pie crust.</li><li>Bake in the oven for 30-45 minutes, allowing the filling to fully set and solidify (made possible thanks to the roux).</li><li>Remove from oven and allow the pie to cool slightly, 10 minutes at least, before cutting; otherwise the filling may still splatter out.</li><li>Enjoy.</li></ol>]]></content:encoded></item><item><title><![CDATA[Databases are like a delivery service]]></title><description><![CDATA[<p>I recently began thinking about the <a href="https://en.wikipedia.org/wiki/MEAN_%28software_bundle%29" rel="noopener">MEAN stack</a> that has become popular and is frequently taught around the world, in bootcamps and technology classrooms. <a href="https://www.mongodb.com/" rel="noopener">MongoDB</a>, <a href="https://expressjs.com/" rel="noopener">Express.js</a>, <a href="https://angularjs.org/" rel="noopener">AngularJS</a>, and <a href="https://nodejs.org/en/" rel="noopener">Node.js</a> (&#x201C;MEAN&#x201D;) aims to be a complete stack for building web applications using only JavaScript and MongoDB.</p>]]></description><link>https://ben.hutchins.co/databases-are-like-a-delivery-service/</link><guid isPermaLink="false">6053bc0359ac8011c8fa53fd</guid><category><![CDATA[Technical Thoughts & Notes]]></category><category><![CDATA[databases]]></category><dc:creator><![CDATA[Benjamin Hutchins]]></dc:creator><pubDate>Wed, 09 Oct 2019 02:04:27 GMT</pubDate><content:encoded><![CDATA[<p>I recently began thinking about the <a href="https://en.wikipedia.org/wiki/MEAN_%28software_bundle%29" rel="noopener">MEAN stack</a> that has become popular and is frequently taught around the world, in bootcamps and technology classrooms. 
<a href="https://www.mongodb.com/" rel="noopener">MongoDB</a>, <a href="https://expressjs.com/" rel="noopener">Express.js</a>, <a href="https://angularjs.org/" rel="noopener">AngularJS</a>, and <a href="https://nodejs.org/en/" rel="noopener">Node.js</a> (&#x201C;MEAN&#x201D;) aims to be a complete stack for building web applications using only JavaScript and MongoDB.</p><p>It is easy to call JavaScript an &#x201C;all-purpose programming language&#x201D;. It allows someone to easily create desktop, web, mobile, and embedded apps and services with a single programming language. It&#x2019;s a great first language to learn for new developers.</p><p>MongoDB, however, is the weakest link in this popularized technology stack. Someone coming out of a bootcamp, whose only database knowledge is Mongo, will often struggle as they build an application. I worry for those individuals who might walk out thinking they have all the tools necessary to do something great. They may be thinking MongoDB is an &#x201C;all-purpose database&#x201D;.</p><p>For anyone who hasn&#x2019;t struggled to scale Mongo in production: Mongo is an incredibly easy database to use and develop with, but it&#x2019;s not an all-purpose database. Mongo is a document-store database. While Mongo is flexible, it cannot possibly do everything a growing service demands.</p><p>That&#x2019;s the thing: no database is all-purpose.</p><h3 id="let-s-compare-databases-vs-delivery-service">Let&#x2019;s compare: Databases vs. Delivery Service</h3><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://cdn-images-1.medium.com/max/600/1*2qfr3XirI8nfAxinYbjtJw.png" class="kg-image" alt loading="lazy"><figcaption>Your amazing product: A vacuum&#xA0;cleaner!</figcaption></figure><p>Let&#x2019;s compare different types of databases to aspects of a delivery service. Imagine your application provides a product. 
Imagine that product is a vacuum cleaner.</p><p>Your product is nifty and neat; therefore, it&#x2019;s in high demand. You want to be able to deliver these amazing vacuums to your customers quickly and efficiently!</p><p>Delivering products has a lot of logistics problems. There is no &#x201C;one way&#x201D; to deliver a product; you need different capabilities for different purposes.</p><p>Let&#x2019;s compare.</p><h4 id="semi-trailer-trucks">Semi-trailer Trucks</h4><figure class="kg-card kg-image-card"><img src="https://cdn-images-1.medium.com/max/800/1*dv7N3tNnZeNyXPRqyaiAvg.png" class="kg-image" alt loading="lazy"></figure><p>A semi is quite versatile. It can do just about everything. You could even store all your vacuum cleaners on your semis. You can, if necessary, drive your semi up to your customer&#x2019;s doorstep and drop off their brand new vacuum to them.</p><p>You can tell, though, that a semi isn&#x2019;t going to be great at doing everything. Certainly storing all of your vacuum inventory on semi trucks can&#x2019;t be the most efficient method; what happens when you run out of space? You&#x2019;d have to buy another semi truck. You might also have difficulty finding the right vacuum you need to deliver due to having too many extra vacuums in the way. Often, though, a semi won&#x2019;t work for that last-mile delivery, getting the vacuum on the customer&#x2019;s doorstep. Semi trucks are too big to fit on some streets or in some driveways.</p><p>Document storage databases, like MongoDB, are like a semi truck. Storing all of your data in a document-store can get you started, but it&#x2019;s not going to be efficient. When you have enough data, you need to start <a href="https://docs.mongodb.com/manual/sharding/" rel="noopener">sharding</a> your database. This can add a lot of complexity in finding the right data (or vacuum) when you need it. 
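To make that sharding complexity concrete, here is a minimal sketch of hash-based shard routing; the hash function and key names are illustrative, not MongoDB's actual mechanism:

```javascript
// Route a record to one of N shards by hashing its key.
// Every lookup must first compute which shard holds the key, and any
// query that doesn't include the shard key has to ask every shard.
function hashKey (key) {
  let hash = 0
  for (const ch of key) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0 // simple 32-bit rolling hash
  }
  return hash
}

function shardFor (key, shardCount) {
  return hashKey(key) % shardCount
}

// Note: changing shardCount moves most keys to a different shard,
// which is part of why resharding a live database is painful.
console.log(shardFor('vacuum-42', 4))
```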
Document-stores aren&#x2019;t optimized for searching, so searching over your data sets, even with an index, can be slow. It can be difficult to optimize queries to return only the data you need, resulting in a lot of wasted effort transmitting unnecessary data. Additionally, they aren&#x2019;t quite fast enough to provide the blistering-fast performance your application might demand.</p><p>Clearly using only semi trucks isn&#x2019;t the best idea. What else do we need to make our delivery service work?</p><h4 id="warehouses">Warehouses</h4><figure class="kg-card kg-image-card"><img src="https://cdn-images-1.medium.com/max/800/1*jpvnyQuyQSB8dirG47p-QA.png" class="kg-image" alt loading="lazy"></figure><p>If semi trucks cannot provide the best storage for all of your vacuums, could you store the vacuums you aren&#x2019;t actively delivering in a warehouse? A warehouse is definitely going to be able to help keep your vacuums safe. With some foresight you can organize your warehouse so it&#x2019;ll always be easy to find the right vacuum when you need it. Clearly, any good delivery service needs a warehouse.</p><p>SQL databases, like MySQL or PostgreSQL, are like warehouses. They are perfectly designed for storing all the data you need to organize and store.</p><p>Just like actual warehouses, SQL databases require proper setup to scale. Without the proper foresight into your database setup, retrieval times will increase significantly as you add data, making your service slower. A poorly designed database will destroy your application&#x2019;s ability to scale as the amount of data it&#x2019;s indexing and organizing grows.</p><p>So what, you might say? Rather than trying to organize your pile of vacuums, you might say, &#x201C;Let&#x2019;s just buy another acre of land and expand our warehouse!&#x201D; That doesn&#x2019;t scale infinitely, just like throwing hardware at your failing database&#x2019;s scaling needs. 
It cannot last.</p><p>There are times, even when your warehouse is perfectly organized, when retrieving the right vacuum from your inventory is still too slow. Warehouses are really only designed to hold your vacuum cleaners.</p><p>Since a warehouse cannot drop a vacuum off at a customer&#x2019;s doorstep, if you used only warehouses you&#x2019;d have customers coming to pick up their orders from your warehouse. That would be terribly slow and inconvenient for your customers. Clearly a warehouse isn&#x2019;t going to be very good at those last-mile, to-the-doorstep deliveries.</p><p>It is starting to feel like you&#x2019;re going to need a warehouse and semi trucks, but clearly warehouses still have weaknesses. What can we do about that?</p><h4 id="warehouse-robots">Warehouse Robots</h4><figure class="kg-card kg-image-card"><img src="https://cdn-images-1.medium.com/max/800/1*RHwu_MGIIy8IRufhpgZ3SQ.png" class="kg-image" alt loading="lazy"></figure><p>Robots are cool, aren&#x2019;t they? When a warehouse has enough of those amazing vacuums being stored, it slows down the time it takes to find a specific one. What if we used robots to help us get the right vacuum even faster?</p><p>Warehouses often use robots to retrieve inventory from their shelves. This speeds things up, helping the products get to customers faster, while putting less pressure on the non-robotic employees of the warehouse.</p><p>Warehouse robots are similar to an indexing service, like Solr. An indexing service helps speed up searches on top of the massive amounts of data you keep in your databases. This can help reduce the time it takes to find whatever it is your application needs. As you continue to get more inventory in your warehouses, it becomes increasingly important to have good indexing and automation.</p><p>Using warehouse robots can certainly help us speed up warehouses when they need to perform difficult searches. Now you&#x2019;re getting to a reliable delivery service! 
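The "warehouse robot" role of an indexing service can be sketched with a toy inverted index; this is a tiny stand-in for what Solr does, and the product data is made up:

```javascript
// Build an inverted index: each word maps to the set of document ids
// containing it. A text search then becomes a cheap map lookup
// instead of a scan over every record.
const products = [
  { id: 1, description: 'upright vacuum with hepa filter' },
  { id: 2, description: 'cordless stick vacuum' },
  { id: 3, description: 'hepa air purifier' }
]

const index = new Map()
for (const product of products) {
  for (const word of product.description.split(' ')) {
    if (!index.has(word)) index.set(word, new Set())
    index.get(word).add(product.id)
  }
}

function search (word) {
  return [...(index.get(word) || [])] // matching ids, no scan required
}

console.log(search('hepa')) // → [1, 3]
```

The index costs extra memory and must be kept in sync with the source data, which is the trade-off any indexing service makes.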
If you use all three of the current options: semis, warehouses, and robots, your delivery service will do pretty well! But what if you continue growing and have to optimize your delivery service even more?</p><h4 id="vans">Vans</h4><figure class="kg-card kg-image-card"><img src="https://cdn-images-1.medium.com/max/800/1*7J1ofuH1cITtb6tddXRhyA.png" class="kg-image" alt loading="lazy"></figure><p>As we know, a warehouse can&#x2019;t even attempt to make a last-mile delivery, and semi trucks have a lot of challenges making it all the way to our customers&#x2019; doorsteps. So what about a van?</p><p>A van is much smaller than a semi, so it can fit down those narrow roads and onto tight driveways. That would allow our drivers to get those vacuums to our customers even faster! A van is nimble, capable of handling tough, crowded city roads or a country road.</p><p>A van is comparable to a key-value store, like Redis or Memcached. A key-value store is extremely fast! It provides amazing performance for data that&#x2019;s been loaded into it from another data source, making it perfect for caching to help make the entire service faster.</p><p>Unfortunately, vans are cramped. You definitely cannot use them for storing your inventory. Even if you expand the storage space, the design limits how you can get the vacuums out.</p><p>Key-value stores provide amazing performance, but they rely heavily (or entirely) on a server&#x2019;s memory to provide this performance. They also have limited retrieval options: many have no querying support and can only look up values by a specific key. It&#x2019;s almost impossible to search a key-value store. They&#x2019;re simply not designed for it.</p><p>Clearly your delivery service needs warehouses and either semi trucks or vans. Possibly all three! Depending on how popular our delivery service is, we might still need those warehouse robots. 
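The caching role a key-value store plays can be sketched with the cache-aside pattern; here a plain Map stands in for Redis or Memcached, and loadFromDatabase is a made-up placeholder for a slower query:

```javascript
// Cache-aside: check the fast key-value store first; on a miss,
// fall back to the slower database and remember the result.
const cache = new Map() // stand-in for Redis/Memcached

function loadFromDatabase (id) {
  // placeholder for a slow SQL or document-store query
  return { id, name: `vacuum #${id}` }
}

function getProduct (id) {
  const key = `product:${id}`
  if (cache.has(key)) return cache.get(key) // fast path: key lookup only
  const product = loadFromDatabase(id) // slow path: hit the database
  cache.set(key, product)
  return product
}

getProduct(7) // first call misses: loads from the database and caches
getProduct(7) // second call hits: served straight from the cache
```

Note the limitation the analogy describes: the cache can only answer "give me product 7", never "find all products matching X", because lookup is strictly by key.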
If you are using all of these options at your disposal, is it possible to outgrow what they provide?</p><h4 id="delivery-logistics">Delivery Logistics</h4><p>Sometimes a delivery service has too much going on. Sometimes it might just be getting so complicated: multiple warehouses, dozens of semi trucks, potentially hundreds of vans and robots helping us out. We really need something to help piece everything together.</p><p>We clearly need something to help with these logistics. Something that can help us keep track of where everything is, what is whose, and whose is what.</p><p>You need an operations manager. You need someone, or something, that can stay aware of the high-level details, just the status of things. An operations manager doesn&apos;t need to know exactly how much inventory is available or when the earliest possible delivery might be, but the operations manager would help speed up operations between services and provide helpful logistics, reporting, insight, and analytics.</p><p>A graph database, such as Neo4j, is a great tool for this. While you&#x2019;d be hard-pressed to use a graph database for storing any inventory, it can certainly help with that network of information. It can help speed up those messier queries and help you quickly know the exact status and location of the data you need.</p><p>Clearly we now have a scalable delivery service. One that will allow us to continue to grow through new challenges and handle delivering any amount of vacuums our customers demand.</p><h3 id="takeaway">Takeaway</h3><p>I hope you&#x2019;ve come to realize that a single-database service will almost always eventually fail on its own, like a delivery service that has only one type of mechanism to deliver a product. To build an application that can scale under a constantly growing demand, you may need to use a variety of tools. 
While a database like MongoDB is going to help you get started, if you don&#x2019;t plan for growth you&#x2019;ll quickly run into challenges as you scale.</p><p>I hope more people start mentioning the importance of using the right tool for the job and learn about the types of technologies that are available to help you through even the toughest scaling challenge.</p>]]></content:encoded></item><item><title><![CDATA[Lamb with Cheese Pockets]]></title><description><![CDATA[<p>This is a recipe I originally sourced from an old Weber cookbook. I&apos;ve modified it in an effort to make it better, which it is. I prefer this with lamb, but this works well with some cuts of steak. You could also forgo the cheese pocket and use the marination</p>]]></description><link>https://ben.hutchins.co/cheese-pockets/</link><guid isPermaLink="false">6053bc0359ac8011c8fa53fb</guid><category><![CDATA[Food & Recipes]]></category><dc:creator><![CDATA[Benjamin Hutchins]]></dc:creator><pubDate>Wed, 09 Oct 2019 01:13:30 GMT</pubDate><content:encoded><![CDATA[<p>This is a recipe I originally sourced from an old Weber cookbook. I&apos;ve modified it in an effort to make it better, which it is. I prefer this with lamb, but this works well with some cuts of steak. 
You could also forgo the cheese pocket and use the marinade standalone.</p><h2 id="ingredients">Ingredients</h2><ul><li><strong>Marinade:</strong></li><li>1/2 small onion, chunked (if using a food processor) or diced fine (if without the assistance of a food processor).</li><li>1/4 cup of soy sauce (traditionally brewed preferred)</li><li>2 tablespoons of firmly packed brown sugar (dark preferred)</li><li>2 tablespoons of lemon juice</li><li>2 large garlic cloves</li><li>1 teaspoon of salt (<strong>only</strong> if using low sodium soy sauce)</li><li><strong>Meat:</strong></li><li>Half a rack of lamb chops or 4 filet mignon steaks</li><li><strong>Cheese:</strong></li><li>6oz goat or blue cheese</li></ul><h2 id="create-the-marinade">Create the marinade</h2><ol><li>Place the marinade ingredients into a food processor and whirl until the onion and garlic are diced fine.</li><li>Pour the marinade into a 1-gallon zip-lock bag and place your prepared meats (see below) into the bag.</li><li>Refrigerate for a minimum of 30 minutes; ~6 hours preferred.</li></ol><h2 id="prepare-the-meats">Prepare the meats</h2><ol><li>For lamb: Add the cheese pockets by cutting a small slit on the backside of the lamb chops, about 1/2&#x201D; deep, to the bone.</li><li>For steaks: Add the cheese pockets by cutting a small slit on the side of the steak about 1/2&#x201D; deep.</li></ol><h2 id="preparation-for-grilling">Preparation for grilling</h2><ol><li>Heat grill to 425&#xB0;F</li><li>Remove chops or steaks from marinade, reserving the liquid.</li><li>Stuff cheese pockets with your choice of cheese.</li><li>Place meats on grill, cook to desired doneness.</li></ol><h3 id="using-your-leftover-marinade">Using your leftover marinade</h3><p>Consider your options. I recommend #2.</p><ol><li>Baste the meats once with the reserved marinade.</li><li>Turn the marinade into a sauce. 
Heat it in a pot on the stove until it just starts to boil, then keep the temp low and simmer for 3-5 minutes until thickened.</li></ol>]]></content:encoded></item><item><title><![CDATA[Len's Dip]]></title><description><![CDATA[<p>This dip was originally created by a family member of mine, let&apos;s call him Len. It&apos;s great for parties and pairs well with thick potato chips and veggies. A perfect alternative to the common french onion and ranch dips.</p><h2 id="ingredients">Ingredients</h2><ul><li>16 oz sour cream</li><li>16 oz cottage</li></ul>]]></description><link>https://ben.hutchins.co/lens-dip/</link><guid isPermaLink="false">6053bc0359ac8011c8fa53fa</guid><category><![CDATA[Food & Recipes]]></category><dc:creator><![CDATA[Benjamin Hutchins]]></dc:creator><pubDate>Wed, 09 Oct 2019 00:18:42 GMT</pubDate><content:encoded><![CDATA[<p>This dip was originally created by a family member of mine, let&apos;s call him Len. It&apos;s great for parties and pairs well with thick potato chips and veggies. A perfect alternative to the common french onion and ranch dips.</p><h2 id="ingredients">Ingredients</h2><ul><li>16 oz sour cream</li><li>16 oz cottage cheese (preferably with chives)</li><li>1.5 tsp Lawry&apos;s seasoned salt</li><li>1 Tbsp garlic powder</li><li>1 Tbsp dehydrated onions</li><li>1 Tbsp dehydrated dill</li></ul><h2 id="process">Process</h2><p>Mix all ingredients, preferably the night before, to let the onion and dill hydrate and absorb moisture.</p><p>It can be hard to taste test because the onion and dill flavor will strengthen. The color should be visibly, but very slightly, red from the seasoned salt. Dill should be visible throughout the mixture, enough to get a little with every chip.</p><p>If you use fresh dill, add a bit less as the flavor is strong. 
The mixture might end up more watery, but the flavor will be good.</p>]]></content:encoded></item><item><title><![CDATA[What you’re revealing to your ISP, why a VPN isn’t enough, and ways to avoid leaking it]]></title><description><![CDATA[<p>Originally published to <a href="https://hackernoon.com/what-youre-revealing-to-your-isp-why-a-vpn-isn-t-enough-and-ways-to-avoid-leaking-it-503816542951">Hacker Noon</a>.</p><p>There&#x2019;s a lot of chatter and concern about <a href="https://rules.house.gov/bill/115/sj-res-34" rel="noopener">S.J. Res. 34</a>, a pending resolution that will allow Internet Service Providers (<a href="https://hackernoon.com/tagged/isps" rel="noopener">ISPs</a>) to record your activity and then sell that information. From what I have read, most people are focusing on your &#x201C;</p>]]></description><link>https://ben.hutchins.co/what-youre-revealing-to-your-isp-why-a-vpn-isnt-enough-and-ways-to-avoid-leaking-it/</link><guid isPermaLink="false">6053bc0359ac8011c8fa5400</guid><category><![CDATA[Technical Thoughts & Notes]]></category><category><![CDATA[security]]></category><category><![CDATA[privacy]]></category><dc:creator><![CDATA[Benjamin Hutchins]]></dc:creator><pubDate>Fri, 31 Mar 2017 02:08:00 GMT</pubDate><content:encoded><![CDATA[<p>Originally published to <a href="https://hackernoon.com/what-youre-revealing-to-your-isp-why-a-vpn-isn-t-enough-and-ways-to-avoid-leaking-it-503816542951">Hacker Noon</a>.</p><p>There&#x2019;s a lot of chatter and concern about <a href="https://rules.house.gov/bill/115/sj-res-34" rel="noopener">S.J. Res. 34</a>, a pending resolution that will allow Internet Service Providers (<a href="https://hackernoon.com/tagged/isps" rel="noopener">ISPs</a>) to record your activity and then sell that information. 
From what I have read, most people are focusing on your &#x201C;web history.&#x201D; This focus is harmful because there are many additional types of information that can reveal details about you, and you cannot solve this <a href="https://medium.freecodecamp.com/how-to-set-up-a-vpn-in-5-minutes-for-free-and-why-you-urgently-need-one-d5cdba361907" rel="noopener">simply by</a> <a href="https://journal.standardnotes.org/vpns-are-absolutely-a-solution-to-a-policy-problem-3b88af699bcd" rel="noopener">using a VPN</a>.</p><p>Even worse, there are people out there who have <a href="https://www.reddit.com/r/politics/comments/62a3kj/cards_against_humanity_creator_just_pledged_to/" rel="noopener">invalid conceptions of what data will be available</a>; some are going as far as to try to <a href="https://searchinternethistory.com/" rel="noopener">get money from people who are uninformed</a>. That is very wrong. Let&#x2019;s explore exactly what an ISP will be able to know about you.</p><p>To keep terms consistent, I will use the FCC&#x2019;s term <em>BIAS</em> (&#x201C;broadband Internet access service&#x201D;) to refer to all internet service providers (ISPs).</p><h3 id="why-does-this-matter">Why does this matter?</h3><p>While this article isn&#x2019;t going to get into the nitty-gritty of why you should care about your data and privacy, even if you think you have nothing to hide or to be concerned about, it&#x2019;s important to understand what this resolution will do, if passed. Specifically, it will dismantle the FCC&#x2019;s rule <a href="https://www.gpo.gov/fdsys/pkg/FR-2016-12-02/pdf/2016-28006.pdf" rel="noopener">&#x201C;Protecting the Privacy of Customers of Broadband and Other Telecommunications Services&#x201D; (81 Fed. Reg. 87274)</a>. 
This set of FCC rules and regulations is more extensive than just protecting your web browsing history; it protects and prevents the recording of several sets of specific information by your BIAS.</p><h3 id="what-information-does-the-fcc-s-privacy-regulations-protect">What information does the FCC&#x2019;s Privacy Regulations protect?</h3><p>The FCC&#x2019;s regulations protect the following information from being collected without the consent of the customer. A BIAS already has the technical ability to record this information, and could do so with customer consent, but they want to record this information without the customer&#x2019;s consent. That by itself might make sense if it were purely for the operation of the service, but they explicitly want to sell this information, claiming that doing so will allow them to offer targeted marketing and thereby lower the cost of the service to their customers. While I highly doubt that they&#x2019;ll ever lower the service cost, it is important to know that it has always been possible for your BIAS to see the information the FCC protected by classifying it as sensitive.
For those who are truly concerned about their privacy, these suggestions can help you improve your privacy and protect your data even now.</p><p>These are the categories of information the FCC&#x2019;s regulations protected:</p><ul><li><a href="#a047">Broadband Service Plans</a></li><li><a href="#64cb">Geolocation</a></li><li><a href="#e8ae">MAC Addresses and Other Device Identifiers</a></li><li><a href="#5a3d">IP Addresses</a> and <a href="#92a6">Domain Name Information</a></li><li><a href="#734b">Traffic Statistics</a></li><li><a href="#ff5e">Port Information</a></li><li><a href="#0b39">Application Header</a></li><li><a href="#e2af">Application Usage</a></li><li><a href="#b742">Application Payload</a></li><li><a href="#f02f">Customer Premises Equipment and Device Information</a></li></ul><h3 id="let-s-break-down-what-this-information-means">Let&#x2019;s break down what this information means</h3><p>Let&#x2019;s go through the list above, breaking down what each item means and thinking of ways to protect this information (if possible).</p><h4 id="broadband-service-plans">Broadband Service Plans</h4><p>The FCC regulations stated that the Internet package you get from your provider is sensitive. This is because it reveals information about the quantity, type, and amount of use your home consumes. If you have a higher-tier package, you might be more interested in video games, online video streaming services, or want more adult porn advertisements than someone with a lower speed.</p><p>This included protecting all types of services: mobile, cable, fiber; whether you are on a contract or prepaid (monthly) plan; and it protected the network speed, price, data caps, and data usage/consumption.</p><p>Besides changing your Internet provider (see below), there are not a lot of realistic options available.
You could (if you don&#x2019;t have a cap) consume several terabytes of data a month by seeding legal torrents like <a href="https://www.ubuntu.com/download/alternative-downloads" rel="noopener">Ubuntu</a> or <a href="http://isoredirect.centos.org/centos/7/isos/x86_64/" rel="noopener">CentOS</a>, or by becoming a <a href="https://meta.wikimedia.org/wiki/Mirroring_Wikimedia_project_XML_dumps" rel="noopener">host for Wikipedia dumps</a>. Doing so will skew this information, hiding your actual usage amounts. Of course, you can also set up your own <a href="https://hyperboria.net/" rel="noopener">Hyperboria</a> connection and hope that more services start to support the meshnet.</p><h4 id="geolocation">Geolocation</h4><blockquote>&#x201C;Geo-location is information related to the physical or geographical location of a customer or the customer&#x2019;s device(s), regardless of the particular technological method used to obtain this information.&#x201D;&#x200A;&#x2014;&#x200A;Federal Register / Vol. 81, &#x2116; 232, Section 87282, #65</blockquote><p>The FCC claimed that any means of determining your geolocation is considered private, and is not allowed to be recorded or shared without the consent of the customer.</p><p>Many devices, including laptops, tablets, and phones, have built-in GPS. Your device will not simply report your location to your BIAS, so what is more likely is that they&#x2019;ll use your home address, or data about your street, town, or region, to allow for targeted marketing at anything from the individual to the regional level.</p><p>You are usually legally required to provide your address to an Internet service provider; as a result, there is little you can do to protect this information. Your only option is to switch providers to a service that cares about you.</p><h4 id="mac-addresses-and-other-device-identifiers">MAC Addresses and Other Device Identifiers</h4><p>The FCC regulations stipulated that the collection of device identifiers, of any kind, is protected.
Each networking device has at least one <a href="https://en.wikipedia.org/wiki/MAC_address" rel="noopener">MAC address</a>; this is a unique identifier of that device&#x2019;s networking hardware.</p><p>While a MAC address is only revealed at the <a href="https://en.wikipedia.org/wiki/Link_layer" rel="noopener">link layer</a> of networking, depending on your broadband service this may be an issue. If you are connected directly to the Internet or using service-provided hardware for your modem or router, then this information can easily be collected.</p><p>This, I am happy to say, you can do something about. If you&#x2019;re using any service-provided equipment (e.g. modem or router), you have two options. The first is to stop using the provided hardware entirely. That might help save you money long term; there is often an &#x201C;Equipment Rental Fee&#x201D; charged for the privilege of using their crappy hardware anyway. Instead, buy your own hardware to replace it.</p><p>Whether you can replace your service-provided hardware depends on your provider, and it may not be possible, so the second option, available to everyone, is to add your own device between your devices and your service provider. Many people already do this by having their own wireless router. Simply be sure that all of your devices connect to it first; this will report only a single MAC address to your BIAS, one that will show itself to only be a gateway device of some kind, keeping the actual devices you use private.</p><p>A third option, one that doesn&#x2019;t require buying hardware, would be to use a <a href="https://www.howtogeek.com/192173/how-and-why-to-change-your-mac-address-on-windows-linux-and-mac/" rel="noopener">MAC address spoofer</a> to have your devices lie about their MAC addresses.
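</p><p>To make this concrete, here is a rough sketch in Python of how such a tool might pick a replacement address. The <code>random_local_mac</code> helper is hypothetical, but the bit it sets is the real &#x201C;locally administered&#x201D; flag that marks an address as software-assigned rather than burned into the hardware:</p>

```python
import random

def random_local_mac() -> str:
    """Generate a random, locally administered, unicast MAC address.

    Spoofing tools assign addresses shaped like this: the second-lowest bit
    of the first byte (0x02) marks "locally administered", and the lowest
    bit (0x01) must stay clear so the address is unicast.
    """
    first = (random.randrange(256) & 0b11111100) | 0b00000010
    rest = [random.randrange(256) for _ in range(5)]
    return ":".join(f"{b:02x}" for b in [first] + rest)

print(random_local_mac())  # a different address every run
```

<p>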
This can lead to a lot of complications and difficulties, but for those advanced users who want to protect themselves, it is available.</p><p>The use of a <a href="https://hackernoon.com/tagged/vpn" rel="noopener">VPN</a> would not protect you from revealing this information.</p><h4 id="ip-addresses-and-domain-name-information-domain-names">IP Addresses and Domain Name Information&#x200A;&#x2014;&#x200A;Domain Names</h4><p>While the FCC bucketed this category as &#x201C;IP Addresses and Domain Name Information&#x201D;, we will look at these as two separate issues because we can solve for them differently. Let us first look at the Domain Name Information. The FCC had this to say:</p><blockquote>&#x201C;We also conclude that information about the domain names visited by a customer constitute CPNI [Customer Proprietary Network Information] in the broadband context.&#x201D;</blockquote><p>A domain name is simply the name of a website that is human-readable, something like &#x201C;fcc.gov&#x201D;, &#x201C;medium.com&#x201D;, or &#x201C;google.com&#x201D;. These are simple for us humans to read and understand&#x200A;&#x2014;&#x200A;for a computer, however, they need to be translated into an address that can be routed to. 
For this, computers use <a href="https://en.wikipedia.org/wiki/Domain_Name_System" rel="noopener">DNS</a>, the &#x201C;Domain Name System.&#x201D; Put simply, someone buys a domain they&#x2019;d like to use, and they point it to the server they&#x2019;d like computers to talk with whenever that domain is requested.</p><p>Your BIAS provides your default DNS provider as well, and the FCC was careful to ensure that they protected you regardless of the DNS provider you used:</p><blockquote>&#x201C;Whether or not the customer uses the BIAS [Broadband Internet Access Service] provider&#x2019;s in-house DNS lookup service is irrelevant to whether domain names satisfy the statutory definition of CPNI.&#x201D;</blockquote><p>Your DNS provider is essentially the database you query when your computer needs to know the <a href="https://en.wikipedia.org/wiki/IP_address" rel="noopener">IP address</a> a domain points to. This is necessary because there are too many domains, and the IP address that a domain points to updates too frequently, to have this entire database available locally on your computer. While your BIAS provides your default DNS, you can easily swap the DNS provider. Many good, free alternatives exist. <a href="https://developers.google.com/speed/public-dns/" rel="noopener">Google Public DNS</a> and <a href="https://www.opendns.com/setupguide/" rel="noopener">Cisco OpenDNS</a> are two very popular alternatives.</p><p>However, your use of an alternative DNS provider does not protect you! Back when DNS was first developed, the focus was on getting this little thing known as the Internet working, before privacy and security were even raised as concerns. So DNS is not encrypted or secure. Even using an alternative DNS provider, your BIAS is able to watch your lookups, thereby revealing the domains you are requesting.
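</p><p>Concretely, each lookup your computer sends is a tiny plaintext packet. The sketch below builds the bytes of a standard A-record query by hand (the <code>dns_query</code> helper is illustrative, not a full resolver) to show that the domain travels as readable bytes:</p>

```python
import struct

def dns_query(domain: str, qid: int = 0x1234) -> bytes:
    """Build the wire bytes of a standard DNS A-record query (RFC 1035)."""
    # Header: id, flags (recursion desired), 1 question, 0 other records.
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    # QNAME: each label is length-prefixed, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode() for label in domain.split(".")
    ) + b"\x00"
    # QTYPE=A (1), QCLASS=IN (1).
    return header + qname + struct.pack(">HH", 1, 1)

packet = dns_query("fcc.gov")
print(b"fcc" in packet and b"gov" in packet)  # → True: the domain is plaintext
```

<p>Nothing in that packet is encrypted, so anyone carrying it, including your BIAS, can read exactly which domain you asked about. 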
This happens even if you are visiting secured web pages using <a href="https://en.wikipedia.org/wiki/HTTPS" rel="noopener">HTTPS</a>.</p><p>You have a few options here. The easiest, available to most people, is to use <a href="https://dnscrypt.org/" rel="noopener">DNSCrypt</a>, which has clients for most devices and is very easy to set up and use. Other possibilities may include <a href="https://en.wikipedia.org/wiki/Domain_Name_System_Security_Extensions" rel="noopener">DNSSEC</a>, <a href="https://dnscurve.org/" rel="noopener">DNSCurve</a>, and <a href="https://en.wikipedia.org/wiki/DNS-based_Authentication_of_Named_Entities" rel="noopener">DANE</a>.</p><p>Whether or not you use a VPN, your DNS requests <a href="https://www.dnsleaktest.com/what-is-a-dns-leak.html" rel="noopener">may not be encrypted by your VPN client</a>, or you might end up using your BIAS as the DNS provider anyway, so swapping DNS providers and encrypting your DNS requests is always beneficial.</p><h4 id="ip-addresses-and-domain-name-information-ip-addresses">IP Addresses and Domain Name Information&#x200A;&#x2014;&#x200A;IP Addresses</h4><p>We now enter the more difficult of the two issues bucketed together by the FCC: the IP addresses you are requesting. Domain names translate directly into IP addresses, so after your computer has (hopefully securely) determined the IP address it needs to connect with, your computer needs to make the request to that IP address. The FCC did protect you here:</p><blockquote>&#x201C;We conclude that source and destination IP addresses constitute CPNI in the broadband context because they relate to the destination, technical configuration, and/or location of a telecommunications service.&#x201D;</blockquote><p>This protection would go away; now your BIAS can easily record the IP addresses you are requesting. At the same time, the destination server you are connecting with can see your source IP (your home&#x2019;s unique IP address).
An IP address alone tells a service a lot about you. If it&#x2019;s unique enough, it may reveal exactly who you are, but it always reveals your <a href="https://www.privateinternetaccess.com/pages/whats-my-ip/" rel="noopener">BIAS/ISP and your geographic region.</a></p><p>This has long posed a major privacy concern to many people, and there are many services, such as <a href="https://www.privateinternetaccess.com/" rel="noopener">Private Internet Access</a>, which provide a means to protect you from leaking this sensitive information. The most popular means is a VPN.</p><p>A VPN, or &#x201C;Virtual Private Network,&#x201D; keeps your home&#x2019;s IP address protected while also encrypting and securing all your requests so that your BIAS cannot determine the true destination of your requests. Only the VPN provider will now be able to link your originating home&#x2019;s IP with the true destination IP. For that reason, many of the more privacy-concerned VPN providers do not record any information.</p><p>Most VPNs protect you by implementing <a href="https://en.wikipedia.org/wiki/IPsec" rel="noopener">IPSec</a> or a similar protocol designed to encrypt all connections at a lower level of the network stack, so that your applications do not need to be aware their requests are being hijacked and routed through a VPN. This makes using a VPN extremely easy. Some client software includes additional privacy, security, and bandwidth-saving features such as malware protection, ad &amp; tracker blocking, and compression. A few offer real-time image and video compression to save you even more bandwidth, which can be extremely useful for mobile devices or specific geographic regions.</p><p>So this one can easily be solved!
To prevent your BIAS from seeing the IP addresses you are visiting and the destination server from seeing your source IP, start using a VPN.</p><p>You can <a href="https://github.com/Nyr/openvpn-install" rel="noopener">set up your own VPN</a>, but it requires knowledge of setting up servers and running commands. For most people, it is easy to search for and find a VPN provider. Just be careful to find one that does not log your traffic and one that is large enough to have the bandwidth so that your traffic is not extremely slow. I personally use <a href="https://www.privateinternetaccess.com/" rel="noopener">Private Internet Access</a>. I&#x2019;ve been using them for several years; however, there are other options out there.</p><p><strong>Why is a proxy not an option?</strong> While you get very similar results using a proxy as you do when using a VPN, the downside is that a proxy generally does not encrypt your connections. This means that the destination might not be able to see your source (your home&#x2019;s) IP address, but your BIAS can certainly still see all your traffic. Some proxies, like HTTPS proxies, can encrypt connections, but they only support encrypting your web traffic, and it is not easy to configure a system-wide proxy (although apps like <a href="https://proxifier.com/" rel="noopener">Proxifier</a> do exist for that). As a result of using only a proxy, much of your computer&#x2019;s traffic will not be encrypted, secured, or kept private.</p><p><strong>Why is Tor not an option?</strong> <a href="https://www.torproject.org/index.html.en" rel="noopener">Tor</a> is actually a good option, but it is a limiting one. While Tor may be private, it is generally not used for the entire system, so many apps, just like when using a proxy, will still reveal information. Tor is also not great when making insecure connections (like HTTP vs. HTTPS), as it means many people along the way can both see and change the data being transmitted.
Tor is decent to use when in a pinch, but it&#x2019;s not going to solve all the problems.</p><p>Without the use of a VPN, you cannot hide all of the IP addresses you are requesting from a BIAS (although a secure proxy and Tor can hide some). Once the IP address you are requesting is known, it is a simple matter of doing a <a href="https://en.wikipedia.org/wiki/Reverse_DNS_lookup" rel="noopener">reverse DNS lookup</a> on the IP addresses to get an idea of the websites you are visiting. If the connection is secure and encrypted, they&#x2019;ll still know the website you are loading, but not the page or data being sent or received.</p><h4 id="traffic-statistics">Traffic Statistics</h4><p>Traffic statistics cover a range of information, including the destinations being requested (which, as mentioned above, you can hide with the use of a VPN), but also the amount of data consumed, broken down by month, day, or time of day, and the size of your data and packets.</p><p>Even when using a VPN, the amount of traffic consumed can reveal information about your habits. However, the size of packets by connection can also reveal details like whether you are streaming a video, downloading large files, or browsing the web. While you might be able to hide the specific video being watched, file being downloaded, or website you are visiting, you cannot hide this information from BIAS providers.</p><p>The FCC prevents mobile services from selling your call history, including the number you spoke with, the duration, and the time the call was placed or received.
The FCC similarly protected our Internet habits, but you will lose this protection, and regardless of whether you use a VPN, a BIAS can determine valuable information from the way you use your connection.</p><p>This again is another point where your only option is to switch providers to one that protects your data.</p><h4 id="port-information">Port Information</h4><p>A <a href="https://en.wikipedia.org/wiki/Port_%28computer_networking%29" rel="noopener">port</a> is a number that helps both the sender and the receiver of a connection request know what service is being requested. This, alongside the IP address of a request, is protected by the use of a VPN. Without the protection of a VPN, even now, each connection your computer makes has a port associated with it that reveals information about your habits. This happens even if you are connecting to secured websites. There is <a href="https://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers" rel="noopener">a list of popular ports used by applications on Wikipedia</a>.</p><p>Your BIAS can view this port to learn the type of application you are using, revealing information about your habits. For example, traffic on ports 80 or 443 reveals you are browsing the web, while traffic on ports 6881&#x2013;6887 reveals you are using a BitTorrent client. Securing yourself with a VPN protects this information from being revealed to your BIAS.</p><h4 id="application-header">Application Header</h4><p>We&#x2019;re now getting into a category of tracking that should truly worry all people. The FCC defined Application Header as:</p><blockquote>&#x201C;The application will usually append one or more headers to the payload; these headers contain information about the application payload that the application is sending or requesting. For example, in web browsing, the Uniform Resource Locator (URL) of a Web page constitutes application header information.
In a conversation via email, instant message, or video chat, an application header may disclose the parties to the conversation.&#x201D;</blockquote><p>The FCC was exactly right. Many requests reveal a lot of information in those connection headers, including the domain you are requesting, the page being requested, the application you are using (i.e. the web browser you are using), and request headers like the search terms you are submitting. Now, many web servers are encrypted using TLS/SSL (HTTPS), so for any encrypted connection, none of this information is revealed to anyone except the destination server.</p><p>This is also true when using a VPN. Even when using a VPN, any insecure connection is still going to be visible to the VPN provider. Whenever you make an insecure connection, you are exposed to the risk of a MITM attack (&#x201C;man in the middle&#x201D;). A MITM attack allows anyone between your computer and the destination to change the request or response, without either party being able to tell it was changed. This has been a security risk for a very long time and is why there are movements like <a href="https://letsencrypt.org/" rel="noopener">Let&#x2019;s Encrypt</a>, supported by major organizations like Mozilla, making it easier for website providers to encrypt their traffic.</p><p>To protect yourself, you should always use encrypted connections when they are available. I strongly recommend the use of <a href="https://www.eff.org/https-everywhere/" rel="noopener">HTTPS Everywhere</a>, a browser extension that works with Chrome, Firefox, and Opera to automatically reroute your request to be secure when a website supports encrypted connections.</p><p>I use HTTPS Everywhere with the additional feature to &#x201C;Block all unencrypted requests&#x201D; enabled. Doing this poses a noticeable impact on the usability of the Internet, but it protects me.
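</p><p>To make concrete what an application header exposes, here is a sketch of the plaintext a browser puts on the wire for an insecure HTTP request; the host, path, search terms, and cookie below are invented for illustration:</p>

```python
# A minimal, hypothetical HTTP/1.1 request. Over plain HTTP, every one of
# these lines crosses the network as readable text.
request = (
    "GET /search?q=knee+surgery+recovery HTTP/1.1\r\n"  # full path and search terms
    "Host: shop.example.com\r\n"                        # the site being visited
    "User-Agent: Mozilla/5.0 (X11; Linux x86_64)\r\n"   # the application in use
    "Cookie: session=abc123\r\n"                        # even session credentials
    "\r\n"
)
print("q=knee+surgery+recovery" in request)  # → True: the query is readable in transit
```

<p>Over HTTPS, all of these headers are encrypted before they leave your machine; only the destination itself remains visible. 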
Services such as eBay.com do not support whole-site encryption; as a result, whenever you browse ebay.com the searches you perform and products you look at are revealed to your BIAS. That is unacceptable to me, and I refuse to use eBay again until they fix this major flaw in their users&#x2019; privacy. Many other similar situations exist, but all the good websites out there support full encryption. For the most part, blocking all insecure requests does not affect my usage of the Internet. (Although, I wish Amazon&#x2019;s short URLs http://a.co/&#x2026; supported HTTPS, because when people share a link to an Amazon product, I now don&#x2019;t have a way to hide the product I am about to view.)</p><h4 id="application-usage">Application Usage</h4><p>A BIAS still might be able to determine the applications you&#x2019;re using to generate connections, regardless of whether you use a VPN. Due to the variety of ways a BIAS can collect information about you, it is possible to use heuristics and machine learning to determine this information. The applications used, such as your web browser (e.g. Chrome, Firefox, Opera), messaging, email, music/video streaming, or torrent applications, all generate traffic. They do so in unique ways and have unique characteristics that make it possible (if difficult) to identify them even when you use an encrypted, secure connection.</p><p>To truly hide this information, your best chance is to use a VPN with a client that supports good encryption.</p><p>Beyond just the application you use, many services are unique enough that even with a VPN and an encrypted connection it is possible for your BIAS to determine specific habits. Specifically, streaming technologies from unique services like YouTube, Netflix, Spotify, and many video games are optimized enough (to save bandwidth) that their unique patterns of requests can be determined even through the encryption.
While your BIAS might not see the specific video being watched, they can determine the service, time of day, and size of requests to get an idea of the kind of videos you are watching. You cannot protect yourself from this data gathering without the regulations the FCC put in place.</p><h4 id="application-payload">Application Payload</h4><p>Every connection has two key parts: the headers (discussed above in &#x201C;Application Headers&#x201D;) and the payload. While the headers reveal information about the pages and metadata about the clients being used, the payload is the actual content of a request. This is the <a href="https://en.wikipedia.org/wiki/HTML" rel="noopener">HTML</a> content of a web page that makes up the body and text you see. It is also the content of every image, video, and file requested or downloaded.</p><p>Similar to protecting your Application Headers, any secure connection protects this information from being revealed. The use of a VPN can help hide more details from your BIAS, but you should always prefer the use of a secure connection over an insecure one, so install <a href="https://www.eff.org/https-everywhere/" rel="noopener">HTTPS Everywhere</a>.</p><h4 id="customer-premises-equipment-and-device-information">Customer Premises Equipment and Device Information</h4><p>I&#x2019;ve touched on this in the &#x201C;MAC Addresses and Other Device Identifiers&#x201D; section: hardware provided by your BIAS is suspect and should be untrusted. However, the FCC specifically claimed that the hardware they provide should be protected. This means that the model of the devices they provide cannot be sold, as that might reveal the service, package, or capabilities of your service. They protect this specifically because the FCC similarly protects customers of mobile/wireless service providers by not allowing mobile services to sell information about the mobile devices being used, such as the model of cell phone you use.
While your mobile service knows that information, they cannot reveal or sell it to others. The FCC wanted to protect us similarly, even if the hardware was provided by the BIAS.</p><p>While you cannot protect yourself from this data being collected, you can make the information they have less meaningful. Even if you are forced to use provided hardware (e.g. router, modem), you can often adjust its settings to lock it down and disable shared wireless signals. Or, even better, add your own hardware to the mix, so as not to reveal anything about your home&#x2019;s actual devices, such as their MAC addresses.</p><h3 id="takeaways">Takeaways</h3><p>I realize this is a lengthy article, but it is necessary to see how much information is truly at risk, and while you can do a lot to protect yourself, you are not in control of much of the data that is going to be collected and sold about you.</p><p>For those who want to do all they can, the list is this:</p><ol><li>Switch providers (see below) if at all possible, to one that will not sell your data.</li><li>Use a VPN to protect and encrypt your traffic from your BIAS and to hide your source (your home&#x2019;s) IP address from others.</li><li>Enable DNS security: use <a href="https://dnscrypt.org/" rel="noopener">DNSCrypt</a> or DNSSEC, and change your DNS provider.</li><li>Use HTTPS as much as possible; install <a href="https://www.eff.org/https-everywhere" rel="noopener">HTTPS Everywhere</a>.</li><li>Be sure to use a device you control as your Internet gateway, so none of your devices&#x2019; unique identifiers can be revealed.
Set up your own wireless network and replace any provided hardware if possible.</li></ol><p>If you take these precautions, this is the kind of information your BIAS will be able to know and sell about you:</p><ol><li>Your Internet plan, including price, speeds, and data caps.</li><li>Your Internet usage, including data consumption and the times of day you use it.</li><li>Your geolocation (down to your address).</li><li>The manufacturer of your gateway/router device (possibly).</li><li>Potentially, the services you use for video or music streaming, or that you play video games.</li><li>Without a secure VPN, the IP addresses (and, through a reverse DNS lookup, the domains) you communicate with.</li></ol><p>For those with misconceptions, I am not trying to downplay the severity or the damage that removing these regulations will have on the privacy of the American people. I do want to make it clear that anyone who believes your BIAS will have unlimited access to your traffic is mistaken, and any information being sold by a BIAS will not be a list of all of the websites you visit <strong>unless you allow them to have that information.</strong></p><p>Talk to your representatives and support <a href="https://supporters.eff.org/donate" rel="noopener">eff.org</a>, which works to help improve our right to privacy.</p><h3 id="switching-internet-providers">Switching Internet Providers</h3><p>Since I mention it as a possible solution many times, I thought I&#x2019;d share some notes on your options for switching service providers.</p><h4 id="for-broadband-services">For broadband services</h4><p>For broadband use, switch to a local provider rather than using a big provider such as Comcast or Verizon.
While that is difficult in many regions of America, some regional services might exist in the form of a DSL provider, or <a href="http://chrishacken.com/starting-an-internet-service-provider/" rel="noopener">in specific regions, you might have local companies you can support</a>.</p><p>Additionally, consider joining a <a href="https://hyperboria.net" rel="noopener">Hyperboria</a> community near you (or starting one), or trying out other similar means of decentralizing the Internet a bit, like <a href="https://zeronet.io/" rel="noopener">ZeroNet</a>.</p><h4 id="for-mobile-services">For mobile services</h4><p>Mobile services are worth mentioning as well, including your cell phone service provider. Many providers, such as <a href="https://fi.google.com/about/" rel="noopener">Google Project Fi</a>, <a href="https://ting.com/" rel="noopener">Ting</a>, <a href="https://charge.co/" rel="noopener">Charge.co</a>, and others, care about their customers&#x2019; privacy and protect it.</p><h3 id="faq-1-what-if-i-cannot-use-a-vpn">FAQ 1: What if I cannot use a VPN?</h3><p>I&#x2019;m just going to address this concern now, as I know it will be mentioned a lot. There are many downsides to using a VPN, primarily that it slows your network speeds and may cost you money for the additional services and bandwidth.</p><p>For those who cannot use a VPN and who cannot switch BIAS providers to one that protects your privacy, I understand. For myself, I don&#x2019;t particularly care if my BIAS wants to see I am accessing google.com or reddit.com, as long as they cannot read my search terms, emails, and the specific posts, news, and comments I am reading.</p><p>In cases like this, be sure to use encrypted DNS (<a href="https://www.dnscrypt.org/" rel="noopener">DNSCrypt</a>), only use secure connections (<a href="https://www.eff.org/https-everywhere" rel="noopener">HTTPS Everywhere</a>), and protect what you can.
Reverse DNS is not 100% accurate, so your BIAS will not always be able to determine the websites you visit by IP address alone.</p><p>Otherwise, <a href="https://www.torproject.org" rel="noopener">Tor</a> and HTTPS proxies are available to at least protect your web browsing habits.</p>]]></content:encoded></item></channel></rss>