The world is getting smaller through digital transformation that allows anyone with a mobile device and internet access to connect with people on the other side of the globe.
This digital disruption is making the traditional cubicle-lined office more obsolete by the day, as many up-and-coming startups and established enterprises open up to remote work.
Working remotely away from a physical office, whether from the comfort of your own home or a seaside resort, is an alluring proposition for any job seeker; yet all that freedom and flexibility can distract us from our actual responsibilities.
Here’s how to create a home office environment that helps you stay focused and productive.
Recently I have been playing around with the Serverless framework and AWS Lambda, and I have to say, I have been awestruck.
This piece outlines the pros and cons of Express and Serverless and explains why it made sense for our team at Pilcro to switch over. It is aimed at tech teams looking to deploy and manage Node.js APIs on AWS (or similar).
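To give a flavor of what the switch looks like in practice, here is a minimal, hypothetical `serverless.yml` for deploying a single Node.js HTTP endpoint with the Serverless framework (the service name, handler path, and region below are illustrative assumptions, not Pilcro's actual config):

```yaml
service: pilcro-api            # hypothetical service name

provider:
  name: aws
  runtime: nodejs8.10          # a Node.js runtime available at the time
  region: eu-west-1            # assumed region

functions:
  getArtwork:                  # hypothetical endpoint
    handler: handlers/artwork.get
    events:
      - http:                  # exposes the function via API Gateway
          path: artwork/{id}
          method: get
```

Running `serverless deploy` against a file like this provisions the Lambda function and API Gateway route in one step, which is a large part of the appeal over hand-managing an Express server.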
I’ve been helping out on a project recently where we’re doing a number of integrations with third-party services. The integration platform is built on AWS Lambda and the Serverless framework.
Aside from the data hygiene questions that you might expect in an integration project like this, one of the first things we’ve run into is a fundamental constraint in productionizing Lambda-based systems. As of today, AWS Lambda has the following limits (among others):
- max of 1000 concurrent executions per region (a soft limit that can be increased), and
- max duration of 5 minutes for a single execution
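The duration cap in particular shapes how handlers must be written: long-running work has to be chunked so the function can stop cleanly before Lambda kills it. Below is a minimal sketch (in Python, with hypothetical `items`/`process_item` names; the real context object is supplied by Lambda) of a handler that checks the remaining execution time and hands back unfinished work instead of timing out mid-task:

```python
def process_item(item):
    # Placeholder for real per-item integration work.
    return item * 2

def handler(event, context):
    """Sketch of a Lambda handler that respects the execution-time cap by
    checking context.get_remaining_time_in_millis() and stopping early."""
    items = event.get("items", [])
    results, leftover = [], []
    for i, item in enumerate(items):
        # Stop with ~10 seconds to spare and return the unfinished items
        # (e.g., to be re-enqueued) rather than being killed mid-item.
        if context.get_remaining_time_in_millis() < 10_000:
            leftover = items[i:]
            break
        results.append(process_item(item))
    return {"processed": results, "leftover": leftover}

# A fake context for local testing; Lambda provides the real one.
class FakeContext:
    def __init__(self, ms):
        self.ms = ms
    def get_remaining_time_in_millis(self):
        return self.ms

out = handler({"items": [1, 2, 3]}, FakeContext(300_000))
```

In production the `leftover` items would typically be pushed back onto a queue, which is also one way to work around the concurrency limit: the queue absorbs bursts that would otherwise exceed 1,000 concurrent executions.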
In May the European Union’s General Data Protection Regulation goes into effect, two years after passage by the European Parliament. This radical new privacy law, which covers any business that processes information about EU residents, will dramatically affect the way data is collected, stored, and used, including for U.S. companies doing business abroad.
In the U.S., lawmakers are now circling waters bloodied by revelations regarding potential abuse of Facebook’s social media data, with CEO Mark Zuckerberg scheduled to testify on Capitol Hill this week about the “use and protection of user data.” Facebook’s woes, following continued reports of major data breaches at other leading companies, have amplified calls for GDPR-like legislation in the U.S.
Why geographically-distributed SRE teams?
In today’s hyper-connected technological world, there is a need for geographically widespread technical teams to facilitate global growth. Businesses that scale to this level need global teams to handle that reach. However, as development teams continue to ramp up, it quickly becomes infeasible for them to solve operational problems while also scaling the product. That’s where SREs, whose primary job is to develop software that solves operational problems, come in. Investing in geographically-distributed (GD) SRE teams is key to achieving the goal of scaling a business or product to a global audience.
In part one of this series, we discussed some of the key principles to consider when developing geographically distributed (GD) SRE teams. Similar to the first article, we’re leveraging the journey of LinkedIn’s SRE team as the point of reference for the topics discussed here in part two. Within this post, we’ll discuss growth planning, the challenges associated with being part of a remote team, and some of the unexpected advantages geographically distributed SRE teams can offer.
There are many articles out there about being an effective remote worker. Most of them focus on the basics, like building the ideal home office, following strategies for effective video conferencing, managing cabin fever, avoiding becoming a remote-developer black box, or improving self-discipline, motivation, and communication.
But where do you go from there? People often ask me that. Julia Evans at Stripe has done a great job writing about this, and I’d like to give you my take as well.
In the last couple of years, we have observed the evolution of several message brokers and queuing services, all of them fast, reliable, and scalable. While the list is long, in this blog I will limit the discussion to SQS, Kinesis, and Kafka. Simple Queue Service (SQS) is a fully managed, scalable queuing service on AWS. Kinesis, another AWS service, makes it easy to load and analyze streaming data and also provides the ability to build custom streaming-data applications for special requirements. Apache Kafka is a fast, scalable, durable, and fault-tolerant publish-subscribe messaging system, often used in place of traditional JMS- or AMQP-based message brokers because of characteristics like higher throughput, reliability, and replication.
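From a producer's point of view, all three systems accept an opaque string or byte payload, and the differences show up in the send call itself. The sketch below illustrates this in Python, assuming the usual clients (boto3 for SQS/Kinesis, kafka-python for Kafka); the topic, stream, and key names are hypothetical, and the actual send calls are shown as comments since they require live endpoints:

```python
import json

def make_record(event_type, payload):
    """Serialize an application event to JSON, as you would for any of the
    three systems (all accept opaque string/byte payloads)."""
    return json.dumps({"type": event_type, "payload": payload})

record = make_record("signup", {"user": "alice"})

# The producer call differs per system (sketches, not executed here):
#
# SQS (boto3) -- no ordering or partitioning concept in a standard queue:
#   boto3.client("sqs").send_message(QueueUrl=queue_url, MessageBody=record)
#
# Kinesis (boto3) -- the partition key picks the shard, and ordering is
# guaranteed only within a shard:
#   boto3.client("kinesis").put_record(
#       StreamName=stream_name, Data=record, PartitionKey="alice")
#
# Kafka (kafka-python) -- a key similarly pins a record to a partition:
#   KafkaProducer(bootstrap_servers="broker:9092").send(
#       "signups", key=b"alice", value=record.encode())
```

The near-identical payload handling is why migrations like ours are feasible; the real differences lie in ordering, retention, and operational cost rather than the producer API.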
When deciding which messaging system is right for you, it is important to understand not only the technical differences but also the operational costs, both of running these systems at scale and of monitoring them. In this blog, I will touch upon our experiences and lessons learned at OpsClarity from evaluating these messaging systems and migrating from SQS to Kafka.