Redis as a JSON store

tl;dr: a Redis module that provides native JSON capabilities – get it from the GitHub repository or read the docs online.

JSON and Redis need no introduction; the former is the standard data interchange format between modern applications, while the latter is ubiquitous wherever applications need performant data management. That being the case, I was shocked to learn a couple of years ago that the two don’t get along.

Redis isn’t a one-trick pony; in fact, it is quite the opposite. Unlike general-purpose, one-size-fits-all databases, Redis (a.k.a. the “Swiss Army Knife of Databases”, the “Super Glue of Microservices” and the “Execution Context of Functions-as-a-Service”) provides specialized tools for specific tasks. Developers use these tools, which are exposed as abstract data structures and their accompanying operations, to model optimal solutions to problems. And that is exactly why using Redis to manage JSON data is unnatural.

Fact: despite its multitude of core data structures, Redis has none that fit the requirements of a JSON value. Sure, you can work around that by using other data types: Strings are great for storing raw serialized JSON, and you can represent flat JSON objects with Hashes. But these workaround patterns impose limitations that make them useful only in a handful of use cases, and even then the experience leaves an un-Redis-ish aftertaste. Their awkwardness clashes sharply with the simplicity and elegance of using Redis normally.
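
For illustration, here’s roughly what the two workarounds look like with the redis-py client (a sketch only; the key names and payload are made up):

    import json
    import redis

    r = redis.Redis()  # assumes a local Redis server

    user = {"name": "Leonard Cohen", "verified": True}

    # Workaround 1: store the whole document as a serialized String.
    # Any update means GET, deserialize, modify, serialize, SET.
    r.set("user:1", json.dumps(user))
    doc = json.loads(r.get("user:1"))

    # Workaround 2: map a flat object onto a Hash.
    # Values are flattened to strings, and nesting is impossible.
    r.hset("user:2", mapping={"name": "Leonard Cohen", "verified": "true"})
    name = r.hget("user:2", "name")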

But all that changed during the last year, after Salvatore Sanfilippo’s (@antirez) visit to the Tel Aviv office and with Redis modules becoming a reality. Suddenly the sky wasn’t the limit anymore. Now that modules let anyone do anything, it turned out that I could be that particular anyone. Picking up C development after a hiatus of more than two decades proved to be less of a nightmare than I had anticipated, and with Dvir Volk’s (@dvirsky) loving guidance we birthed ReJSON.
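
With the module loaded, JSON becomes a native data type manipulated with dedicated commands. A minimal sketch via redis-py’s generic command interface (assuming a Redis server started with the ReJSON module; key and payload are made up):

    import json
    import redis

    r = redis.Redis()  # assumes Redis was started with the ReJSON module

    # JSON.SET <key> <path> <json> stores a JSON value natively at the key.
    r.execute_command("JSON.SET", "user:1", ".",
                      json.dumps({"name": "Leonard Cohen", "verified": True}))

    # JSON.GET can fetch the whole document or just a sub-path.
    name = json.loads(r.execute_command("JSON.GET", "user:1", ".name"))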

https://redislabs.com/blog/redis-as-a-json-store/

Stack Overflow – Developer Survey Results 2017

Each year since 2011, Stack Overflow has asked developers about their favorite technologies, coding habits, and work preferences, as well as how they learn, share, and level up. This year represents the largest group of respondents in our history: 64,000 developers took our annual survey in January.

As the world’s largest and most trusted community of software developers, we run this survey and share these results to improve developers’ lives: We want to empower developers by providing them with rich information about themselves, their industry, and their peers. And we want to use this information to educate employers about who developers are and what they need.

We learn something new every time we run our survey. This year is no exception:

  • A common misconception about developers is that they’ve all been programming since childhood. In fact, we see a wide range of experience levels. Among professional developers, 11.3% got their first coding jobs within a year of first learning how to program. A further 36.9% learned to program between one and four years before beginning their careers as developers.
  • Only 13.1% of developers are actively looking for a job. But 75.2% of developers are interested in hearing about new job opportunities.
  • When we asked respondents what they valued most when considering a new job, 53.3% said remote options were a top priority. A majority of developers, 63.9%, reported working remotely at least one day a month, and 11.1% say they work remotely full-time or almost all the time.
  • A majority of developers said they were underpaid. Developers who work in government and non-profits feel the most underpaid, while those who work in finance feel the most overpaid.

Want to dive into the results yourself? In a few weeks, we’ll make the anonymised results of the survey available for download under the Open Database License (ODbL). We look forward to seeing what you find!

http://stackoverflow.com/insights/survey/2017/

Go with Peter Bourgon

Go is a lovely little programming language designed by smart people you can trust and continuously improved by a large and growing open-source community.

Go is meant to be simple, but sometimes the conventions can be a little hard to grasp. I’d like to show you how I start all of my Go projects, and how to use Go’s idioms. Let’s build a backend service for a web app.

  1. Setting up your environment
  2. A new project
  3. Making a web server
  4. Adding more routes
  5. Querying multiple APIs
  6. Make it concurrent
  7. Simplicity
  8. Further exercises

http://howistart.org/posts/go/1/

New AWS Encryption SDK for Python Simplifies Multiple Master Key Encryption

The AWS Cryptography team is happy to announce a Python implementation of the AWS Encryption SDK. This new SDK helps manage data keys for you, and it simplifies the process of encrypting data under multiple master keys. As a result, this new SDK allows you to focus on the code that drives your business forward. It also provides a framework you can easily extend to ensure that you have a cryptographic library that is configured to match and enforce your standards. The SDK also includes ready-to-use examples. If you are a Java developer, you can refer to this blog post to see specific Java examples for the SDK.

In this blog post, I show you how you can use the AWS Encryption SDK to simplify the process of encrypting data and how to protect your encryption keys in ways that help improve application availability by not tying you to a single region or key management solution.

How does the AWS Encryption SDK help me?

Developers using encryption often face three problems:

  1. How do I correctly generate and use a data key to encrypt data?
  2. How do I protect the data key after it has been used?
  3. How do I store the data key and ciphertext in a portable manner?

The library provided in the AWS Encryption SDK addresses the first problem by implementing the low-level envelope encryption details transparently using the cryptographic provider available in your development environment. The library helps address the second problem by providing intuitive interfaces to let you choose how you want to generate data keys and the master keys or key-encrypting keys that will protect data keys. Developers can then focus on the core of the application they are building instead of on the complexities of encryption. The ciphertext addresses the third problem, as described later in this post.

The AWS Encryption SDK defines a carefully designed and reviewed ciphertext data format that supports multiple secure algorithm combinations (with room for future expansion) and has no limits on the types or algorithms of the master keys. The ciphertext output of clients (created with the SDK) is a single binary blob that contains your encrypted message and one or more copies of the data key, as encrypted by each master key referenced in the encryption request. This single ciphertext data format for envelope-encrypted data makes it easier to ensure the data key has the same durability and availability properties as the encrypted message itself.

The AWS Encryption SDK provides production-ready reference implementations in Java and Python with direct support for key providers such as AWS Key Management Service (KMS). The Java implementation also supports the Java Cryptography Architecture (JCA/JCE) natively, which includes support for AWS CloudHSM and other PKCS #11 devices. The standard ciphertext data format the AWS Encryption SDK defines means that you can use combinations of the Java and Python clients for encryption and decryption as long as they each have access to the key provider that manages the correct master key used to encrypt the data key.
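
To make that concrete, here is a minimal sketch using the Python client with two KMS master keys in different regions (the key ARNs are placeholders, and the calls follow the 1.x client’s documented encrypt/decrypt entry points):

    import aws_encryption_sdk

    # Two master keys in different regions; either one alone can later
    # decrypt the message, which removes the single-region tie-in.
    key_provider = aws_encryption_sdk.KMSMasterKeyProvider(key_ids=[
        "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-1",
        "arn:aws:kms:us-west-2:111122223333:key/EXAMPLE-2",
    ])

    # encrypt() generates a data key, encrypts the payload with it, and
    # embeds a copy of the data key encrypted under each master key in
    # the single ciphertext blob.
    ciphertext, encrypt_header = aws_encryption_sdk.encrypt(
        source=b"hello, world",
        key_provider=key_provider,
    )

    # decrypt() parses the ciphertext format, unwraps an encrypted data
    # key via an available master key, and recovers the payload.
    plaintext, decrypt_header = aws_encryption_sdk.decrypt(
        source=ciphertext,
        key_provider=key_provider,
    )
    assert plaintext == b"hello, world"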


https://aws.amazon.com/blogs/security/new-aws-encryption-sdk-for-python-simplifies-multiple-master-key-encryption/

CloudWatch Events Now Supports AWS Step Functions as a Target

The Amazon CloudWatch Events service now supports AWS Step Functions state machines as event targets. Amazon CloudWatch Events enables you to respond quickly to application availability issues or configuration changes that might impact performance or security by notifying you of AWS resource changes in near-real-time. You simply write rules to indicate which events are of interest to your application and what automated action to take when a rule matches an event. You can, for example, invoke AWS Lambda functions or notify an Amazon Simple Notification Service (SNS) topic. Now, you can also send the matching events to an AWS Step Functions state machine to start a workflow responding to the event of interest, such as managing copies of Amazon Elastic Block Store (EBS) snapshots upon snapshot completion.

You may also schedule execution of AWS Step Functions state machines at intervals as short as one minute to automate processes such as synchronizing S3 buckets nightly.
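
As a rough boto3 sketch of the EBS-snapshot example (the rule name, event pattern, and ARNs are placeholders, and the target needs an IAM role permitted to call states:StartExecution):

    import json
    import boto3

    events = boto3.client("events")

    # Match EBS snapshot-completion events (pattern is illustrative).
    # For scheduled execution, pass ScheduleExpression="rate(1 minute)"
    # instead of EventPattern.
    events.put_rule(
        Name="ebs-snapshot-complete",
        EventPattern=json.dumps({
            "source": ["aws.ec2"],
            "detail-type": ["EBS Snapshot Notification"],
            "detail": {"event": ["createSnapshot"]},
        }),
        State="ENABLED",
    )

    # Point the rule at a Step Functions state machine.
    events.put_targets(
        Rule="ebs-snapshot-complete",
        Targets=[{
            "Id": "snapshot-workflow",
            "Arn": "arn:aws:states:us-east-1:111122223333:stateMachine:CopySnapshots",
            "RoleArn": "arn:aws:iam::111122223333:role/cwe-start-execution",
        }],
    )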

AWS Step Functions is available in the US East (N. Virginia), US East (Ohio), US West (Oregon), EU (Ireland), EU (Frankfurt), and Asia Pacific (Tokyo) regions.

Please visit our website for more information on Amazon CloudWatch Events and AWS Step Functions:

https://aws.amazon.com/about-aws/whats-new/2017/03/cloudwatch-events-now-supports-aws-step-functions-as-a-target

Paxos in 25 lines

        --- Paxos Proposer ---

    proposer(v):
      while not decided:
        choose n, unique and higher than any n seen so far
        send prepare(n) to all servers including self
        if prepare_ok(n, na, va) from majority:
          v' = va with highest na; choose own v otherwise
          send accept(n, v') to all
          if accept_ok(n) from majority:
            send decided(v') to all

        --- Paxos Acceptor ---

    acceptor state on each node (persistent):
      np     --- highest prepare seen
      na, va --- highest accept seen

    acceptor's prepare(n) handler:
      if n > np
        np = n
        reply prepare_ok(n, na, va)
      else
        reply prepare_reject

    acceptor's accept(n, v) handler:
      if n >= np
        np = n
        na = n
        va = v
        reply accept_ok(n)
      else
        reply accept_reject
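
As a minimal companion sketch, the acceptor side translates almost line-for-line into Python (message transport, persistence of the state, and the proposer loop are deliberately left out):

    class PaxosAcceptor:
        # Single-slot Paxos acceptor; np/na/va must survive restarts
        # in a real deployment, so persist them before replying.

        def __init__(self):
            self.np = -1      # highest prepare number seen
            self.na = -1      # highest accept number seen
            self.va = None    # value of the highest accept seen

        def on_prepare(self, n):
            if n > self.np:
                self.np = n
                return ("prepare_ok", n, self.na, self.va)
            return ("prepare_reject",)

        def on_accept(self, n, v):
            if n >= self.np:
                self.np = n
                self.na = n
                self.va = v
                return ("accept_ok", n)
            return ("accept_reject",)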

http://nil.csail.mit.edu/6.824/2015/notes/paxos-code.html

Automagically synchronize audio and text

aeneas is a Python/C library and a set of tools to automagically synchronize audio and text (aka forced alignment).

Goal

aeneas automatically generates a synchronization map between a list of text fragments and an audio file containing the narration of the text. In computer science this task is known as (automatically computing a) forced alignment.

The synchronization map can be output to file in several formats, depending on its application:

  • research: Audacity (AUD), ELAN (EAF), TextGrid;
  • digital publishing: SMIL for EPUB 3;
  • closed captioning: SubRip (SRT), SubViewer (SBV/SUB), TTML, WebVTT (VTT);
  • Web: JSON;
  • further processing: CSV, SSV, TSV, TXT, XML.
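
For a taste of the library API, a minimal sketch along the lines of the project’s README (paths and the language code are placeholders):

    from aeneas.executetask import ExecuteTask
    from aeneas.task import Task

    # Configure a task: language, input text format, and sync map format.
    config_string = u"task_language=eng|is_text_type=plain|os_task_file_format=srt"
    task = Task(config_string=config_string)
    task.audio_file_path_absolute = u"/path/to/audio.mp3"
    task.text_file_path_absolute = u"/path/to/text.txt"
    task.sync_map_file_path_absolute = u"/path/to/output.srt"

    # Run the forced alignment and write the sync map (SRT here) to disk.
    ExecuteTask(task).execute()
    task.output_sync_map_file()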

https://github.com/readbeyond/aeneas

Amazon ElastiCache Launches Enhanced Redis Backup and Restore with Cluster Resizing

We are excited to announce that Amazon ElastiCache now supports enhanced Redis Backup and Restore with Cluster Resizing. In October 2016, we launched support for Redis Cluster with Redis 3.2.4. In addition to letting you scale your Redis workloads across up to 15 shards with up to 3.5TiB of data, that launch allowed creating cluster-level backups, which contain snapshots of each of the cluster’s shards. With this launch, we are adding the capability to restore a backup into a Redis Cluster with a different number of shards and slot distribution, allowing you to resize your Redis workload. ElastiCache will parse the Redis key space across the backup’s individual snapshots, and redistribute the keys in the new cluster according to the requested number of shards and hash slots. Your new cluster can be either larger or smaller, as long as the data fits in the selected configuration.

Enhanced Backup and Restore with Cluster Resizing also provides an easy migration path to a managed Redis Cluster experience on ElastiCache. If you are running self-managed Redis on EC2, you can take RDB snapshots of your existing workloads (both Redis Cluster and single-shard Redis) and store them in S3. Then simply provide them, along with the desired number of shards, as input for creating a sharded Redis Cluster on ElastiCache. ElastiCache will do the rest.
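
As a sketch of the restore-with-resizing call via boto3 (the group name, snapshot name, node type, and shard count are placeholders):

    import boto3

    elasticache = boto3.client("elasticache")

    # Restore a cluster-level backup into a new Redis Cluster with a
    # different shard count; ElastiCache redistributes the key space.
    elasticache.create_replication_group(
        ReplicationGroupId="resized-cluster",
        ReplicationGroupDescription="Restored from backup with more shards",
        SnapshotName="my-cluster-backup",  # existing ElastiCache backup
        Engine="redis",
        EngineVersion="3.2.4",
        CacheNodeType="cache.r3.large",
        NumNodeGroups=10,                  # desired number of shards
        ReplicasPerNodeGroup=1,
        AutomaticFailoverEnabled=True,
    )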

https://aws.amazon.com/about-aws/whats-new/2017/03/amazon-elasticache-launches-enhanced-redis-backup-and-restore-with-cluster-resizing

20+ VPNs rated on privacy and security side-by-side

A VPN is now a necessity for anyone who values their privacy online. By encrypting all internet traffic and tunneling it through a remote server, a VPN makes it much harder for hackers, governments, corporations, and internet service providers to monitor your activity or trace it back to you.

https://www.comparitech.com/blog/vpn-privacy/best-vpns-privacy-and-anonymity