Links from 2020-05-03
Monoliths are the future because the problem people are trying to solve with microservices doesn’t really line up with reality.
Kelsey Hightower, Google cloud-native adherent and evangelist
The proc filesystem is an important feature of Linux that you can’t ignore. proc is a pseudo or virtual filesystem that provides an interface to kernel data structures. In other words, proc isn’t an actual filesystem in the real-world sense; rather, it resides only in memory and not on a disk. It is automatically mounted by the system.
Most of its contents are regular files and directories, so you can use most regular Linux tools to navigate the proc filesystem. The examples in this article should run the same on any Linux distribution.
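As a quick illustration (not from the linked article), here is a minimal Python sketch that reads a few /proc entries with ordinary file operations; the exact fields and values will differ between kernels and machines:

```python
#!/usr/bin/env python3
# /proc entries behave like regular text files, so plain reads work.
import os

# Kernel version and system uptime, exposed as pseudo-files.
with open("/proc/version") as f:
    print("kernel:", f.read().strip())

with open("/proc/uptime") as f:
    uptime_seconds = float(f.read().split()[0])
    print(f"uptime: {uptime_seconds:.0f} seconds")

# Per-process information lives under /proc/<pid>/; inspect this script itself.
with open(f"/proc/{os.getpid()}/status") as f:
    for line in f:
        if line.startswith(("Name:", "VmRSS:")):
            print(line.strip())
```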
Did you ever want to match a regex, but all you had was a FAT32 driver? Ever wanted to serialize your regex DFAs into one of the most widely supported formats used by over 3 billion devices? Are directory loops your thing?
Worry no more, with regex2fat this has become easier than ever before! With just a little regex2fat ‘[YOUR] F{4}VOUR{1,7}E (R[^E]G)*EX HERE.’ /dev/whatever, you will have a FAT32 regex DFA of your favourite regex. For example, to see whether the string ‘Y FFFFVOURRE EX HEREM’ would match, just mount it and check if ‘/Y/SPACE/F/F/F/F/V/O/U/R/R/E/SPACE/E/X/SPACE/H/E/R/E/M/MATCH’ exists.
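Assuming the generated image has been mounted somewhere (the mount point and helper below are made up for illustration), checking a candidate string then boils down to testing whether the corresponding directory path ends in a MATCH entry, roughly like this Python sketch:

```python
#!/usr/bin/env python3
# Hypothetical sketch: walk the mounted DFA image one character per directory
# and check whether the reached state contains a MATCH entry.
import os

MOUNT_POINT = "/mnt/regexfat"  # assumed mount point; adjust to your setup

def matches(candidate: str) -> bool:
    # Spaces are encoded as a literal "SPACE" path component in the image.
    parts = ["SPACE" if ch == " " else ch for ch in candidate]
    return os.path.exists(os.path.join(MOUNT_POINT, *parts, "MATCH"))

print(matches("Y FFFFVOURRE EX HEREM"))  # True if the DFA accepts the string
```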
If you’re trying to learn Docker, you will first have to master its various terminal commands. This guide aims to help you get started with the basic Docker commands.
systemd has become a mainstay for the Linux world, but one of the things that still seems to stick around is cron jobs. It’s understandable, as cron is a tool that we have been using for a long time. Change is hard, but I think systemd Timers make the change well worth it. Here are a few reasons why…
A large majority of computer systems have some state and are likely to depend on a storage system. My knowledge of databases accumulated over time, but along the way our design mistakes caused data loss and outages. In data-heavy systems, databases are at the core of system design goals and trade-offs. Even though it is impossible to ignore how databases work, the problems that application developers foresee and experience are often just the tip of the iceberg. In this series, I’m sharing a few insights I found especially useful for developers who are not specialized in this domain.
On any given day, we handle around 15% of the daily retail trading volume across all stock exchanges in India. The billions of requests generated in the process are handled by a suite of systems we have built in-house. We are also very particular about self-hosting as many dependencies as possible, everything from CRMs to large databases, Kafka clusters, mail servers, etc.
To aid these primary systems, a large number of ancillary workloads run alongside them, covering everything from real-time trades, document processing, KYC and account opening, legal and compliance, and complex, large-scale P&L and number crunching, to a wide range of back-office workloads. The systems are spread across a hybrid setup: physical racks in two different data centres (where exchange leased lines terminate) and AWS. All of this means we have a lot of dynamic workloads and dissimilar systems and environments, from bare metal to Kubernetes clusters, that have to be monitored independently.
The first and second open source migration waves were periods of rapid expansion for companies that rose up to provide commercial assurances for Linux and the open source databases, like Red Hat, MongoDB, and Cloudera, and for platforms that made it easier to host open source workloads in a reliable, consistent, and flexible manner via the cloud, like Amazon Web Services, Google Cloud, and Microsoft Azure.
This trend will continue in the third wave of open source migration, as organizations interested in reducing cost without sacrificing development speed will look to migrate more of their applications to open source. They’ll need a new breed of vendor—akin to Red Hat or AWS—to provide the commercial assurances they need to do it safely.
I’ve been writing about running Docker on Raspberry Pi for five years now and things have got a lot easier than when I started back in the day. There’s now no need to patch the kernel, use a bespoke OS, or even build Go and Docker from scratch.
The decision in 2017 to move back to a monolith considered all the trade-offs, including being comfortable with losing the benefits of microservices. The resulting architecture, named Centrifuge, is able to handle billions of messages per day sent to dozens of public APIs. There is now a single code repository, and all destination workers use the same version of the shared library. The larger worker is better able to handle spikes in load. Adding new destinations no longer adds operational overhead, and deployments only take minutes. Most important for the business, they were able to start building new products again. The team felt all these benefits were worth the reduced modularity, environmental isolation, and visibility that came for free with microservices.
In this letter, the minister agrees to the principle of Free Software by default ("Open Source by default") for procurement, which can be considered a parallel to the ‘comply or explain’ policy that is already in effect for the adoption of open standards. The minister also agrees to the government actively developing and publishing Free Software.
Tagged as: 2do, collection, commandline, container, cron, crontab, database, delicious, dev, docker, google, kubernetes, links, linux, microservice, monitoring, opensource, raspberrypi, regex, shaarli, systemd, wrong | Author: Martin Leyrer
Monday, 20200504, 05:00 | permanent link | 0 comment(s)