
Four short links: 1 June 2018

O'Reilly Radar - Fri, 2018/06/01 - 01:00

AI Touch, Drone Delivery, WTF, and JavaScript Robotics

  1. Artificial Sense of Touch -- This rudimentary artificial nerve circuit integrates three previously described components. The first is a touch sensor that can detect even minuscule forces. This sensor sends signals through the second component -- a flexible electronic neuron. The touch sensor and electronic neuron are improved versions of inventions previously reported by the Bao lab. Sensory signals from these components stimulate the third component, an artificial synaptic transistor modeled after human synapses.
  2. Drone Delivery Coming to Vanuatu -- the nation is opening a tender for vaccine delivery services between islands. UNICEF and the government of Vanuatu expect that a few drone companies will become the long-term solution to the many logistical challenges of “last-mile delivery” of vaccines on the small islands.
  3. wtf -- a personal terminal-based dashboard utility, designed for displaying infrequently-needed, but very important, daily data.
  4. Johnny-Five -- an Open Source, Firmata Protocol based, IoT and Robotics programming framework, developed at Bocoup. Johnny-Five programs can be written for Arduino (all models), Electric Imp, Beagle Bone, Intel Galileo & Edison, Linino One, Pinoccio, pcDuino3, Raspberry Pi, Particle/Spark Core & Photon, Tessel 2, TI Launchpad and more.

Continue reading Four short links: 1 June 2018.

Categories: Technology

Steering around blockchain hype

O'Reilly Radar - Thu, 2018/05/31 - 03:00

When we finally find the best use cases for blockchains they may look like nothing we would have expected.

One of the puzzles of the blockchain world is the gap between the enterprise cheerleaders, who want to see a blockchain in anything, and people from the Bitcoin world, who frequently don't see applications of blockchains outside of currency. It's hardly surprising that there's a feeding frenzy around any new and cool technology. And it's to be expected that those who already understand the new technology are more cautious about how it's applied. But for blockchains, that caution is extreme. Jimmy Song's recent articles, “Alternatives to Blockchain” and “Why Blockchain Is Hard,” are good examples.

I know Song, who's writing a book on Bitcoin programming for O'Reilly, and I agree on many of his specific points. Even where I disagree, Song knows a lot more about blockchains than I do. And blockchain advocates need to learn a lot from people who have actually made a blockchain work: there's a lot of fantasy and unreality at large. Song’s articles should be required reading for anyone considering a blockchain deployment.

Continue reading Steering around blockchain hype.

Categories: Technology

Four short links: 31 May 2018

O'Reilly Radar - Thu, 2018/05/31 - 01:00

Internet Trends, Deep Learning, Governing Commons, and Invisible Asymptotes

  1. Mary Meeker Internet Trends Report 2018 -- growth in the number of internet-connected devices and users has slowed, but usage is still growing. And check out that exponential growth in the number of Wi-Fi networks globally. Her presentation has become a whole lot less focused as she scrambles for things that may still indicate the tech boom isn't over.
  2. Deep Learning's Value (Hacker News) -- If you think Deep (Reinforcement) Learning is going to solve AGI, you are out of luck. If you however think it's useless and won't bring us anywhere, you are guaranteed to be wrong. Frankly, if you are daily working with Deep Learning, you are probably not seeing the big picture (i.e. how horrible methods used in real-life are and how you can easily get very economical 5% benefit of just plugging in Deep Learning somewhere in the pipeline; this might seem little but managers would kill for 5% of extra profit).
  3. Governing the Commons: The Evolution of Institutions for Collective Action (Amazon) -- Dr Ostrom uses institutional analysis to explore different ways - both successful and unsuccessful - of governing the commons. In contrast to the proposition of the 'tragedy of the commons' argument, common pool problems sometimes are solved by voluntary organizations rather than by a coercive state. Among the cases considered are communal tenure in meadows and forests, irrigation communities and other water rights, and fisheries.
  4. Invisible Asymptotes -- interesting stories from inside Amazon, then the idea of invisible asymptotes: the things that will stop your growth but you don't know what they are (the "shoulders in the S-curve"). People hate paying for shipping. They despise it. It may sound banal, even self-evident, but understanding that was, I'm convinced, so critical to much of how we unlocked growth at Amazon over the years. People don't just hate paying for shipping, they hate it to literally an irrational degree.

Continue reading Four short links: 31 May 2018.

Categories: Technology

Four short links: 30 May 2018

O'Reilly Radar - Wed, 2018/05/30 - 01:00

Rapidly Learning Games, Geo Toolbox, Philosophy and CS, and Moravec's Paradox

  1. Playing Hard Exploration Games by Watching YouTube -- This method of one-shot imitation allows our agent to convincingly exceed human-level performance on the infamously hard exploration games Montezuma's Revenge, Pitfall! and Private Eye for the first time, even if the agent is not presented with any environment rewards. (via @hardmaru)
  2. An Open-Source Geospatial Toolbox -- Uber's React-built geo toolkit. No word on whether there's a function for faking randomly circling cars near your location.
  3. Why Philosophers Should Care About Computer Science (Scott Aaronson) -- computational complexity theory—the field that studies the resources (such as time, space, and randomness) needed to solve computational problems—leads to new perspectives on the nature of mathematical knowledge, the strong AI debate, computationalism, the problem of logical omniscience, Hume’s problem of induction, Goodman’s grue riddle, the foundations of quantum mechanics, economic rationality, closed timelike curves, and several other topics of philosophical interest.
  4. Moravec's Paradox -- the discovery by artificial intelligence and robotics researchers that, contrary to traditional assumptions, high-level reasoning requires very little computation, but low-level sensorimotor skills require enormous computational resources.

Continue reading Four short links: 30 May 2018.

Categories: Technology

Is conversation really the best UI experience in a conversational UI?

O'Reilly Radar - Tue, 2018/05/29 - 04:00

Learn design best practices and where conversational AIs are headed in the future.

Continue reading Is conversation really the best UI experience in a conversational UI?.

Categories: Technology

Resist the myth of the one-size-fits-all learning modality

O'Reilly Radar - Tue, 2018/05/29 - 03:00

A commitment to multi-modal learning is better than grasping for single-modality solutions that don’t deliver.

Picture this: A usually introverted but passionate learning professional finishes reading yet another proclamation that the key to effective learning is found in one delivery format or another. She sits there with a rising sense of frustration. What does she do? Well if the introverted-but-passionate learning pro is me, she mounts a proverbial soapbox and begins shouting:

“Fellow lovers of learning. It’s time to stop the silver-bullet madness. We must abandon our obsession with the one-size-fits-all approach to learning.”

Oh I know, we say we reject one-size-fits-all notions, but in the very same breath we proclaim micro-learning as the “solution” or we design entire learning ecosystems around e-learning modules. Neither of these modalities is inherently bad—both can be done terrifically well and pitifully poorly. This is true of all modalities.

Modality is only the delivery mechanism, and while delivery matters very much, it isn’t where the single answer to learning success lies. Spoiler alert: There isn’t a single answer at all.

The truth is, the most effective learning opportunities come from multi-modal experiences.

Why is multi-modal important?

We now know that most of the research on learning styles has been too poorly designed to show that individuals gain any real advantage from learning in one modality (e.g., auditory vs. visual) rather than another. In fact, cognitive science research suggests that retention and recall are improved by multi-modal learning. This is sometimes called the “context effect,” and it speaks not only to the modality through which you consume content but also to where you are when you consume it, as well as other environmental variables. Varying your approach to learning on all levels of context increases the likelihood of retention. In other words, if “it” is something you wish to learn and you read about it, then watch something about it, then take a self-assessment on it, you are more likely to learn it than if you only read something about it.

So by all means, pursue micro-learning, virtual reality programs, text content, augmented reality solutions, or video courses, but don’t fall into the trap that any one of them is the answer. They may all be. Look for ways to select modalities that fit your content, learning objectives, learners’ needs, and budget, and then vary it up. After all, variety is the spice of life, right? And your learners will appreciate you for it because they will learn more and apply more of their learning. And that is what we’re all shooting for in the end anyway.

Continue reading Resist the myth of the one-size-fits-all learning modality.

Categories: Technology

Four short links: 29 May 2018

O'Reilly Radar - Tue, 2018/05/29 - 01:00

Data Beats Algorithms, Copyright Futures, Data Privacy, and Cryptocurrency Attacks

  1. You Need to Improve Your Training Data (Pete Warden) -- without changing the model or test data at all, the top-one accuracy increased by over 4%, from 85.4% to 89.7%. Written up in an arXiv paper.
  2. Future Not Made -- potential products that won't exist if the EU passes a database copyright law. In the words of Cory Doctorow: The feature all these devices share is that they rely on databases of user-supplied assets -- annotations, recorded sensations, shapefiles -- of the sort that the EU is about to make legally impossible. (via BoingBoing)
  3. California Eyes Data Privacy Measures -- Mactaggart says the proposed law would not prevent Facebook, Google or a local newspaper from collecting users' data and using it to target ads to them. But users will have a right to stop companies from sharing or selling their data. And businesses would be required to disclose the categories of information they have on users — including home addresses, employment information and characteristics such as race and gender.
  4. Cost of a 51% Attack on Popular Cryptocurrencies -- the commentary on Hacker News is also interesting. As one notes: These attacks are only possible for coins where the last column is > 100%. That's still a distressingly large total market cap in minor coins, even if it doesn't include the big players.

Continue reading Four short links: 29 May 2018.

Categories: Technology

Four short links: 28 May 2018

O'Reilly Radar - Mon, 2018/05/28 - 01:00

Hypergrowth, Metaphor-Oriented Programming, Zombie Data, and Science Robotics Challenges

  1. Productivity in the Age of Hypergrowth -- good tips and perspective on scaling engineering teams as companies ramp up hiring.
  2. Homespring Programming Language Reference -- Homespring uses the paradigm of a river to create its astoundingly user-friendly semantics. Each program is a river system which flows into the watershed (the terminal output). Information is carried by salmon (which represent string values), which swim upstream trying to find their home river. Terminal input causes a new salmon to be spawned at the river mouth; when a salmon leaves the river system for the ocean, its value is output to the terminal. In this way, terminal I/O is neatly and elegantly represented within the system metaphor. It's a (joke) metaphor-oriented programming language that makes my eyes water.
  3. Engauge Digitizer -- Extracts data points from images of graphs. This creates zombie data (the data were dead and interred in a graph, now they're almost live again). Beware ...
  4. Grand Challenges of Science Robotics -- (i) New materials and fabrication schemes; (ii) Biohybrid and bioinspired robots; (iii) New power sources, battery technologies, and energy-harvesting schemes; (iv) Robot swarms; (v) Navigation and exploration in extreme environments; (vi) Fundamental aspects of artificial intelligence; (vii) Brain-computer interfaces (BCIs); (viii) Social interaction; (ix) Medical robotics with increasing levels of autonomy; (x) Ethics and security for responsible innovation in robotics.

Continue reading Four short links: 28 May 2018.

Categories: Technology

Kubernetes recipes: Maintenance and troubleshooting

O'Reilly Radar - Fri, 2018/05/25 - 11:50

Recipes that deal with various aspects of troubleshooting, from debugging pods and containers, to testing service connectivity, interpreting a resource’s status, and node maintenance.

In this chapter, you will find recipes that deal with both app-level and cluster-level maintenance. We cover various aspects of troubleshooting, from debugging pods and containers, to testing service connectivity, interpreting a resource’s status, and node maintenance. Last but not least, we look at how to deal with etcd, the Kubernetes control plane storage component. This chapter is relevant for both cluster admins and app developers.

Enabling Autocomplete for kubectl

Problem

It is cumbersome to type full commands and arguments for the kubectl command, so you want an autocomplete function for it.

Solution

Enable autocompletion for kubectl.

For Linux and the bash shell, you can enable kubectl autocompletion in your current shell using the following command:

$ source <(kubectl completion bash)

For other operating systems and shells, please check the documentation.

Removing a Pod from a Service

Problem

You have a well-defined service (see not available) backed by several pods. But one of the pods is misbehaving, and you would like to take it out of the list of endpoints to examine it at a later time.

Solution

Relabel the pod using the --overwrite option—this will allow you to change the value of the run label on the pod. By overwriting this label, you can ensure that it will not be selected by the service selector (not available) and will be removed from the list of endpoints. At the same time, the replica set watching over your pods will see that a pod has disappeared and will start a new replica.
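Conceptually, a service's endpoints are simply the pods whose labels satisfy the service's selector; the relabeling trick exploits exactly that. A minimal Python sketch of the matching rule (pod names and labels here are illustrative, not from a real cluster):

```python
# Sketch: how a service's label selector decides which pods become
# endpoints. A pod qualifies only if every key/value pair in the
# selector is present in the pod's labels.
def select_endpoints(selector, pods):
    """Return names of pods whose labels satisfy every selector entry."""
    return [
        pod["name"]
        for pod in pods
        if all(pod["labels"].get(k) == v for k, v in selector.items())
    ]

selector = {"run": "nginx"}
pods = [
    {"name": "nginx-5g45r", "labels": {"run": "notworking"}},  # relabeled pod
    {"name": "nginx-l429b", "labels": {"run": "nginx"}},
]

print(select_endpoints(selector, pods))  # ['nginx-l429b']
```

Changing the run label to any non-matching value drops the pod from the result while leaving the pod itself running, which is why you can still inspect it afterward.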

To see this in action, start with a straightforward deployment generated with kubectl run (see not available):

$ kubectl run nginx --image nginx --replicas 4

When you list the pods and show the label with key run, you’ll see four pods with the value nginx (run=nginx is the label that is automatically generated by the kubectl run command):

$ kubectl get pods -Lrun
NAME                    READY  STATUS   RESTARTS  AGE  RUN
nginx-d5dc44cf7-5g45r   1/1    Running  0         1h   nginx
nginx-d5dc44cf7-l429b   1/1    Running  0         1h   nginx
nginx-d5dc44cf7-pvrfh   1/1    Running  0         1h   nginx
nginx-d5dc44cf7-vm764   1/1    Running  0         1h   nginx

You can then expose this deployment with a service and check the endpoints, which correspond to the IP addresses of each pod:

$ kubectl expose deployments nginx --port 80
$ kubectl get endpoints
NAME   ENDPOINTS        AGE
nginx  ,, + 1 more...   1h

Moving the first pod out of the service traffic via relabeling is done with a single command:

$ kubectl label pods nginx-d5dc44cf7-5g45r run=notworking --overwrite

Tip

To find the IP address of a pod, you can list the pod’s manifest in JSON and run a jq query:

$ kubectl get pods nginx-d5dc44cf7-5g45r -o json | \
    jq -r .status.podIP

You will see a brand new pod appear with the label run=nginx, and you will see that your nonworking pod still exists but no longer appears in the list of service endpoints:

$ kubectl get pods -Lrun
NAME                    READY  STATUS   RESTARTS  AGE  RUN
nginx-d5dc44cf7-5g45r   1/1    Running  0         21h  notworking
nginx-d5dc44cf7-hztlw   1/1    Running  0         21s  nginx
nginx-d5dc44cf7-l429b   1/1    Running  0         5m   nginx
nginx-d5dc44cf7-pvrfh   1/1    Running  0         5m   nginx
nginx-d5dc44cf7-vm764   1/1    Running  0         5m   nginx

$ kubectl describe endpoints nginx
Name:         nginx
Namespace:    default
Labels:       run=nginx
Annotations:  <none>
Subsets:
  Addresses:  ,,,
...

Accessing a ClusterIP Service Outside the Cluster

Problem

You have an internal service that is causing you trouble and you want to test that it is working well locally without exposing the service externally.

Solution

Use a local proxy to the Kubernetes API server with kubectl proxy.

Let’s assume that you have created a deployment and a service as described in Removing a Pod from a Service. You should see an nginx service when you list the services:

$ kubectl get svc
NAME   TYPE       CLUSTER-IP  EXTERNAL-IP  PORT(S)  AGE
nginx  ClusterIP              <none>       80/TCP   22h

This service is not reachable outside the Kubernetes cluster. However, you can run a proxy in a separate terminal and then reach it on localhost.

Start by running the proxy in a separate terminal:

$ kubectl proxy
Starting to serve on

Tip

You can specify the port that you want the proxy to run on with the --port option.

In your original terminal, you can then use your browser or curl to access the application exposed by your service. Note the specific path to the service; it contains a /proxy part. Without this, you get the JSON object representing the service:

$ curl http://localhost:8001/api/v1/proxy/namespaces/default/services/nginx/
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...

Note

Note that you can now also access the entire Kubernetes API over localhost using curl.
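To keep the path structure straight, here is a small Python helper that rebuilds the proxy URL used above. This mirrors the path shown in this recipe; note as an aside that newer Kubernetes releases moved the proxy segment to the end of the path (.../services/nginx/proxy/), so treat the exact layout as version-dependent:

```python
# Build the kubectl-proxy path for a ClusterIP service. The /proxy
# segment is what distinguishes "fetch what the service serves" from
# "fetch the JSON object representing the Service itself".
def service_proxy_url(namespace, service, port=8001):
    return (f"http://localhost:{port}/api/v1/proxy/"
            f"namespaces/{namespace}/services/{service}/")

print(service_proxy_url("default", "nginx"))
# http://localhost:8001/api/v1/proxy/namespaces/default/services/nginx/
```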

Understanding and Parsing Resource Statuses

Problem

You want to react based on the status of a resource—say, a pod—in a script or in another automated environment like a CI/CD pipeline.

Solution

Use kubectl get $KIND/$NAME -o json and parse the JSON output using one of the two methods described here.

If you have the JSON query utility jq installed, you can use it to parse the resource status. Let’s assume you have a pod called jump and want to know what Quality of Service (QoS) class[1] the pod is in:

$ kubectl get po/jump -o json | jq --raw-output .status.qosClass
BestEffort

Note that the --raw-output argument for jq will show the raw value and that .status.qosClass is the expression that matches the respective subfield.

Another status query could be around the events or state transitions:

$ kubectl get po/jump -o json | jq .status.conditions
[
  {
    "lastProbeTime": null,
    "lastTransitionTime": "2017-08-28T08:06:19Z",
    "status": "True",
    "type": "Initialized"
  },
  {
    "lastProbeTime": null,
    "lastTransitionTime": "2017-08-31T08:21:29Z",
    "status": "True",
    "type": "Ready"
  },
  {
    "lastProbeTime": null,
    "lastTransitionTime": "2017-08-28T08:06:19Z",
    "status": "True",
    "type": "PodScheduled"
  }
]

Of course, these queries are not limited to pods—you can apply this technique to any resource. For example, you can query the revisions of a deployment:

$ kubectl get deploy/prom -o json | jq .metadata.annotations
{
  "": "1"
}

Or you can list all the endpoints that make up a service:

$ kubectl get ep/prom-svc -o json | jq '.subsets'
[
  {
    "addresses": [
      {
        "ip": "",
        "nodeName": "minikube",
        "targetRef": {
          "kind": "Pod",
          "name": "prom-2436944326-pr60g",
          "namespace": "default",
          "resourceVersion": "686093",
          "uid": "eee59623-7f2f-11e7-b58a-080027390640"
        }
      }
    ],
    "ports": [
      {
        "port": 9090,
        "protocol": "TCP"
      }
    ]
  }
]
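If jq is not available, the same field extraction takes only a few lines of Python; the sketch below inlines a trimmed sample document rather than calling a real cluster:

```python
import json

# Python stand-in for the jq queries above: parse the output of
# `kubectl get ... -o json` and pull out status fields.
pod_doc = json.loads("""
{
  "status": {
    "qosClass": "BestEffort",
    "conditions": [
      {"type": "Initialized", "status": "True"},
      {"type": "Ready", "status": "True"}
    ]
  }
}
""")

# Equivalent of: jq --raw-output .status.qosClass
print(pod_doc["status"]["qosClass"])  # BestEffort

# Equivalent of: jq '.status.conditions[] | select(.type=="Ready")'
ready = next(c for c in pod_doc["status"]["conditions"]
             if c["type"] == "Ready")
print(ready["status"])  # True
```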

Now that you’ve seen jq in action, let’s move on to a method that doesn’t require external tooling—that is, the built-in feature of using Go templates.

The Go programming language defines templates in a package called text/template that can be used for any kind of text or data transformation, and kubectl has built-in support for it. For example, to list all the container images used in the current namespace, do this:

$ kubectl get pods -o go-template \
    --template="{{range .items}}{{range .spec.containers}}{{.image}} \
    {{end}}{{end}}"
busybox prom/prometheus

[1] Medium, "What are Quality of Service (QoS) Classes in Kubernetes".

Debugging Pods

Problem

You have a situation where a pod is either not starting up as expected or fails after some time.

Solution

To systematically discover and fix the cause of the problem, enter an OODA loop:

  1. Observe. What do you see in the container logs? What events have occurred? How is the network connectivity?

  2. Orient. Formulate a set of plausible hypotheses—stay as open-minded as possible and don’t jump to conclusions.

  3. Decide. Pick one of the hypotheses.

  4. Act. Test the hypothesis. If it’s confirmed, you’re done; otherwise, go back to step 1 and continue.

Let’s have a look at a concrete example where a pod fails. Create a manifest called unhappy-pod.yaml with this content:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: unhappy
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nevermind
    spec:
      containers:
      - name: shell
        image: busybox
        command:
        - "sh"
        - "-c"
        - "echo I will just print something here and then exit"

Now when you launch that deployment and look at the pod it creates, you’ll see it’s unhappy:

$ kubectl create -f unhappy-pod.yaml
deployment "unhappy" created

$ kubectl get po
NAME                      READY  STATUS            RESTARTS  AGE
unhappy-3626010456-4j251  0/1    CrashLoopBackOff  1         7s

$ kubectl describe po/unhappy-3626010456-4j251
Name:           unhappy-3626010456-4j251
Namespace:      default
Node:           minikube/
Start Time:     Sat, 12 Aug 2017 17:02:37 +0100
Labels:         app=nevermind
                pod-template-hash=3626010456
Annotations:    {"kind":"SerializedReference","apiVersion":"v1",
                "reference":{"kind":"ReplicaSet","namespace":"default",
                "name":"unhappy-3626010456",
                "uid":"a9368a97-7f77-11e7-b58a-080027390640"...
Status:         Running
IP:
Created By:     ReplicaSet/unhappy-3626010456
Controlled By:  ReplicaSet/unhappy-3626010456
...
Conditions:
  Type          Status
  Initialized   True
  Ready         False
  PodScheduled  True
Volumes:
  default-token-rlm2s:
    Type:       Secret (a volume populated by a Secret)
    SecretName: default-token-rlm2s
    Optional:   false
QoS Class:      BestEffort
Node-Selectors: <none>
Tolerations:    <none>
Events:
  FirstSeen ... Reason                 Message
  --------- ... ------                 -------
  25s       ... Scheduled              Successfully assigned unhappy-3626010456-4j251 to minikube
  25s       ... SuccessfulMountVolume  MountVolume.SetUp succeeded for volume "default-token-rlm2s"
  24s       ... Pulling                pulling image "busybox"
  22s       ... Pulled                 Successfully pulled image "busybox"
  22s       ... Created                Created container
  22s       ... Started                Started container
  19s       ... BackOff                Back-off restarting failed container
  19s       ... FailedSync             Error syncing pod

As you can see, Kubernetes considers this pod as not ready to serve traffic as it encountered an "error syncing pod."
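The CrashLoopBackOff status reflects the kubelet's restart back-off: each failed restart doubles the delay before the next attempt, up to a cap. A sketch of that schedule (the 10-second base and 5-minute cap are the commonly documented kubelet defaults and are assumed here, not taken from this recipe):

```python
# Sketch of the kubelet's crash-loop back-off schedule: the restart
# delay doubles after each failure and is capped.
def backoff_delays(failures, base=10, cap=300):
    """Delays (seconds) before each of the first `failures` restarts."""
    return [min(base * 2**i, cap) for i in range(failures)]

print(backoff_delays(6))  # [10, 20, 40, 80, 160, 300]
```

Because the container in the example exits immediately (and successfully), the kubelet keeps restarting it and the delays climb this schedule, which is exactly what the CrashLoopBackOff status and the "Back-off restarting failed container" event report.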

Another way to observe this is using the Kubernetes dashboard to view the deployment (Figure 1), as well as the supervised replica set and the pod (Figure 2).

Figure 1. Screenshot of deployment in error state

Figure 2. Screenshot of pod in error state

Discussion

An issue, be it a pod failing or a node behaving strangely, can have many different causes. Here are some things you’ll want to check before suspecting software bugs:

  • Is the manifest correct? Check with the Kubernetes JSON schema.

  • Does the container run standalone, locally (that is, outside of Kubernetes)?

  • Can Kubernetes reach the container registry and actually pull the container image?

  • Can the nodes talk to each other?

  • Can the nodes reach the master?

  • Is DNS available in the cluster?

  • Are there sufficient resources available on the nodes?

  • Did you restrict the container’s resource usage?

Getting a Detailed Snapshot of the Cluster State

Problem

You want to get a detailed snapshot of the overall cluster state for orientation, auditing, or troubleshooting purposes.

Solution

Use the kubectl cluster-info dump command. For example, to create a dump of the cluster state in a subdirectory cluster-state-2017-08-13, do this:

$ kubectl cluster-info dump --all-namespaces \
    --output-directory=$PWD/cluster-state-2017-08-13

$ tree ./cluster-state-2017-08-13
.
├── default
│   ├── cockroachdb-0
│   │   └── logs.txt
│   ├── cockroachdb-1
│   │   └── logs.txt
│   ├── cockroachdb-2
│   │   └── logs.txt
│   ├── daemonsets.json
│   ├── deployments.json
│   ├── events.json
│   ├── jump-1247516000-sz87w
│   │   └── logs.txt
│   ├── nginx-4217019353-462mb
│   │   └── logs.txt
│   ├── nginx-4217019353-z3g8d
│   │   └── logs.txt
│   ├── pods.json
│   ├── prom-2436944326-pr60g
│   │   └── logs.txt
│   ├── replicasets.json
│   ├── replication-controllers.json
│   └── services.json
├── kube-public
│   ├── daemonsets.json
│   ├── deployments.json
│   ├── events.json
│   ├── pods.json
│   ├── replicasets.json
│   ├── replication-controllers.json
│   └── services.json
├── kube-system
│   ├── daemonsets.json
│   ├── default-http-backend-wdfwc
│   │   └── logs.txt
│   ├── deployments.json
│   ├── events.json
│   ├── kube-addon-manager-minikube
│   │   └── logs.txt
│   ├── kube-dns-910330662-dvr9f
│   │   └── logs.txt
│   ├── kubernetes-dashboard-5pqmk
│   │   └── logs.txt
│   ├── nginx-ingress-controller-d2f2z
│   │   └── logs.txt
│   ├── pods.json
│   ├── replicasets.json
│   ├── replication-controllers.json
│   └── services.json
└── nodes.json

Adding Kubernetes Worker Nodes

Problem

You need to add a worker node to your Kubernetes cluster.

Solution

Provision a new machine in whatever way your environment requires (for example, in a bare-metal environment you might need to physically install a new server in a rack, in a public cloud setting you need to create a new VM, etc.), and then install the three components that make up a Kubernetes worker node:

kubelet

This is the node manager and supervisor for all pods, no matter if they’re controlled by the API server or running locally, such as static pods. Note that the kubelet is the final arbiter of what pods can or cannot run on a given node, and takes care of:

  • Reporting node and pod statuses to the API server.

  • Periodically executing liveness probes.

  • Mounting the pod volumes and downloading secrets.

  • Controlling the container runtime (see the following).

Container runtime

This is responsible for downloading container images and running the containers. Initially, this was hardwired to the Docker engine, but nowadays it is a pluggable system based on the Container Runtime Interface (CRI), so you can, for example, use CRI-O rather than Docker.

kube-proxy

This process dynamically configures iptables rules on the node to enable the Kubernetes service abstraction (redirecting the VIP to the endpoints, one or more pods representing the service).

The actual installation of the components depends heavily on your environment and the installation method used (cloud, kubeadm, etc.). For a list of available options, see the kubelet reference and kube-proxy reference.

Discussion

Worker nodes, unlike other Kubernetes resources such as deployments or services, are not directly created by the Kubernetes control plane but only managed by it. That means when Kubernetes creates a node, it actually only creates an object that represents the worker node. It validates the node by health checks based on the node’s field, and if the node is valid—that is, all necessary components are running—it is considered part of the cluster; otherwise, it will be ignored for any cluster activity until it becomes valid.

Draining Kubernetes Nodes for Maintenance

Problem

You need to carry out maintenance on a node—for example, to apply a security patch or upgrade the operating system.

Solution

Use the kubectl drain command. For example, to do maintenance on node 123-worker:

$ kubectl drain 123-worker

When you are ready to put the node back into service, use kubectl uncordon 123-worker, which will make the node schedulable again.

Discussion

What the kubectl drain command does is first mark the specified node as unschedulable to prevent new pods from arriving (essentially a kubectl cordon). Then it evicts the pods if the API server supports eviction. Otherwise, it will use a normal kubectl delete to delete the pods. The Kubernetes docs have a concise sequence diagram of the steps, reproduced in Figure 3.

Figure 3. Node drain sequence diagram

The kubectl drain command evicts or deletes all pods except mirror pods (which cannot be deleted through the API server). For pods supervised by a DaemonSet, drain will not proceed without using --ignore-daemonsets, and regardless it will not delete any DaemonSet-managed pods—those pods would be immediately replaced by the DaemonSet controller, which ignores unschedulable markings.
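The control flow just described can be sketched in Python. This is a deliberate simplification (pod and node names are hypothetical, and real drain goes through the eviction API and respects PodDisruptionBudgets), but it captures the cordon step and the mirror-pod and DaemonSet special cases:

```python
# Sketch of `kubectl drain`: cordon the node, then evict its pods.
# Mirror pods are skipped (they can't be deleted via the API server);
# DaemonSet-managed pods block the drain unless explicitly ignored,
# and are never deleted (their controller would recreate them anyway).
def drain(node, pods, ignore_daemonsets=False):
    node["unschedulable"] = True  # the cordon step
    evicted = []
    for pod in pods:
        if pod.get("mirror"):
            continue
        if pod.get("daemonset"):
            if not ignore_daemonsets:
                raise RuntimeError("DaemonSet-managed pod present; "
                                   "use --ignore-daemonsets")
            continue
        evicted.append(pod["name"])
    return evicted

node = {"name": "123-worker"}
pods = [
    {"name": "web-1"},
    {"name": "kube-apiserver-minikube", "mirror": True},
    {"name": "fluentd-x1", "daemonset": True},
]
print(drain(node, pods, ignore_daemonsets=True))  # ['web-1']
```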


drain waits for graceful termination, so you should not operate on this node until the kubectl drain command has completed. Note that kubectl drain $NODE --force will also evict pods not managed by an RC, RS, job, DaemonSet, or StatefulSet.

Managing etcd

Problem

You need to access etcd to back it up or verify the cluster state directly.

Solution

Get access to etcd and query it, either using curl or etcdctl. For example, in the context of Minikube (with jq installed):

$ minikube ssh
$ curl | jq .
{
  "action": "get",
  "node": {
    "key": "/registry",
    "dir": true,
    "nodes": [
      {
        "key": "/registry/persistentvolumeclaims",
        "dir": true,
        "modifiedIndex": 241330,
        "createdIndex": 241330
      },
      {
        "key": "/registry/",
        "dir": true,
        "modifiedIndex": 641,
        "createdIndex": 641
      },
...

This technique can be used in environments where etcd is used with the v2 API.

Discussion

In Kubernetes, etcd is a component of the control plane. The API server (see not available) is stateless and the only Kubernetes component that directly communicates with etcd, the distributed storage component that manages the cluster state. Essentially, etcd is a key/value store; in etcd2 the keys formed a hierarchy, but with the introduction of etcd3 this was replaced with a flat model (while maintaining backwards compatibility concerning hierarchical keys).
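The data-model change can be illustrated with a toy store (the keys below are hypothetical, shaped like the /registry keys in the query above): etcd3 keeps a single flat keyspace and answers "directory" queries as prefix range scans, rather than storing actual directory nodes as etcd2 did.

```python
# Sketch of the etcd3 flat model: no directory nodes, just keys whose
# prefixes emulate the old etcd2 hierarchy.
etcd3_flat = {
    "/registry/pods/default/nginx-1": "...",
    "/registry/pods/default/nginx-2": "...",
    "/registry/services/default/nginx": "...",
}

def list_prefix(store, prefix):
    """A 'directory listing' in etcd3 is a range scan over a key prefix."""
    return sorted(k for k in store if k.startswith(prefix))

print(list_prefix(etcd3_flat, "/registry/pods/"))
# ['/registry/pods/default/nginx-1', '/registry/pods/default/nginx-2']
```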


Up until Kubernetes 1.5.2 we used etcd2, and from then on we switched to etcd3. In Kubernetes 1.5.x, etcd3 is still used in v2 API mode, and going forward this is changing to the etcd v3 API, with v2 being deprecated soon. Though from a developer’s point of view this doesn’t have any implications, because the API server abstracts the interactions away, as an admin you want to pay attention to which etcd version is used in which API mode.

In general, it’s the responsibility of the cluster admin to manage etcd—that is, to upgrade it and make sure the data is backed up. In certain environments where the control plane is managed for you, such as in Google Kubernetes Engine, you cannot access etcd directly. This is by design, and there’s no workaround for it.


Continue reading Kubernetes recipes: Maintenance and troubleshooting.

Categories: Technology

Four short links: 25 May 2018

O'Reilly Radar - Fri, 2018/05/25 - 03:10

Bitcoin Badness, True Platform, Hardware Details, and Continuous Game of Life

  1. U.S. Criminal Probe into Bitcoin Manipulation -- also in the news: $1.2 billion of cryptocurrency stolen since 2017.
  2. Bill Gates on Platforms -- A platform is when the economic value of everybody that uses it exceeds the value of the company that creates it. Then it’s a platform. (via Stratechery)
  3. Inside the 76477 Space Invaders Sound Chip -- this is fascinating! The 76477 is primarily analog—most control signals are analog, the chip doesn't have digital control registers, and most sounds are generated from analog circuits—but about a third of the chip's area is digital logic.
  4. Smooth Life -- Conway's Game of Life on a continuous domain. See also Game of Life for Curved Surfaces and accompanying video. (via

Continue reading Four short links: 25 May 2018.

Categories: Technology

Designing our friendly robot companions isn't about the AI

O'Reilly Radar - Thu, 2018/05/24 - 13:35

Ben Brown on why messaging design will become as important as responsive design.

Continue reading Designing our friendly robot companions isn't about the AI.

Categories: Technology

So, you want to be successful in the open future?

O'Reilly Radar - Thu, 2018/05/24 - 13:00

Louise Beaumont explores the five characteristics of companies that choose to succeed.

Continue reading So, you want to be successful in the open future?.

Categories: Technology

When to KISS

O'Reilly Radar - Thu, 2018/05/24 - 13:00

Zubin Siganporia explains how the KISS principle (“Keep It Simple, Stupid”) applies to solving problems and convincing end-users to adopt data-driven solutions to their challenges.

Continue reading When to KISS.

Categories: Technology

Machine learning: Research & industry

O'Reilly Radar - Thu, 2018/05/24 - 13:00

Having worked in both research and industry, Mikio Braun shares insights into what's the same, what's different, and how deep learning might change the game.

Continue reading Machine learning: Research & industry.

Categories: Technology

The good, the bad, and the internet?

O'Reilly Radar - Thu, 2018/05/24 - 13:00

Martha Lane Fox considers the unintended consequences of technology.

Continue reading The good, the bad, and the internet?.

Categories: Technology

Out of the lab and into real life

O'Reilly Radar - Thu, 2018/05/24 - 13:00

Christine Foster discusses how today’s academic papers turn into tomorrow’s data science.

Continue reading Out of the lab and into real life.

Categories: Technology

What to expect at the JupyterCon 2018 Business Summit

O'Reilly Radar - Thu, 2018/05/24 - 07:18

One of our goals is to bring Jupyter’s enterprise use cases and practices into one place.

We've seen a dramatic shift in Jupyter’s deployment over the past two years: starting with mostly individual use, then moving to enterprise production deployments at scale. Even though enterprise use cases for Jupyter tend to share common themes, there hasn't been a forum for comparing approaches. So, we’re excited to be expanding the enterprise-related content at the Business Summit at JupyterCon 2018 in New York City in August. The track will open with Enterprise usage of Jupyter: The business case and best practices for leveraging open source, by Project Jupyter co-lead Brian Granger. His talk will cover training aspects, which get applied later in the summit during the discussion groups:

  • the business case for adopting open source in large organizations
  • how Jupyter is evolving to address enterprise usage cases
  • developing infrastructure tooling based on open standards
  • how open source projects work from a governance perspective
  • best practices for enterprise to engage with open source (what to do, what not to do)
  • engaging with Jupyter through strategic initiatives: Jupyter white papers, roadmap planning, etc.

The track continues with speakers throughout the two days presenting enterprise use cases (in most cases, initial-year results) from Capital One, DoD, Amazon AWS, Booz Allen Hamilton, GE, Teradata, PayPal, Two Sigma, and Capsule8. Enterprise organizations are leveraging Jupyter to build out their collaborative data infrastructures internally. While those use cases leverage open source tooling, such as JupyterHub, once the software gets deployed, the organizational challenges immediately rise to the fore. These represent pain points that enterprise organizations share: collaboration, discovery, the need for reproducible work, security, data privacy, compliance, ethics, and data access patterns—none of which is one size fits all.

We’ve encountered several large use cases within DoD and finance, for example, so one of our goals for the Business Summit at JupyterCon 2018 is to bring those use cases and practices into one place. Many opportunities exist for collaboration, sharing best practices, and supporting crossover between government and industry. Themes being explored through enterprise case studies—presented by the practitioners—include:

We had many more excellent session proposals than could be included in the program; these will be presented as “poster sessions” in the concourse for the Business Summit to facilitate discussion during breaks.

The first day’s track will conclude with a roundtable discussion: The Current Environment: Compliance, ethics, ML model interpretation, GDPR, etc., with participation from IBM, Capital One, DoD, Amazon AWS, and Oracle. This roundtable is intended to summarize common themes across the different use cases being presented, plus provide time for extended Q&A. The audience will have opportunities to submit questions in advance to the moderator. Note that the Q&A is intended as dialog among practitioners: we ask that members of the press hold their questions for other opportunities outside of the Business Summit.

At the end of the second day, the summit concludes with unconference-style break-out sessions, intended as a “two-way street” for enterprise stakeholders to give input to Project Jupyter directly about features needed, roadmap priorities, and who will partner on specific projects.

Participants who attend the Business Summit will receive a certificate of participation for “Enterprise Engagement in Open Source.” Note that we are exploring options for providing CEUs (continuing education credits) for Business Summit participation—to align more closely with government agency accounting requirements.

Diversity, fundraising, and registration discounts

We believe that true innovation depends on hearing from, and listening to, people with a variety of perspectives. Please read our Diversity Statement and learn more about our Diversity & Inclusion scholarship program. We had several recipients in 2017, and are looking to nearly double that number in 2018.

While JupyterCon registration is open, we’re raising funds for PyLadies, an international mentorship group with a focus on helping more women become active participants and leaders in the Python open source community. We ask that you consider joining us in supporting this worthy organization by making a modest donation when you sign up. O’Reilly will match those donations at the end of the conference. We wouldn’t normally preselect a financial contribution by default, but we hope this underscores how important we think it is to support diversity. Also, we welcome your thoughts about this or other successful diversity efforts you’ve encountered. Send suggestions, comments, and feedback to

Along with the diversity scholarships, there are several other categories for discount rate eligibility in JupyterCon registrations:

  • Government/Academic/Nonprofit: You are eligible for this rate if you are a full-time employee of the government, an academic institution, or a 501(c)(3). Please register with your .gov, .edu, or .org email address.
  • Students: For the student rate, please provide proof of full-time student status and register with your .edu email address if possible. If you are a college-level student based in the U.S. and need financial support to attend JupyterCon 2018, aid is available from Project Jupyter. Please apply by June 16, 2018.
  • Jupyter Volunteer: You are eligible for this rate if you are contributing to the Jupyter planetary ecosystem as a volunteer via the code base or other contribution. Please include your GitHub ID or other relevant links.
  • Alumni/Safari: You are eligible for this rate if you have attended a previous O’Reilly conference or have a current Safari membership.
  • Group discounts: 20% off per person if you register 3-5 people from one company; use TEAM in the discount code field. We offer 25% off for teams of 6-9 and 30% off for 10 or more; please contact for details.

Or in general, save 20% on conference registration for having read this article all the way to the end! Use PJ20 in the discount code field.

Continue reading What to expect at the JupyterCon 2018 Business Summit.

Categories: Technology

The evolution of data science, data engineering, and AI

O'Reilly Radar - Thu, 2018/05/24 - 07:10

The O’Reilly Data Show Podcast: A special episode to mark the 100th episode.

This episode of the Data Show marks our 100th episode. This podcast stemmed from video interviews conducted at O’Reilly’s 2014 Foo Camp. We had a collection of friends who were key members of the data science and big data communities on hand, and we decided to record short conversations with them. We originally conceived of those initial conversations as the basis of a regular series of video interviews. The logistics of studio interviews proved too complicated, but those Foo Camp conversations got us thinking about starting a podcast, and the Data Show was born.

To mark this milestone, my colleague Paco Nathan, co-chair of Jupytercon, turned the tables on me and asked me questions about previous Data Show episodes. In particular, we examined the evolution of key topics covered in this podcast: data science and machine learning, data engineering and architecture, AI, and the impact of each of these areas on businesses and companies. I’m proud of how this show has reached so many people across the world, and I’m looking forward to sharing more conversations in the future.

Continue reading The evolution of data science, data engineering, and AI.

Categories: Technology

Winner of the Top Innovator Award at AI NY 2018: temi

O'Reilly Radar - Thu, 2018/05/24 - 04:00

The personal robot temi refactors robotic human behaviors we encounter in the “iPhone Slump,” and moves those back to actual robots.

When I was a child, I viewed as a child—viewed a lot of science fiction, that is. We were promised a future with amazing robots. As a child, I viewed that as a completely awesome possibility. Fully embodied robots with which we could talk, reason, argue, and possibly even trade jokes. Robots sophisticated enough to understand emotion. How cool would that be? Rosie in The Jetsons. Class B–9-M–3 General Utility Non-Theorizing Environmental Control Robot from Lost In Space. Maschinenmensch from Metropolis. Bishop 341-B in Aliens. Replicants!

That was a long time ago. Along the way, we got some amazing science fiction-ish tech marvels. For example, Steve Jobs’ “god phone”—which reeducated more than 2 billion people worldwide in how to communicate effectively, or something. I only met Steve once, and he’s been gone for several years now. Even so, I encounter his ghost every day: myriads of people slumped over, absorbed in swiping their smartphones, unknowingly mimicking Jobs’ edgy indifference to the world around them—exercising their primary means of “communicating” with others. Yeah, I do it now, too.

What about the AI we’d been promised by futuristic stories? That raptured off to ephemeral clouds. Machine learning, disembodied. More closely resembling the “bodiless exultation of cyberspace” described in William Gibson’s Neuromancer novel. Heartless and sometimes horribly biased algorithms, carefully cordoned behind layers of firewalls. Secretly curated as “disruptive accelerators of synergies” by product managers hellbent on their drive toward GA. Digital innovation hubs of collaborative social multidisciplinary ecosystem working groups! Gobbledygook exhaust from a strange new species of corporation hellbent on racing toward trillion dollar valuations. Nothing even vaguely close to the cuddly likeableness of Rosie the robot, zooming brightly on her castor wheels with antennae blaring.

That’s probably why I felt so captivated by the AI NY '18 keynote “Autonomy and human-AI interaction,” by Professor Manuela Veloso at CMU. Her team has developed CoBots—short for Collaborative Robots—which are capable of seeing, planning, and moving. One catch: based on CMU’s concept of “symbiotic autonomy,” those robots need help from humans. Often. For example, any time a CoBot needs to move between floors in the building, someone must help call an elevator. Because, so far, CoBots lack arms. Sarah Connor can sleep soundly. From the CoBot researchers:

Our CoBot robots follow a novel symbiotic autonomy, in which the robots are aware of their perceptual, physical, and reasoning limitations and proactively ask for help from humans, for example for object manipulation actions.

Through the CoBots, Veloso’s team is researching how humans can interact with AI. CoBots understand their own limitations, expressing need and vulnerability—two words about human-like characteristics rarely overheard in Silicon Valley, outside of VC strategy meetings. How refreshing! CMU’s outcomes may put a ding in the universe.

Over in the expo hall of AI NY ’18—or rather, all about the expo hall—I encountered another handsomely affective embodiment: temi the personal robot. Winner of the Top Innovator Award at AI NY ’18. Judges from our program committee evaluated the AI start-ups participating in this award contest based on:

  • overall market potential
  • value proposition: disruptive potential in industry and society
  • stage of development and time to market

I watched carefully as Danny Isserles, head of temi HQ in NYC, summoned temi the robot. Most immediately, the head tracking stood out: temi tilted its “face” upward to track Danny. Practically speaking, temi adjusted its tablet screen so that Danny’s face would be centered in the video camera perspective—but that seemed exactly like temi was glancing up to look at Danny. “Because temi cares,” was my immediate impression. Rosie would’ve been proud.

From that point, Danny began putting temi through its paces: making a video call, transcribing a conversation, playing music, suggesting restaurants in a particular area, etc.

At first, some people might only notice the parts: roughly speaking, part Alexa, part iPad, part high-end boombox, all rolling atop a Roomba and wired together with some software. However, that misses the bigger picture: temi refactors those aforementioned robotic human behaviors that we encounter in the “iPhone Slump,” moving those back to actual robots. When you get home from work, ditch your smartphone atop temi’s charging deck, then talk with people close to you, who aren’t currently nearby, via video chat while you do other things—fix dinner, play guitar, throw pottery, whatever. Hell, go play guitar with them. Because temi can follow you, keeping you in the video chat talking with loved ones while you’re living your life and not stuck slumping over some smartphone. This is particularly engaging for families separated by distance. Kiddos can talk with their parents or grandparents more naturally in video.

While the company behind temi has spent nine years developing robots for the DoD (e.g., med-evac bots), the origin story for temi happened closer to home. The company founder was visiting an elderly relative who wanted to serve him tea—walking slowly, hands shaking, focused on the serving tray and hot liquid so much that she nearly stumbled and fell. It turns out that older folks fall in their own homes most often while trying to carry things. Unfortunately, I had a parent in the ER recently for that very reason. Now we have a personal robot, temi, that can carry things, follow people, and do much more.

There’s a $1,400 retail price for temi, which seems remarkable given that it costs so much less than the laptop on which I’m typing. The designers of temi decided that, except for its tablet, they needed to build every component themselves—including 16 sensors for lidar, laser distance, multiple cameras, etc. Their software is based on Android apps, with an SDK in the works for release soon so that people can customize temi, plug in other software services, and make extensions for fully embodied connections.

My one chance encounter with Steve Jobs was when he’d been blocking the only path to the restroom at our mutually favorite Palo Alto coffeehouse. I asked politely, somewhat urgently. He glanced up from his smartphone distractedly and mumbled “Oh, sorry” then moved aside. Pretty sure that temi would’ve moved graciously without even needing to be asked. And served my chai latte at exactly 165 degrees.

h/t @wu_ming_80, @randerzander, @FloWi

Continue reading Winner of the Top Innovator Award at AI NY 2018: temi.

Categories: Technology

7 questions to ask before you launch an enterprise blockchain project

O'Reilly Radar - Thu, 2018/05/24 - 04:00

Successful projects will think seriously about what blockchains mean, and how to use them effectively.

We’re only at the beginning of the blockchain story, not the middle or the end. There’s no shortage of activity and ferment. If the Bitcoin blockchain is the first-generation proof-of-concept, and the Ethereum blockchain is the second, we’re now starting to see third-generation blockchains. Those blockchains include projects like Hyperledger, Cardano, and EOS. They focus on the obvious shortcomings of the existing blockchains, most notably transaction throughput and a user interface that can fairly be described as “savage.”

Enterprise blockchain projects will only be successful if they take advantage of a blockchain’s unique properties. I’ve written elsewhere that a blockchain is a distributed ledger, shared by untrusted participants, with strong guarantees about accuracy and consistency. With that in mind, here are some questions to ask if you’re considering an enterprise blockchain project.
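As background for these questions, the tamper-evidence that distinguishes a ledger from an ordinary database comes from hash-chaining blocks together. The following is a deliberately minimal Python sketch of that single property (no consensus, no network, no mining):

```python
import hashlib
import json

def block_hash(block):
    # Serialize deterministically, then hash. Because each block's hash
    # covers the previous block's hash, editing any historical entry
    # invalidates every block that follows it.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, data):
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"data": data, "prev": prev}
    block["hash"] = block_hash(block)  # hashed before the "hash" key exists
    chain.append(block)
    return chain

def verify(chain):
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev:
            return False
        if block["hash"] != block_hash({"data": block["data"], "prev": block["prev"]}):
            return False
        prev = block["hash"]
    return True
```

Changing any historical entry breaks the recomputed hash and, with it, every later block's `prev` link; that tamper-evidence is the property worth paying for, and if you don't need it, a distributed database probably suffices.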

What exactly are you trying to accomplish?

Look carefully at your requirements, and ask yourself whether you really need a blockchain. Do you need the additional guarantees about agreement that a blockchain provides, or would a distributed database suffice?

How much do you trust your partners?

Untrusted business partners point you in the direction of blockchains. And they may also point toward proof-of-work or proof-of-stake, rather than simpler (and faster) permissioned blockchains.

How public or open do you need to be?

Who needs to participate in your blockchain? There is a continuum between a public blockchain like Bitcoin or Ethereum and the smallest, most carefully controlled, private blockchain. I can imagine special-purpose public blockchains for applications like power microgrids (or, for that matter, locavore farming). I can imagine blockchains in financial services that only serve a small number of partners, and are essentially private. A blockchain that only serves a single organization is probably a cargo cult. It may look like a blockchain, but it doesn't add any value.

What are your data integration issues?

The biggest problem facing enterprise blockchains might not be an agreement protocol, but integrating all the legacy data formats and structures that blockchain participants use. Health care blockchains are a good example of this problem. There are hundreds of medical records formats in use, and any medical blockchain will have to do something to reconcile those formats. Any blockchain that crosses enterprise boundaries (and even blockchains that live within corporate boundaries) will need to deal with data integration, and solving those problems may well be harder than building the blockchain itself.
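To see why the integration work can dwarf the ledger work, consider a sketch of the adapter layer a health care blockchain would need before any record could be written to shared storage. The formats and field names below are entirely hypothetical, chosen only to illustrate the shape of the problem:

```python
# Hypothetical: two participants submit patient records in different
# shapes; each must be mapped to one canonical schema before anything
# touches the shared ledger. Every additional legacy format means
# another adapter, and real medical formats number in the hundreds.
def from_format_a(rec):
    return {"patient_id": rec["pid"],
            "dob": rec["birth_date"],
            "code": rec["icd10"]}

def from_format_b(rec):
    return {"patient_id": rec["patient"]["id"],
            "dob": rec["patient"]["dob"],
            "code": rec["diagnosis_code"]}

ADAPTERS = {"a": from_format_a, "b": from_format_b}

def normalize(source, rec):
    """Map a source-specific record into the canonical schema."""
    return ADAPTERS[source](rec)
```

The blockchain only sees the canonical output of `normalize`; building and maintaining the adapters is where most of the engineering effort lands.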

If you need “miners,” who will they be, and how will you compensate them?

On most current blockchains, including Bitcoin and Ethereum, “miners” do the job of validating the blockchain’s consistency and adding blocks. They don’t do this work for free. ICOs (initial coin offerings) are all the rage, and it’s easy to imagine paying miners with cryptocurrency (after all, that’s what Bitcoin and Ethereum do), but it’s hard to imagine established enterprises issuing their own currencies. Are there alternate forms of compensation (for example, data or CryptoKitties) that might work?

What are your performance requirements, and how will you meet them?

The Bitcoin and Ethereum blockchains currently handle about a dozen transactions per second. For many enterprise applications, that is too slow, by several orders of magnitude. You need to think about what kind of performance you need, and how you’ll get it. There are a number of possible solutions, including the Bitcoin Lightning Network; replacing the compute-intensive “proof of work” that miners perform to add blocks; and permissioned blockchains, such as Hyperledger’s Fabric.
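For intuition about why proof of work is the throughput bottleneck, here is a toy Python version. (Real Bitcoin mining hashes an 80-byte block header with double SHA-256 against a numeric target; this simplified sketch just demands leading zero hex digits, but the economics are the same: each extra digit of difficulty multiplies the expected work by 16.)

```python
import hashlib

def mine(data, difficulty):
    """Search for a nonce such that sha256(data + nonce) begins with
    `difficulty` zero hex digits. The search is brute force by design:
    that cost is what secures the chain, and what limits throughput."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1
```

At difficulty 2 this returns almost instantly; a few digits more and the search time explodes, which is exactly the cost that permissioned blockchains and alternative agreement protocols try to avoid.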

What are the legal ramifications?

Recently, I’ve seen several people ask whether blockchain applications can comply with GDPR and other regulations. That is certainly uncharted territory. It’s very difficult to see how a “right to be forgotten” could be implemented on a ledger that doesn’t allow previous entries to be deleted. I don’t think the answer is that blockchains can’t comply; the answer will depend on exactly what data you’re storing in the blockchain, how that data is used, and how private or public your blockchain is.

Many cryptocurrency advocates are critical of enterprise blockchains, and these questions are largely drawn from those criticisms. Those criticisms don’t mean that enterprise blockchains don’t work, but they do raise issues that need to be addressed. You don’t want to build a blockchain just to discover that you’ve really created a very slow distributed database, or that nobody wants to verify your blockchain’s consistency because you haven’t thought through compensation.

This is a great time to be experimenting with blockchain technologies. Aside from Bitcoin itself, we haven’t seen many projects emerge from the tire-kicking stage yet. I’m confident that, in the next few years, we will see many enterprise blockchains in production. The ones that survive won’t be the “me too” projects; they’ll be the projects that have thought seriously about what blockchains mean, and how to use them effectively.


Learning Path: Introduction to Blockchain Applications — Dr. Jonathan Reichental helps you fully understand the scope of blockchain technology and how it can be used across a variety of applications and industries.

Continue reading 7 questions to ask before you launch an enterprise blockchain project.

Categories: Technology


Subscribe to LuftHans aggregator