PLUG Security meeting topic for Aug 15th

PLUG - Thu, 2019/08/08 - 09:45

Gavin Klondike: Machine Learning for Security Analysts

Description:
Today, over a quarter of security products for detection have some form of machine learning built in. However, “machine learning” is nothing more than a mysterious buzzword for many security analysts. In order to properly deploy and manage these products, analysts will need to understand how the machine learning components operate to ensure they are working efficiently. In this talk, we will dive headfirst into building and training our own machine learning models using the 7-step machine learning process.
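The 7-step process the description mentions is often summarized as: gather data, prepare it, choose a model, train, evaluate, tune, and predict. As a rough, dependency-free illustration of the train/evaluate/predict steps (not the talk's material), here is a toy nearest-centroid classifier on invented "traffic" data:

```python
# Toy walk-through of the core ML steps (train, evaluate, predict)
# using a nearest-centroid classifier. All data and labels are invented.

def train(samples):
    """Compute one centroid per label from (features, label) pairs."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the label of the nearest centroid (squared Euclidean)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

def evaluate(centroids, samples):
    """Fraction of labeled samples the model classifies correctly."""
    hits = sum(predict(centroids, f) == l for f, l in samples)
    return hits / len(samples)

# Synthetic "benign" vs. "malicious" feature vectors.
train_data = [([1.0, 1.0], "benign"), ([1.2, 0.9], "benign"),
              ([9.0, 8.0], "malicious"), ([8.5, 9.2], "malicious")]
model = train(train_data)
```

Real security products use far richer features and models, but the gather/train/evaluate/predict skeleton is the same.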

Biography:
Gavin is a senior consultant and researcher with a passion for network security, both attack and defense. Through that passion, he runs NetSec Explained, a blog and YouTube channel that covers intermediate- and advanced-level network security topics in an easy-to-understand way. His work has given him the opportunity to be published in industry magazines and to speak at conferences such as Defcon and CactusCon. Currently, he is researching ways to address the cybersecurity skills gap by using machine learning to augment the capabilities of current security analysts.

Topics for Aug 8th's meeting

PLUG - Mon, 2019/08/05 - 09:28
Dhruva Lokegaonkar: Shell Scripting for everyone

Description:
An introduction to Shell scripting.
- The basics of stringing together various commands
- Pipes and Parallelization
- Conditionals and Loops
- How to use these things to create useful scripts, like creating basic website generators, background switches, keyboard hotkeys, etc.
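The building blocks listed above can be sketched in a few lines of POSIX shell; the commands and file names are illustrative, not the talk's examples:

```shell
#!/bin/sh
# Pipes: string commands together; count unique words, most frequent first.
printf 'apple\nbanana\napple\n' | sort | uniq -c | sort -rn

# Parallelization: fan work out to up to 2 jobs at once (GNU/BSD xargs -P).
printf '1\n2\n3\n' | xargs -P2 -n1 echo

# Conditional: branch on a command's exit status.
if grep -q '^root:' /etc/passwd; then
    echo "root user present"
fi

# Loop: process every .txt file in the current directory, if any exist.
for f in *.txt; do
    [ -e "$f" ] || continue    # skip the unexpanded '*.txt' when none match
    echo "$f has $(wc -l < "$f") lines"
done
```

The same pattern of small tools glued by pipes and loops scales up to the website generators and hotkey scripts the talk mentions.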

Biography:
Dhruva is an ASU computer science freshman. He's been using Linux for the past 5 years. He's been involved with the Indian Linux Users Group Bombay (ILUG-BOM) in its mission to introduce Linux to high school and college students by making it a default in the Indian curriculum.


Austin Godber: Stream Processing with Python and Kafka

Description:
A quick intro to Kafka, a distributed log system, and how to interact with it using Python.
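One common way to talk to Kafka from Python is the third-party kafka-python package. The sketch below is a hedged illustration, not the talk's code: the broker address, topic name, and event fields are assumptions, and the network section is gated behind a flag because it needs a running broker:

```python
import json

def encode_event(event: dict) -> bytes:
    """Serialize an event dict to UTF-8 JSON bytes for a Kafka message."""
    return json.dumps(event, sort_keys=True).encode("utf-8")

RUN_NETWORK_DEMO = False  # set True with a broker on localhost:9092
                          # and `pip install kafka-python`

if RUN_NETWORK_DEMO:
    from kafka import KafkaConsumer, KafkaProducer

    # Produce one event to the (assumed) "events" topic.
    producer = KafkaProducer(bootstrap_servers="localhost:9092")
    producer.send("events", encode_event({"user": "alice", "action": "login"}))
    producer.flush()

    # Consume the topic from the beginning, stopping after 5s of no messages.
    consumer = KafkaConsumer(
        "events",
        bootstrap_servers="localhost:9092",
        auto_offset_reset="earliest",
        consumer_timeout_ms=5000,
    )
    for message in consumer:
        print(json.loads(message.value))
```

Because Kafka is a distributed log, the consumer can replay the topic from any stored offset; `auto_offset_reset="earliest"` is what makes this demo re-read old messages.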

PLUG Security meeting on 7/18

PLUG - Thu, 2019/07/11 - 20:00
At this month's PLUG Security meeting:
Donald McCarthy: passiveDNS for Fun and Profit (part 1)

For more information:
http://phxlinux.org/index.php/meetings/20-plug-security.html

Description:
If your DNS infrastructure has a bad day, your network has a bad day. If your DNS infrastructure has a good day, something else is bound to go wrong. PassiveDNS generally won't help you fix either.

PassiveDNS is a historical look at observed DNS queries over time. It is akin to the Internet Archive's Wayback Machine, but for DNS zones. It is a valuable operations and security tool, and one not easily replaced by another type of data.

In this presentation we will cover exactly what passiveDNS is and isn't, passiveDNS architecture, some security use cases, and, if time allows, some live demonstration.

In part 2 of the presentation (in a later month), I will demonstrate some passiveDNS tooling and provide more in-depth practical knowledge to turn theoretical use cases into automated assistance for a SOC or NOC.
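The historical-DNS idea described above can be sketched as a toy in-memory store that tracks first-seen and last-seen times for every observed (name, answer) pair. The class, field names, and data are my own invention, not any real passiveDNS product:

```python
from collections import defaultdict

class PassiveDNSStore:
    """Toy passive DNS: record when each (name, rdata) pair was observed."""

    def __init__(self):
        # (qname, rdata) -> {"first_seen": ts, "last_seen": ts, "count": n}
        self._records = {}
        self._by_name = defaultdict(set)

    def observe(self, qname, rdata, timestamp):
        """Fold one observed DNS answer into the historical record."""
        key = (qname, rdata)
        rec = self._records.get(key)
        if rec is None:
            self._records[key] = {"first_seen": timestamp,
                                  "last_seen": timestamp, "count": 1}
        else:
            rec["first_seen"] = min(rec["first_seen"], timestamp)
            rec["last_seen"] = max(rec["last_seen"], timestamp)
            rec["count"] += 1
        self._by_name[qname].add(rdata)

    def history(self, qname):
        """All rdata ever observed for a name, with their time ranges."""
        return {rdata: self._records[(qname, rdata)]
                for rdata in self._by_name[qname]}
```

A security use case falls out directly: a domain that suddenly resolves to an address never seen before shows up as a new entry with a very recent first_seen, which is exactly the kind of signal an analyst would investigate.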

About Donald:
Donald "Mac" McCarthy is a 15-year veteran of the IT industry, with the last 8 years focused on InfoSec. He has worked on a variety of systems, ranging from cash registers to supercomputers. It was while serving as a systems administrator for a scientific computing cluster that he discovered his passion for using Linux for highly distributed, complex tasks. His current focus is using Linux with open source technologies like Kafka and Elasticsearch to build tooling for security analysts and network operations. He is a proud veteran of the United States Army and recently relocated from Atlanta to the East Valley.

Four short links: 9 July 2019

O'Reilly Radar - Tue, 2019/07/09 - 04:40

Future of Work, GRANDstack, Hilarious Law Review Article, and The Platform Excuse

  1. At Work, Expertise Is Falling Out of Favor (The Atlantic) -- an interesting longform exploration of "the future of work" (aka automation, generalists, lifelong learning) in the context of the Navy's Littoral Combat Ship experiment. So much applicability to the business world ("experiment" becomes "must succeed flagship project" when CEO changes; chaos is opportunity to learn; etc.).
  2. GRANDstack -- GraphQL, React, Apollo, and Neo4j.
  3. The Most Important Law Review Article You’ll Never Read: A Hilarious (in the Footnotes) Yet Serious (in the Text) Discussion of Law Reviews and Law Professors (SSRN) -- the best discussion of foolish academic publishing measures you'll read today.
  4. The "Platform" Excuse is Dying (The Atlantic) -- The platform defense used to shut down the why questions: Why should YouTube host conspiracy content? Why should Facebook host provably false information? Facebook, YouTube, and their kin keep trying to answer: "We’re platforms!" But activists and legislators are now saying, "So what?"

Continue reading Four short links: 9 July 2019.

Categories: Technology

The circle of fairness

O'Reilly Radar - Tue, 2019/07/09 - 04:00

We shouldn't ask our AI tools to be fair; instead, we should ask them to be less unfair and be willing to iterate until we see improvement.

Fairness isn't so much about "being fair" as it is about "becoming less unfair." Fairness isn't an absolute; we all have our own (and highly biased) notions of fairness. On some level, our inner child is always saying: "But that's not fair." We know humans are biased, and it's only in our wildest fantasies that we believe judges and other officials who administer justice somehow manage to escape the human condition. Given that, what role does software have to play in improving our lot? Can a bad algorithm be better than a flawed human? And if so, where does that lead us in our quest for justice and fairness?

While we talk about AI being inscrutable, in reality it's humans who are inscrutable. In Discrimination in the Age of Algorithms, Jon Kleinberg, et al., argue that algorithms, while unfair, can at least be audited rigorously. Humans can't. If we ask a human judge, bank officer, or job interviewer why they made a particular decision, we'll probably get an answer, but we'll never know whether that answer reflects the real reason behind the decision. People often don’t know why they make a decision, and even when someone attempts an honest explanation, we never know whether there are underlying biases and prejudices they aren't aware of. Everybody thinks they're "fair," and few people will admit to prejudice. With an algorithm, you can at least audit the data that was used to train the algorithm and test the results the algorithm gives you. A male manager will rarely tell you he doesn't like working with women, or he can't trust people of color. Algorithms don't have those underlying and unacknowledged agendas; the agendas are in the training data, hiding in plain sight if we only search for them. We have the tools we need to make AI transparent–not explainable, perhaps, but we can expose bias, whether it’s hiding in the training data or the algorithm itself.

Auditing can reveal when an algorithm has reached its limits. Julia Dressel and Hany Farid, studying the COMPAS software for recommending bail and prison sentences, found that it was no more accurate than randomly chosen people at predicting recidivism. Even more striking, they built a simple classifier that matched COMPAS’s accuracy using only two features–the defendant’s age and number of prior convictions–not the 137 features that COMPAS uses. Their interpretation was that there are limits to prediction, beyond which providing a richer set of features doesn’t add any signal. Commenting on this result, Sharad Goel offers a different interpretation, that “judges in the real world have access to far more information than the volunteers...including witness testimonies, statements from attorneys, and more. Paradoxically, that informational overload can lead to worse results by allowing human biases to kick in.” In this interpretation, data overload can enable unfairness in humans. With an algorithm, it’s possible to audit the data and limit the number of features if that’s what it takes to improve accuracy. You can’t do that with humans; you can’t limit their exposure to extraneous data and experiences that may bias them.
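For illustration only, a two-feature classifier of the kind Dressel and Farid describe can be as simple as a logistic score over age and prior convictions. The weights, threshold, and data below are invented for the sketch; they are not fitted to COMPAS or any real dataset:

```python
import math

def predict_recidivism(age, priors, w_age=-0.05, w_priors=0.4, bias=0.5):
    """Toy two-feature logistic score: younger age and more priors -> higher
    predicted risk. Weights are illustrative, not fitted to real data."""
    score = bias + w_age * age + w_priors * priors
    probability = 1.0 / (1.0 + math.exp(-score))
    return probability >= 0.5

# Synthetic (age, priors, reoffended) rows -- invented for illustration.
rows = [(22, 4, True), (45, 0, False), (30, 3, True), (60, 1, False)]
accuracy = sum(predict_recidivism(a, p) == y for a, p, y in rows) / len(rows)
```

The point of the study was that a model this small matched a 137-feature commercial system, which is what suggests the extra features carry little additional signal.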

Understanding the biases that are present in training data isn't easy or simple. As Kleinberg points out, properly auditing a model would require collecting data about protected classes; it's difficult to tell whether a model shows racial or gender bias without data about race and gender, and we frequently avoid collecting that data. In another paper, Kleinberg and his co-authors show there are many ways to define fairness that are mathematically incompatible with each other. But understanding model bias is possible, and if that's possible, it should also be possible to build AI systems that are at least as fair as humans, if not more fair.

This process is similar to the 19th-century concept of the "hermeneutic circle." A literary text is inseparable from its culture; we can't understand the text without understanding the culture, nor can we understand the culture without understanding the texts it produced. A model is inseparable from the data that was used to train it; but analyzing the output of the model can help us to understand the data, which in turn enables us to better understand the behavior of the model. To philosophers of the 19th century, the hermeneutic circle implies gradually spiraling inward: better historical understanding of the culture that produces the text enables a better understanding of the texts the culture produced, which in turn enables further progress in understanding the culture, and so on. We approach understanding asymptotically.

I’m bringing up this bit of 19th-century intellectual history because the hermeneutic circle is, if nothing else, an attempt to describe a non-trivial iterative process for answering difficult questions. It’s a more subtle and self-reflective process than “fail forward fast” or even gradient descent. And awareness of the process is important. AI won’t bring us an epiphany in which our tools suddenly set aside years of biased and prejudiced history. That’s what we thought when we “moved fast and broke things”: we thought we could non-critically invent ourselves out of a host of social ills. That didn’t happen. If we can get on a path toward doing better, we are doing well. And that path certainly entails a more complex understanding of how to make progress. We shouldn't ask our AI tools to be fair; instead, we should ask them to be less unfair and be willing to iterate until we see improvement. If we can make progress through several possibly painful iterations, we approach the center.

The hermeneutic circle also reminds us that understanding comes from looking at both the particular and the general: the text and the context. That is particularly important when we’re dealing with data and with AI. It is very easy for human subjects to become abstractions–rows in a database that are assigned a score, like the probability of committing a crime. When we don’t resist that temptation, when we allow ourselves to be ruled by abstractions rather than remembering our abstractions represent people, we will never be “fair”: we’ve lost track of what fair means. It’s impossible to be fair to a table in a database. Fairness is always about individuals.

We're right to be skeptical. First, European thought has been plagued by the notion that European culture is the goal of human history. “Move fast and break things” is just another version of that delusion: we’re smart, we’re technocrats, we’re the culmination of history, of course we’ll get it right. If our understanding of "fairness" degenerates into an affirmation of what we already are, we are in trouble. It's dangerous to put too much faith in our ability to perform audits and develop metrics: it's easy to game the system, and it's easy to trick yourself into believing you've achieved something you haven't. I’m encouraged, though, by the idea that the hermeneutic circle is a way of getting things right by being slightly less wrong. It’s a framework that demands humility and dialog. For that dialog to work, it must take into account the present and the past, the individual and the collective data, the disenfranchised and the franchised.

Second, we have to avoid turning the process of fairness into a game: a circle where you're endlessly chasing your tail. It's easy to celebrate the process of circling while forgetting that the goal isn't publishing papers and doing experiments. It’s easy to say “we’re not making any progress, and we probably can’t make any progress, but at least our salaries are being paid and we’re doing interesting work.” It’s easy to play the circle game when it can be proven that different definitions of fairness are incompatible, or when contemplating the enormous number of dimensions in which one might want to be fair. And we will have to admit that fairness is not an absolute concept that’s graven on stone tablets, but that it is fundamentally situational.

It was easy for the humanistic project of interpretation to lose itself in the circle game because it never had tools like audits and metrics. It could never measure whether it was getting closer to its goal, and when you can't measure your progress, it's easy to get lost. But we can measure disenfranchisement, and we can ensure that marginalized people are included in our conversations, so we understand what being "fair" means to people who are outside the system. As Cathy O'Neil has suggested, we can perform audits of black-box systems. We can understand that fairness will always be elusive and aspirational, and use that knowledge to build appeal and redress into our systems. We can't let the ideal of perfect fairness become an excuse for inaction. We can make incremental progress toward building a world that's better for all of us.
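One minimal form such an audit can take is comparing a black-box model's positive-outcome rates across groups (a demographic-parity check). Everything below, including the group labels and the gap metric, is an illustrative sketch rather than a complete audit methodology:

```python
def audit_positive_rates(decisions):
    """Positive-outcome rate per group for (group, decision) pairs.
    A large gap between groups flags the model for closer review."""
    totals, positives = {}, {}
    for group, decision in decisions:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + (1 if decision else 0)
    return {g: positives[g] / totals[g] for g in totals}

def max_gap(rates):
    """Largest pairwise difference in positive rates across groups."""
    return max(rates.values()) - min(rates.values())
```

A check like this needs no access to the model's internals, only to its decisions, which is what makes it applicable to black-box systems; it measures only one of the many incompatible definitions of fairness discussed above.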

We'll never finish that project, in part because the issues we're tracking will always be changing, and our old problems will mutate to plague us in new ways. We’ll never be done because we will have to deal with messy questions like what “fair” means in any given context, and those contexts will change constantly. But we can make progress: having taken one step, we'll be in a position to see the next.

Continue reading The circle of fairness.

Categories: Technology

PLUG meeting on Jul 11th

PLUG - Mon, 2019/07/08 - 23:01
We'll have two presenters this month, with a distribution theme.

Artemii Kropachev: Red Hat Enterprise Linux 8 Beta 1 Overview

Description:
Learn about the first major release of Red Hat Enterprise Linux in over four years. The latest release features unprecedented ease of deployment, migration, and management, making it easier to upgrade existing customers and attract new ones.
Red Hat Enterprise Linux 8 gives organizations a stable, security-focused, and consistent foundation across hybrid cloud deployments—and the tools they need to deliver applications and workloads faster with less effort.

About Artemii:
Worldwide IT expert and international consultant with over 20 years of high-level IT experience and expertise. I have trained, guided, and consulted hundreds of architects, engineers, developers, and IT experts around the world since 2001. My architect-level experience covers DC, cloud, DevOps, and NFV solutions built on top of Red Hat and other open source technologies. I am one of the most highly certified Red Hat specialists in the world.


der.hans: Hey Buster! Debian 10 released

Description:
Debian 10 brings with it many ch-ch-changes.

Reproducible Builds, Wayland, AppArmor, nftables, CUPS.

10 hardware architectures, 59,000 packages, 28,939 source packages, 11,610,055 source files, and 76 languages.

Stretch updates.

Get or upgrade to Debian 10 now.

Coming soon on Blu-ray.

About der.hans:
der.hans is a Free Software, technology, and entrepreneurial veteran. He is a repeat author for Linux Journal; his article about online privacy and security using a password manager was the cover article for the January 2017 issue.

He's chairman of the Phoenix Linux User Group (PLUG), BoF organizer for the Southern California Linux Expo (SCaLE), and founder of the Free Software Stammtisch and Stammtisch Job Nights.

He often presents at large community-led conferences (SCaLE, SeaGL, LFNW, Tübix) and many local groups.

https://floss.social/@FLOX_advocate
https://mastodon.social/@lufthans

Highlights from the O'Reilly Artificial Intelligence Conference in Beijing 2019

O'Reilly Radar - Mon, 2019/07/08 - 08:51

Experts explore the future of hiring, AI breakthroughs, embedded machine learning, and more.

Experts from across the AI world came together for the O'Reilly Artificial Intelligence Conference in Beijing. Below you'll find links to highlights from the event.

The future of hiring and the talent market with AI

Maria Zheng examines AI and its impact on people’s jobs, quality of work, and overall business outcomes.

The future of machine learning is tiny

Pete Warden digs into why embedded machine learning is so important, how to implement it on existing chips, and some of the new use cases it will unlock.

AI and systems at RISELab

Ion Stoica outlines a few projects at the intersection of AI and systems that UC Berkeley's RISELab is developing.

Top AI breakthroughs you need to know

Abigail Hing Wen discusses some of the most exciting recent breakthroughs in AI and robotics.

Data orchestration for AI, big data, and cloud

Haoyuan Li offers an overview of a data orchestration layer that provides a unified data access and caching layer for single cloud, hybrid, and multicloud deployments.

AI and retail

Mikio Braun takes a look at Zalando and the retail industry to explore how AI is redefining the way ecommerce sites interact with customers.

Why do we say AI should be cloud native?

Yangqing Jia reviews industry trends supporting the argument that AI should be cloud native.

Designing computer hardware for artificial intelligence

Michael James examines the fundamental drivers of computer technology and surveys the landscape of AI hardware solutions.

Toward learned algorithms, data structures, and systems

Tim Kraska outlines ways to build learned algorithms and data structures to achieve “instance optimality” and unprecedented performance for a wide range of applications.

Continue reading Highlights from the O'Reilly Artificial Intelligence Conference in Beijing 2019.

Categories: Technology

AI and retail

O'Reilly Radar - Mon, 2019/07/08 - 08:50

Mikio Braun takes a look at Zalando and the retail industry to explore how AI is redefining the way ecommerce sites interact with customers.

Continue reading AI and retail.

Categories: Technology

The future of hiring and the talent market with AI

O'Reilly Radar - Mon, 2019/07/08 - 08:50

Maria Zheng examines AI and its impact on people’s jobs, quality of work, and overall business outcomes.

Continue reading The future of hiring and the talent market with AI.

Categories: Technology

Top AI breakthroughs you need to know

O'Reilly Radar - Mon, 2019/07/08 - 08:50

Abigail Hing Wen discusses some of the most exciting recent breakthroughs in AI and robotics.

Continue reading Top AI breakthroughs you need to know.

Categories: Technology

Data orchestration for AI, big data, and cloud

O'Reilly Radar - Mon, 2019/07/08 - 08:50

Haoyuan Li offers an overview of a data orchestration layer that provides a unified data access and caching layer for single cloud, hybrid, and multicloud deployments.

Continue reading Data orchestration for AI, big data, and cloud.

Categories: Technology

Toward learned algorithms, data structures, and systems

O'Reilly Radar - Mon, 2019/07/08 - 08:50

Tim Kraska outlines ways to build learned algorithms and data structures to achieve “instance optimality” and unprecedented performance for a wide range of applications.

Continue reading Toward learned algorithms, data structures, and systems.

Categories: Technology

The future of machine learning is tiny

O'Reilly Radar - Mon, 2019/07/08 - 08:50

Pete Warden digs into why embedded machine learning is so important, how to implement it on existing chips, and some of the new use cases it will unlock.

Continue reading The future of machine learning is tiny.

Categories: Technology

AI and systems at RISELab

O'Reilly Radar - Mon, 2019/07/08 - 08:50

Ion Stoica outlines a few projects at the intersection of AI and systems that UC Berkeley's RISELab is developing.

Continue reading AI and systems at RISELab.

Categories: Technology

Designing computer hardware for artificial intelligence

O'Reilly Radar - Mon, 2019/07/08 - 08:50

Michael James examines the fundamental drivers of computer technology and surveys the landscape of AI hardware solutions.

Continue reading Designing computer hardware for artificial intelligence.

Categories: Technology

Four short links: 8 July 2019

O'Reilly Radar - Mon, 2019/07/08 - 03:50

Algorithmic Governance, DevOps Assessment, Retro Language, and Open Source Satellite

  1. Algorithmic Governance and Political Legitimacy (American Affairs Journal) -- Mechanized judgment resembles liberal proceduralism. It relies on our habit of deference to rules, and our suspicion of visible, personified authority. But its effect is to erode precisely those procedural liberties that are the great accomplishment of the liberal tradition, and to place authority beyond scrutiny. I mean “authority” in the broadest sense, including our interactions with outsized commercial entities that play a quasi-governmental role in our lives. That is the first problem. A second problem is that decisions made by an algorithm are often not explainable, even by those who wrote the algorithm, and for that reason cannot win rational assent. This is the more fundamental problem posed by mechanized decision-making, as it touches on the basis of political legitimacy in any liberal regime.
  2. The 27-Factor Assessment Model for DevOps -- The factors are the cross-product of current best practices for three dimensions (people, process, and technology) with nine pillars (leadership, culture, app development/design, continuous integration, continuous testing, infrastructure on demand, continuous monitoring, continuous security, continuous delivery/deployment).
  3. Millfork -- a middle-level programming language targeting 6502- and Z80-based microcomputers and home consoles.
  4. FossaSat-1 (Hackaday) -- FossaSat-1 will provide free and open source IoT communications for the globe using inexpensive LoRa modules, where anyone will be able to communicate with a satellite using modules found online for under 5€ and basic wire mono-pole antennas.

Continue reading Four short links: 8 July 2019.

Categories: Technology

Four short links: 5 July 2019

O'Reilly Radar - Fri, 2019/07/05 - 06:10

Online Not All Bad, Emotional Space, Ted Chiang, Thread Summaries

  1. How a Video Game Community Filled My Nephew's Final Days with Joy (Guardian) -- you had a rough week. Treat yourself to this heart-warming story of people going the extra mile for someone.
  2. Self-Report Captures 27 Distinct Categories of Emotion Bridged by Continuous Gradients -- Although reported emotional experiences are represented within a semantic space best captured by categorical labels, the boundaries between categories of emotion are fuzzy rather than discrete. By analyzing the distribution of reported emotional states, we uncover gradients of emotion—from anxiety to fear to horror to disgust, calmness to aesthetic appreciation to awe, and others—that correspond to smooth variation in affective dimensions such as valence and dominance. Reported emotional states occupy a complex, high-dimensional categorical space. In addition, our library of videos and an interactive map of the emotional states they elicit are made available to advance the science of emotion. (via Dan Hon)
  3. Sci-Fi Author Ted Chiang on Our Relationship to Technology, Capitalism, and the Threat of Extinction (GQ) -- Right now I think we’re beginning to see a correction to the wild techno-boosterism that Silicon Valley has been selling us for the last couple decades, and that’s a good thing as far as I’m concerned. I wish we didn’t swing back and forth from the extremes of Pollyannaish optimism to dystopian pessimism; I’d prefer it if we had a more measured response throughout, but that doesn’t appear to be in our nature. +1 to this. I don't like the way we have spent 20 years imagining dystopias and then building them.
  4. Wikum -- Summarize large discussion threads.

Continue reading Four short links: 5 July 2019.

Categories: Technology

Four short links: 4 July 2019

O'Reilly Radar - Thu, 2019/07/04 - 06:50

Debugging AI, Serverless Foundations, YouTube Bans, and Pathological UI

  1. TensorWatch -- open source from Microsoft, a debugging and visualization tool designed for data science, deep learning, and reinforcement learning.
  2. Formal Foundations of Serverless Computing -- the serverless computing abstraction exposes several low-level operational details that make it hard for programmers to write and reason about their code. This paper sheds light on this problem.
  3. YouTube Bans Videos Showing Hacking and Phishing (Kody) -- We made a video about launching fireworks over Wi-Fi for the 4th of July only to find out @YouTube gave us a strike because we teach about hacking, so we can't upload it. YouTube now bans: "Instructional hacking and phishing: Showing users how to bypass secure computer systems."
  4. User Inyerface -- an exercise in frustration.

Continue reading Four short links: 4 July 2019.

Categories: Technology

Tools for machine learning development

O'Reilly Radar - Wed, 2019/07/03 - 06:35

The O'Reilly Data Show: Ben Lorica chats with Jeff Meyerson of Software Engineering Daily about data engineering, data architecture and infrastructure, and machine learning.

In this week's episode of the Data Show, we're featuring an interview Data Show host Ben Lorica participated in for the Software Engineering Daily Podcast, where he was interviewed by Jeff Meyerson. Their conversation mainly centered around data engineering, data architecture and infrastructure, and machine learning (ML).

Continue reading Tools for machine learning development.

Categories: Technology

New live online training courses

O'Reilly Radar - Wed, 2019/07/03 - 04:20

Get hands-on training in TensorFlow, AI applications, critical thinking, Python, data engineering, and many other topics.

Learn new topics and refine your skills with more than 150 new live online training courses we opened up for July and August on the O'Reilly online learning platform.

AI and machine learning

Getting Started with TensorFlow.js, July 23

Building Intelligent Analytics Through Time Series Data, July 31

Natural Language Processing (NLP) from Scratch, August 5

Cloud Migration Strategy: Optimizing Future Operations with AI, August 7

Intermediate Natural Language Processing (NLP), August 12

Machine Learning for Business Analytics: A Deep Dive into Data with Python, August 19

Inside unsupervised learning: Semisupervised learning using autoencoders, August 20

TensorFlow 2.0 Essentials – What's New, August 23

A Practical Introduction to Machine Learning, August 26

Artificial Intelligence: Real-world Applications, August 26

Inside Unsupervised Learning: Generative Models and Recommender Systems, August 27

Hands-On Algorithmic Trading with Python, September 3

Artificial Intelligence: AI for Business, September 4

TensorFlow Extended: Data Validation and Transform, September 11

Blockchain

Introducing Blockchain, August 2

Business

Building Your People Network, July 8

Getting Unstuck, August 5

How to Choose Your Cloud Provider, August 7

Spotlight on Data: Data Pipelines and Power Imbalances—3 Cautionary Tales with Catherine D’Ignazio and Lauren Klein, August 19

Salary Negotiation Fundamentals, August 20

Fundamentals of Cognitive Biases, August 20

Empathy at Work, August 20

Developing Your Coaching Skills, August 21

Applying Critical Thinking, August 22

Building Your People Network, August 27

60 Minutes to Designing a Better PowerPoint Slide, August 27

60 Minutes to a Better Prototype, August 27

Introduction to Critical Thinking, August 27

Spotlight on Learning from Failure: Fixing HealthCare.gov with Sha Hwang, August 27

Managing Your Manager, August 28

Scrum Master: Good to Great, August 29

Being a Successful Team Member, September 4

Fundamentals of Learning: Learn faster and better using neuroscience, September 5

Leadership Communication Skills for Managers, September 10

Getting S.M.A.R.T about Goals, September 10

Spotlight on Innovation: Enabling Growth Through Disruption with Scott Anthony, September 11

Writing User Stories, September 11

Data science and data tools

Applied Probability Theory from Scratch, July 17

Interactive Visualization Approaches, July 25

Apache Hadoop, Spark and Big Data Foundations, August 1

Visualizing Software Architecture with the C4 Model, August 2

Data Engineering for Data Scientists, August 6

Analyzing and Visualizing Data with Microsoft Power BI, August 9

Hands-on Introduction to Apache Hadoop and Spark Programming, August 12-13

Scalable Data Science with Apache Hadoop and Spark, August 19

IoT Fundamentals, August 20-21

Algorithmic Risk Management in Trading and Investing, August 23

Business Data Analytics Using Python, August 26

Python Data Science Full Throttle with Paul Deitel: Introductory Artificial Intelligence (AI), Big Data and Cloud Case Studies, August 26

Real-time Data Foundations: Flink, August 27

Managing Enterprise Data Strategies with Hadoop, Spark, and Kafka, August 29

Design and product management

Introduction to UI & UX design, August 28

Programming

Kotlin for Android, July 11-12

SQL for Any IT Professional, July 16

Design Patterns in Java, July 29-30

Discovering Modern Java, August 2

Essentials of JVM Threading, August 2

Getting Started with Pandas, August 6

Programming with Data: Foundations of Python and Pandas, August 12

Beginner’s Guide to Writing AWS Lambda Functions in Python, August 12

Solving Java Memory Leaks, August 12

Introduction to Python Programming, August 12

Working with Dataclasses in Python 3.7, August 15

Reactive Programming with Java Completable Futures, August 15

Getting Started with Python's Pytest, August 19

Visualization in Python with Matplotlib, August 19

Python Full Throttle with Paul Deitel: A One-Day, Fast-Paced, Code-Intensive Python, August 19

Oracle Java SE Programmer I Crash Course: Pass the 1Z0-815 or 1Z0-808 Exams, August 19-21

Linux Troubleshooting: Advanced Linux Techniques, August 20

Introduction to the Bash Shell, August 21

Getting Started with Node.js, August 21

Applied Cryptography with Python, August 22

Mentoring Technologists, August 22

CSS Layout Fundamentals: From Floats to Flexbox and CSS Grid, August 22

React Hooks in Action, August 23

Getting Started with Java: From Core Concepts to Real Code in 4 Hours, August 23

Bash Shell Scripting in 4 Hours, August 23

Continuous Delivery and Tooling in Go, August 26

Mastering SELinux, August 26

Functional Programming in Java, August 26-27

Scalable Concurrency with the Java Executor Framework, August 29

SOLID Principles of Object-Oriented and Agile Design, August 30

Fraud Analytics using Python, September 3

Getting Started with Spring and Spring Boot, September 3-4

Linear Algebra with Python: Essential Math for Data Science, September 5

Python-Powered Excel, September 9

Design Patterns Boot Camp, September 9-10

Secure JavaScript with Node.js, September 12

Security

Introduction to Digital Forensics and Incident Response (DFIR), July 31

Cisco Security Certification Crash Course, August 16

Security Operation Center (SOC) Best Practices, August 19

Expert Transport Layer Security (TLS), August 20

CompTIA A+ Core 1 (220-1001) Certification Crash Course, August 21-22

Introduction to Ethical Hacking and Penetration Testing, August 22-23

CISSP Crash Course, August 27-28

CISSP Certification Practice Questions and Exam Strategies, August 28

Defensive Cybersecurity Fundamentals, August 29

Cybersecurity Offensive and Defensive Techniques in 3 Hours, August 30

Azure Security Fundamentals, September 4

Systems engineering and operations

DevOps on Google Cloud Platform (GCP), July 8

Getting Started with Microsoft Azure, July 12

Getting Started with Amazon Web Services (AWS), July 24-25

Ansible for Managing Network Devices, August 1

Software Architecture for Developers, August 1

Practical Software Design from Problem to Solution, August 1

Facebook Libra, August 1

Introducing Infrastructure as Code with Terraform, August 1

AWS CloudFormation Deep Dive, August 5-6

Rethinking REST: A hands-on guide to GraphQL and queryable APIs, August 6

Julia 1.0 Essentials, August 6

Getting Started with Serverless Architectures on Azure, August 8

Deploying Container-Based Microservices on AWS, August 12-13

AWS Access Management, August 13

Exam AZ-103: Microsoft Azure Administrator Crash Course, August 15-16

Architecture for Continuous Delivery, August 19

Getting Started with OpenStack, August 19

AWS Certified Big Data - Specialty Crash Course, August 19-20

Google Cloud Platform – Professional Cloud Developer Crash Course, August 19-20

CompTIA Network+ N10-007 Crash Course, August 19-21

Shaping and Communicating Architectural Decisions, August 20

AWS Certified Cloud Practitioner Exam Crash Course, August 20-21

Software Architecture Foundations: Characteristics and Tradeoffs, August 21

Google Cloud Platform Professional Cloud Architect Certification Crash Course, August 21-22

Red Hat RHEL 8 New Features, August 22

Introduction to Google Cloud Platform, August 22-23

Istio on Kubernetes: Enter the Service Mesh, August 27

AWS Monitoring Strategies, August 27-28

Red Hat Certified System Administrator (RHCSA) Crash Course, August 27-30

Azure Architecture: Best Practices, August 28

Web Performance in Practice, August 28

AWS Account Setup Best Practices, August 29

Getting Started with Amazon SageMaker on AWS, August 29

Jenkins 2: Beyond the Basics, September 3

Comparing Service-based Architectures, September 3

Microservice Collaboration, September 3

Introduction to Docker Compose, September 3

Chaos Engineering: Planning and Running Your First Game Day, September 3

Next-level Git: Master Your Workflow, September 4

Introduction to Knative, September 5

Reactive Spring and Spring Boot, September 9

Developing DApps with Ethereum, September 9

Building a Deployment Pipeline with Jenkins 2, September 9-10

Building Data APIs with GraphQL, September 11

Creating React Applications with GraphQL , September 12

Jenkins 2: Up and Running, September 12

Microservices Caching Strategies, September 12

Chaos Engineering: Planning, Designing, and Running Automated Chaos Experiments, September 12

Google Cloud Platform Security Fundamentals, September 12

Understanding AWS Cloud Compute Option, September 12-13

Google Cloud Certified Associate Cloud Engineer Crash Course, September 12-13

Continue reading New live online training courses.

Categories: Technology
