All Training Courses
Search and filter results below
This is an introduction to the exciting new field of quantum computing, including programming actual quantum computers in the cloud. Quantum computing promises to revolutionise cryptography, machine learning, cyber security, weather forecasting and a host of other mathematical and high-performance computing fields. The practical component includes writing quantum programs and executing them both on simulators and on real quantum hardware in the cloud.
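To give a flavour of the practical component, the core idea — a qubit as a pair of complex amplitudes transformed by gates — can be sketched in a few lines of plain Python. This is a toy illustration only, not any particular quantum-programming framework, and the state and gate shown are illustrative assumptions:

```python
import math

# Toy single-qubit simulator: a qubit state is a pair of complex
# amplitudes (a, b) for the basis states |0> and |1>.

def hadamard(state):
    """Apply the Hadamard gate, which maps a basis state to an
    equal superposition of |0> and |1>."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def measure_probabilities(state):
    """The probability of observing 0 or 1 is the squared
    magnitude of the corresponding amplitude."""
    a, b = state
    return (abs(a) ** 2, abs(b) ** 2)

# Start in |0>, apply H: the result is a 50/50 superposition.
state = hadamard((1 + 0j, 0 + 0j))
p0, p1 = measure_probabilities(state)
```

Running the same program against a cloud quantum backend replaces the simulated amplitudes with repeated physical measurements, but the programming model is the same.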
With the advent of automation, humans’ role has become to do what computers cannot. Many more white-collar workers—perhaps all of them—will end up “working with data” to some extent. This course for managers and workers without a strong quantitative background introduces a range of skills and applications related to critical thinking in such areas as forecasting, population measurement, set theory and logic, causal impact and attribution, scientific reasoning and the danger of cognitive biases. There are no prerequisites beyond high-school mathematics; this course has been designed to be approachable for everyone.
Our leading course has transformed the machine-learning and data-science practice of the many managers, sponsors, key stakeholders, entrepreneurs and beginning data-science practitioners who have attended it. An intuitive, hands-on introduction to data science and machine learning, the training focuses on central concepts and key skills, leaving the trainee with a deep understanding of the foundations of data science and even some of the more advanced tools used in the field. The course does not involve coding or require any prior coding knowledge or experience.
This course goes deeper into the tidyverse family of packages, with a focus on advanced data handling, as well as advanced data structures such as list columns in tibbles, and their application to model management. Another key topic is advanced functional programming with the purrr package, and advanced use of the pipe operator. Optional topics may include dplyr on databases, and use of rmarkdown and Rstudio notebooks.
With big data expert and author Jeffrey Aven. The second module in the “Big Data Development Using Apache Spark” series, this course provides the knowledge needed to develop real-time, event-driven processing applications using Apache Spark. It covers using Spark with NoSQL systems and popular messaging platforms like Apache Kafka and Amazon Kinesis. It covers the Spark streaming architecture in depth, and uses practical hands-on exercises to reinforce the use of transformations and output operations, as well as more advanced stream-processing patterns.
This course is an introduction to the highly celebrated area of Neural Networks, popularised as “deep learning” and “AI”.
The course will cover the key concepts underlying neural network technology, as well as the unique capabilities of a number of advanced deep learning technologies, including convolutional neural networks for image recognition, recurrent neural networks for time-series and text modelling, and new artificial intelligence techniques including Generative Adversarial Networks and Reinforcement Learning. Practical exercises will present these methods in some of the most popular deep learning packages available in Python, including Keras and TensorFlow.
Trainees are expected to be familiar with the basics of machine learning from the introductory course, as well as with the Python language.
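The key concept underlying all of these architectures — the artificial neuron, a weighted sum passed through a non-linear activation — can be sketched in plain Python. This is an illustrative toy with made-up weights, not the Keras API used in the course:

```python
import math

def sigmoid(x):
    """Sigmoid activation: squashes any real number into (0, 1)."""
    return 1 / (1 + math.exp(-x))

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum of the inputs
    plus a bias, passed through the activation function."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(total)

# Hypothetical inputs and weights, for illustration only.
out = neuron([1.0, 0.5], [0.4, -0.2], 0.1)
```

Deep networks stack many layers of such neurons and learn the weights from data; frameworks such as Keras automate both the stacking and the training.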
Python is a high-level, general-purpose language used by a thriving community of millions. Data-science teams often use it in their production environments and analysis pipelines, and it’s the tool of choice for elite data-mining competition winners and deep-learning innovators. This course provides a foundation for using Python in exploratory data analysis and visualisation, and as a stepping stone to machine learning.
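As a taste of the exploratory data analysis the course covers, a numeric sample can be summarised with nothing but the Python standard library. The sample data here is hypothetical:

```python
import statistics

# A hypothetical numeric sample, e.g. response times in seconds.
sample = [12, 15, 11, 19, 14, 14, 30, 13]

# A first-pass exploratory summary: size, centre and spread.
summary = {
    "n": len(sample),
    "mean": statistics.mean(sample),
    "median": statistics.median(sample),
    "stdev": statistics.stdev(sample),
}
```

Note that the mean (16) sits above the median (14) — a quick hint that the sample is skewed by the outlying value 30, exactly the kind of observation exploratory analysis is meant to surface.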
With big data expert and author Jeffrey Aven. The third module in the “Big Data Development Using Apache Spark” series, this course provides the practical knowledge needed to perform statistical, machine learning and graph analysis operations at scale using Apache Spark. It enables data scientists and statisticians with experience in other frameworks to extend their knowledge to the Spark runtime environment with its specific APIs and libraries designed to implement machine learning and statistical analysis in a distributed and scalable processing environment.
This class builds on the introductory Python class. It covers advanced use and customisation of Jupyter notebooks, as well as configuring multiple environments and kernels.
The NumPy package is introduced for working with arrays and matrices, and Pandas data analysis and manipulation methods are covered in more depth, including working with time-series data.
Data exploration and advanced visualisations are taught using the Plotly and Seaborn libraries.
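The NumPy material above centres on vectorised operations over arrays. A minimal sketch, using made-up data, of the kind of axis-wise aggregation the class introduces:

```python
import numpy as np

# A hypothetical 2x3 matrix of measurements.
matrix = np.array([[1.0, 2.0, 3.0],
                   [4.0, 5.0, 6.0]])

# Aggregations along an axis: axis=0 collapses rows (per-column
# result), axis=1 collapses columns (per-row result).
column_means = matrix.mean(axis=0)
row_sums = matrix.sum(axis=1)
```

The same axis convention carries over to Pandas, where the equivalent operations run over a DataFrame’s labelled rows and columns.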
With big data expert and author Jeffrey Aven. The first module in the “Big Data Development Using Apache Spark” series, this course provides a detailed overview of the Spark runtime and application architecture, processing patterns, functional programming using Python, fundamental API concepts and basic programming skills, with deep dives into additional constructs including broadcast variables, accumulators, and storage and lineage options. Attendees will come to understand the Spark framework and runtime architecture, learn the fundamentals of programming for Spark, gain mastery of basic transformations, actions and operations, and be prepared for advanced topics in Spark, including streaming and machine learning.
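The transformation-then-action pattern at the heart of Spark programming can be illustrated outside Spark itself with plain Python lazy iterators. This is a conceptual analogy only — not the PySpark API — with hypothetical data:

```python
from functools import reduce

# "Dataset": in Spark this would be an RDD or DataFrame.
data = range(1, 11)

# "Transformations": like Spark's map and filter, these generator
# expressions are lazy — they describe work but compute nothing yet.
squared = (x * x for x in data)
evens = (x for x in squared if x % 2 == 0)

# "Action": like Spark's reduce or collect, this forces the whole
# pipeline to evaluate and returns a result to the caller.
total = reduce(lambda a, b: a + b, evens)
```

In Spark the same lazy structure is what lets the runtime build a lineage graph, optimise the plan, and distribute the work across a cluster before anything executes.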