Age and Earnings
Credit Wizard
Bacterial Growth
ATM Locations
Ban Users
Bank Branches
Birthday Cards
Cheapest Product
Class Grades
Credit Score
Department Report
Employee Manager
Free Throws
Hospital Patients
Index Performance
Menu Items
Merge Stock Index
Movies Live
Restaurant Menu
SMS Messages
Student Activities
Student Max Score
Youngest Child
Median Height
Clean CSV
Distribution Fitting
Cubic Approximation
Student Rankings
Average Salary
Welfare Organization
Auto Show
Movie Genres
Manager Sales
SQL is the dominant technology for accessing application data, and the database layer is increasingly where performance bottlenecks appear as applications scale. Given its dominance, SQL is a crucial skill for all engineers.
The delete statement is used to delete records in a table and is one of the four basic CRUD functions (create, read, update, and delete) required for working with any persistent storage.
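A minimal sketch of DELETE using Python's built-in sqlite3 module; the `users` table and its columns are made up for illustration:

```python
import sqlite3

# In-memory database with a hypothetical "users" table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, banned INTEGER)")
conn.executemany("INSERT INTO users (name, banned) VALUES (?, ?)",
                 [("alice", 0), ("bob", 1), ("carol", 1)])

# DELETE removes every row matching the WHERE clause.
conn.execute("DELETE FROM users WHERE banned = 1")

remaining = [row[0] for row in conn.execute("SELECT name FROM users")]
```

Without a WHERE clause, DELETE removes every row in the table, so the filter deserves careful review before running.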
Subqueries are commonly used in database interactions, making it important for a programmer to be skilled at writing them.
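A small sketch of a scalar subquery, run through sqlite3 with a hypothetical `products` table: the inner SELECT computes a value that the outer query filters on.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT, price REAL)")
conn.executemany("INSERT INTO products VALUES (?, ?)",
                 [("pen", 1.5), ("book", 12.0), ("lamp", 30.0)])

# The subquery computes the minimum price; the outer query uses its result.
cheapest = conn.execute("""
    SELECT name FROM products
    WHERE price = (SELECT MIN(price) FROM products)
""").fetchone()[0]
```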
Conditional statements are a feature of most programming and query languages. They allow the programmer to control what computations are carried out based on a Boolean condition.
A database view is a result set defined by a stored query, and the view itself can be queried like a table. As a fundamental and widely used database construct, it's useful for candidates to understand how and when views should be used.
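A brief sketch of a view via sqlite3, using a made-up `orders` table: the view stores the query, not the data, so it is re-evaluated on every read.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, amount REAL)")
conn.executemany("INSERT INTO orders (customer, amount) VALUES (?, ?)",
                 [("alice", 10.0), ("alice", 5.0), ("bob", 7.5)])

# A view stores the query, not the data; it is re-evaluated on each read.
conn.execute("""
    CREATE VIEW customer_totals AS
    SELECT customer, SUM(amount) AS total
    FROM orders
    GROUP BY customer
""")

totals = dict(conn.execute("SELECT customer, total FROM customer_totals"))
```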
The UPDATE statement is used to modify the existing records in a table and is one of the most used operations for working with the database.
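A minimal UPDATE sketch through sqlite3, with a hypothetical `employees` table; only rows matched by the WHERE clause are modified.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, salary INTEGER)")
conn.executemany("INSERT INTO employees (name, salary) VALUES (?, ?)",
                 [("alice", 50000), ("bob", 60000)])

# UPDATE modifies only the rows matched by the WHERE clause.
conn.execute("UPDATE employees SET salary = salary + 5000 WHERE name = 'alice'")

salaries = dict(conn.execute("SELECT name, salary FROM employees"))
```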
When you need to discover the information hidden in vast amounts of data, or make smarter decisions to deliver better products, data scientists hold the key to the answers you need.
Linear regression is one of the most frequently used methods for data analysis due to its simplicity and applicability to a wide variety of problems.
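A sketch of simple (one-feature) linear regression via the ordinary least squares formulas, on made-up toy data that lies exactly on y = 2x + 1:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b with a single feature."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # Slope: covariance of x and y divided by variance of x.
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b

# Toy data generated from y = 2x + 1.
a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
```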
Poisson distribution is a discrete probability distribution that expresses the probability of a given number of events occurring within a fixed interval of time and/or space, if these events occur with a known average rate and independently of the time since the last event. As one of the most widely used distributions, it is important for all Data Scientists to be familiar with it.
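The Poisson probability mass function can be computed directly from its definition, P(X = k) = λᵏ e^(−λ) / k!; a short sketch using only the standard library:

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    # P(X = k) = lam^k * e^(-lam) / k!
    return lam ** k * exp(-lam) / factorial(k)

# With an average rate of 2 events per interval,
# the probability of observing 0 events is e^-2.
p_zero = poisson_pmf(0, 2.0)
```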
Probability theory is the foundation of most statistical and machine-learning algorithms.
A decision tree is a decision support tool that uses a tree-like model of decisions and their possible consequences. It is usually a tool for displaying an algorithm that contains only conditional control statements and is a must-know for every data scientist.
Binomial distribution is the discrete probability distribution of the number of successes in a sequence of independent yes/no experiments, each of which yields success with a given probability.
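The binomial probability mass function follows directly from the definition, P(X = k) = C(n, k) pᵏ (1 − p)^(n−k); a small stdlib-only sketch:

```python
from math import comb

def binomial_pmf(k, n, p):
    # P(X = k) = C(n, k) * p^k * (1 - p)^(n - k)
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

# Probability of exactly 2 heads in 4 fair coin flips: C(4,2) * 0.5^4 = 0.375.
p_two = binomial_pmf(2, 4, 0.5)
```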
An important concept, p-value is defined as the probability of obtaining a result equal to or "more extreme" than what was actually observed, when the null hypothesis is true.
Curve fitting is the process of constructing a curve, or mathematical function, that has the best fit to a series of data points. This is basic knowledge for every data scientist.
Constraints define rules and relationships that apply to a dataset. A constraint may take many forms, such as x ≤ 5 in a programming language or a NOT NULL constraint in a SQL table definition.
The CREATE TABLE statement is used to create a new table in a database. It is an essential command when creating a new database.
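A sketch of CREATE TABLE with column constraints, run through sqlite3 on a made-up `products` table; violating a constraint raises an error instead of storing bad data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Column types plus NOT NULL / UNIQUE / CHECK constraints enforce rules at write time.
conn.execute("""
    CREATE TABLE products (
        id    INTEGER PRIMARY KEY,
        name  TEXT NOT NULL UNIQUE,
        price REAL NOT NULL CHECK (price >= 0)
    )
""")
conn.execute("INSERT INTO products (name, price) VALUES ('pen', 1.5)")

# A NULL name violates NOT NULL, so SQLite raises IntegrityError.
try:
    conn.execute("INSERT INTO products (name, price) VALUES (NULL, 2.0)")
    violated = False
except sqlite3.IntegrityError:
    violated = True
```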
A database schema defines how data is stored in a database. An SQL database uses a schema to define tables consisting of rows and columns that use fixed data types to store data. Formalizing how data is stored is the first step towards building an application or service.
The SELECT statement is used to select data from a database. It is the most used SQL command.
The CREATE INDEX statement is used to create indexes for tables. Indexes are used to retrieve data from the database more quickly. They are very important for making performant queries.
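A sketch of CREATE INDEX via sqlite3 on a hypothetical `events` table; EXPLAIN QUERY PLAN lets us confirm the planner actually uses the new index for a filtered query.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, user_id INTEGER, kind TEXT)")
conn.executemany("INSERT INTO events (user_id, kind) VALUES (?, ?)",
                 [(i % 100, "click") for i in range(1000)])

# Without an index, filtering on user_id scans the whole table.
conn.execute("CREATE INDEX idx_events_user ON events (user_id)")

# EXPLAIN QUERY PLAN shows whether SQLite searches via the index.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM events WHERE user_id = 42"
).fetchone()
uses_index = "idx_events_user" in plan[3]
```

Indexes speed up reads at the cost of extra work on every write, so they are best added for columns that queries actually filter or join on.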
The performance of an application or system matters: both its responsiveness and its scalability depend on how performant it is, and each algorithm and query can have a large positive or negative effect on the whole system.
The ALTER TABLE statement is used to add, delete, or modify columns and constraints in an existing table. Alter table statements are important for all programmers who have to modify existing schemas.
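A minimal ALTER TABLE sketch via sqlite3, adding a column to a made-up `users` table without recreating it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

# ALTER TABLE changes the existing schema in place.
conn.execute("ALTER TABLE users ADD COLUMN email TEXT")

# PRAGMA table_info lists each column; the name is the second field.
columns = [row[1] for row in conn.execute("PRAGMA table_info(users)")]
```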
Grouping is the process of separating items into different groups. Developers and data scientists often need to group data so they can examine each group separately.
Pandas is a library for the Python programming language that’s used for data manipulation and analysis. It is an essential library for any data scientist who works with Python.
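A tiny pandas sketch on a made-up salary dataset, grouping rows and aggregating each group (assumes pandas is installed):

```python
import pandas as pd

# A small, invented dataset for illustration.
df = pd.DataFrame({
    "department": ["sales", "sales", "it", "it"],
    "salary": [50000, 60000, 70000, 80000],
})

# Group rows by department and take the mean salary of each group.
avg = df.groupby("department")["salary"].mean()
```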
Every programmer should be familiar with data-sorting methods, as sorting is very common in data-analysis processes.
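A short sketch of key-based sorting in Python; `sorted` is stable, so records that compare equal keep their input order.

```python
# Invented (name, age) records for illustration.
people = [("bob", 29), ("alice", 34), ("carol", 29)]

by_age = sorted(people, key=lambda p: p[1])                  # ascending age
by_age_desc = sorted(people, key=lambda p: p[1], reverse=True)  # descending age
```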
An aggregate function is typically used in database queries to combine values from multiple rows into a single summary value. A good programmer should be skilled at using aggregation functions when interacting with databases.
NumPy is a library for the Python programming language, adding support for large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions to operate on these arrays. NumPy is an essential library for any data scientist who works with Python.
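A minimal NumPy sketch: operations on an array apply element-wise and replace explicit Python loops (assumes NumPy is installed).

```python
import numpy as np

# A 2-D array (matrix); operations vectorize across all elements.
m = np.array([[1.0, 2.0], [3.0, 4.0]])

col_means = m.mean(axis=0)   # mean of each column
doubled = m * 2              # element-wise scalar multiplication
```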
Knowing how to order data is a common task for every programmer.
Classification is the problem of identifying to which set of categories a new observation belongs, on the basis of a training set of data containing observations whose category membership is known. As one of the common tasks in machine learning, it’s important for all data scientists.
An important Data Science algorithm, the k-nearest neighbors algorithm is a non-parametric method used for classification and regression. In both cases, the input consists of the k closest training examples in the feature space. The output depends on whether k-NN is used for classification or regression.
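A compact k-NN classification sketch in pure Python on invented 2-D points: find the k closest training examples and take a majority vote over their labels.

```python
from collections import Counter
from math import dist

def knn_classify(points, labels, query, k):
    """Classify `query` by majority vote among its k nearest training points."""
    nearest = sorted(zip(points, labels), key=lambda pl: dist(pl[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Two clusters of made-up training points.
points = [(0, 0), (0, 1), (5, 5), (6, 5)]
labels = ["a", "a", "b", "b"]
pred = knn_classify(points, labels, (1, 0), k=3)
```

For regression, the same neighbors would be averaged instead of voted on.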
Machine learning is the scientific study of algorithms and statistical models that computer systems use to perform a specific task without using explicit instructions, relying on patterns and inference instead. It’s important for all tasks where it’s infeasible to construct conventional algorithms, which is often the case in Data Science.
Scikit-learn (or sklearn) is a machine learning library for the Python programming language. Every data scientist who works with Python and tasks such as classification, regression, and clustering algorithms should know how to use it.
Even though most database insert queries are simple, a good programmer should know how to handle more complicated situations like batch inserts.
LEFT JOIN is one of the ways to merge rows from two tables. We use it when we also want to show rows that exist in one table, but don't exist in the other table.
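A LEFT JOIN sketch via sqlite3 with made-up `customers` and `orders` tables: every customer appears in the result, and the order columns are NULL where no match exists.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL)")
conn.executemany("INSERT INTO customers (id, name) VALUES (?, ?)",
                 [(1, "alice"), (2, "bob")])
conn.execute("INSERT INTO orders (customer_id, amount) VALUES (1, 9.99)")

# LEFT JOIN keeps every customer; bob has no orders, so his amount is NULL.
rows = conn.execute("""
    SELECT c.name, o.amount
    FROM customers c
    LEFT JOIN orders o ON o.customer_id = c.id
""").fetchall()
```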
Everyone makes mistakes. A good programmer should be able to find and fix a bug in their or someone else's code.
Data aggregation is the process of gathering and summarizing information in a specified form. It is a common component of most statistical analysis processes.
The proper implementation and use of indexes are important for improving the performance of database queries.
The UNION operator is used to combine the result-set of two or more SELECT statements. It is often used when a report needs to be made based on multiple tables.
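A UNION sketch via sqlite3 with two invented sales tables; UNION removes duplicate rows across the combined result, while UNION ALL would keep them.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE online_sales (product TEXT)")
conn.execute("CREATE TABLE store_sales (product TEXT)")
conn.executemany("INSERT INTO online_sales VALUES (?)", [("pen",), ("book",)])
conn.executemany("INSERT INTO store_sales VALUES (?)", [("book",), ("lamp",)])

# "book" appears in both tables but only once in the UNION result.
products = sorted(row[0] for row in conn.execute(
    "SELECT product FROM online_sales UNION SELECT product FROM store_sales"
))
```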
The GROUP BY statement groups rows by some attribute into summary rows. It is a common command when making various reports.
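A GROUP BY sketch via sqlite3 on a hypothetical `grades` table, combining grouping with the COUNT and AVG aggregates:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE grades (student TEXT, score INTEGER)")
conn.executemany("INSERT INTO grades VALUES (?, ?)",
                 [("alice", 80), ("alice", 90), ("bob", 70)])

# GROUP BY collapses the rows for each student into one summary row.
summary = conn.execute("""
    SELECT student, COUNT(*) AS n, AVG(score) AS avg_score
    FROM grades
    GROUP BY student
    ORDER BY student
""").fetchall()
```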
A normalized database is normally made up of multiple tables. Joins are, therefore, required to query across multiple tables.
Data cleaning or data cleansing is the process of detecting and correcting (or removing) corrupt or inaccurate records. Data scientists should be familiar with it to avoid incorrect records that can affect analysis.
A comma-separated values (CSV) file is a delimited text file that uses a comma to separate values. Each line of the file is a data record. Each record consists of one or more fields, separated by commas. Processing CSV files is a common task when working with tabular data.
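A short sketch of CSV parsing with Python's csv module; DictReader maps each data row to the header fields (the payload here is invented):

```python
import csv
import io

# A CSV payload as a string; the first line is the header.
data = "name,age\nalice,34\nbob,29\n"
rows = list(csv.DictReader(io.StringIO(data)))

# Each row is a dict keyed by the header fields; values arrive as strings.
ages = {row["name"]: int(row["age"]) for row in rows}
```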
Cauchy distribution is the distribution of the ratio of two independent, zero-mean normally distributed random variables. As one of the most widely used distributions, it is important for all Data Scientists to be familiar with it.
Exponential distribution is the probability distribution that describes the time between events in a process in which events occur continuously and independently at a constant average rate. As one of the most widely used distributions, it is important for all Data Scientists to be familiar with it.
Normal distribution is a very common continuous probability distribution. Normal distributions are important in statistics and are often used in the natural and social sciences to represent real-valued random variables whose distributions are not known. As one of the most widely used distributions, it is important for all Data Scientists to be familiar with it.
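A small sketch using the standard library's `statistics.NormalDist`, checking the familiar rule that roughly 68.3% of a normal distribution's mass lies within one standard deviation of the mean:

```python
from statistics import NormalDist

# Standard normal distribution: mean 0, standard deviation 1.
nd = NormalDist(mu=0.0, sigma=1.0)

# P(-1 <= X <= 1) via the cumulative distribution function.
within_one_sigma = nd.cdf(1.0) - nd.cdf(-1.0)
```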
SciPy is a Python library used for scientific and technical computing. Every data scientist who uses Python as a programming language should know how to use it for tasks such as optimization, linear algebra, integration, etc.
Nonlinear regression is a form of regression analysis in which observational data are modeled by a function which is a nonlinear combination of the model parameters and depends on one or more independent variables. Since many problems are not linear, nonlinear regression is important for machine learning practitioners.
The CASE expression is SQL's conditional expression: it evaluates conditions in order and returns the result of the first one that matches.
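A CASE sketch via sqlite3 on an invented `scores` table, mapping each numeric score to a pass/fail label:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE scores (student TEXT, score INTEGER)")
conn.executemany("INSERT INTO scores VALUES (?, ?)",
                 [("alice", 92), ("bob", 55)])

# CASE checks each condition in order and falls through to ELSE.
results = conn.execute("""
    SELECT student,
           CASE WHEN score >= 60 THEN 'pass' ELSE 'fail' END AS outcome
    FROM scores
    ORDER BY student
""").fetchall()
```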
A CTE (Common Table Expression) is a temporary result set that can be referenced within another SELECT, INSERT, UPDATE, or DELETE statement. Recursive CTEs can reference themselves, which enables developers to work with hierarchical data.
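A recursive CTE sketch via sqlite3 on a made-up `employees` hierarchy: starting from one employee, the CTE repeatedly joins back to itself to walk up the manager chain.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, manager_id INTEGER)")
conn.executemany("INSERT INTO employees VALUES (?, ?, ?)",
                 [(1, "ceo", None), (2, "lead", 1), (3, "dev", 2)])

# Anchor: employee 3. Recursive step: follow manager_id up the hierarchy.
chain = [row[0] for row in conn.execute("""
    WITH RECURSIVE chain(id, name, manager_id) AS (
        SELECT id, name, manager_id FROM employees WHERE id = 3
        UNION ALL
        SELECT e.id, e.name, e.manager_id
        FROM employees e
        JOIN chain c ON e.id = c.manager_id
    )
    SELECT name FROM chain
""")]
```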