These are the sessions submitted so far for SQLBits 2018.

Microsoft has two different types of SQL Server services available in Azure. SQL Server on an Azure VM is the IaaS (Infrastructure as a Service) offering, which is easier to understand and migrate to.
Microsoft also offers two types of PaaS (Platform as a Service), or DBaaS (Database as a Service), for SQL Server databases native to the cloud – Single Database and Elastic Pool. Choosing the right tier for your database gets harder with the complexity of calculating resource utilisation for your Azure SQL Database.
In this session, I will walk you through the steps involved in analysing the resource utilisation and estimating the right tier to choose for your database in Azure. I will also uncover the mysterious DTU and how it is calculated.
When SQL Server 2016 was released, it offered a fantastic new feature with the Query Store. Long term, statistics based, query tuning became a reality. But what about the thousands of servers that aren't upgrading to SQL 2016 or newer? The open source project Open Query Store is designed to fulfill that need.

This session will give a short introduction to the Query Store feature in SQL 2016 and then dive into the Open Query Store (OQS) solution. Enrico and William (the co-creators of the OQS project) will explain the design of OQS and demonstrate the features. You will leave this session with an understanding of the features of Query Store and Open Query Store, and a desire to implement OQS in your systems when you return to the office.
In this talk you will learn how to use Power BI to prototype/develop a BI solution in days and then (if needed) evolve it into a fully scalable Azure BI solution.
This talk is all about showing real-world tips from real-world scenarios of using Power BI: the good and the bad.

This session is targeted at anyone who is using, or starting to use, Power BI and wants to take home some really good tips & tricks ;)
In this session we will run through all of the latest technologies and tooling we are developing at
Microsoft to democratise machine learning.

We will look at Cognitive Services with prebuilt deep convolutional neural networks, as well as your own custom neural networks.

We will look at R tooling and how to create your own image recognition models in R.

We will cover methods for operationalising your models, such as SQL Server 2017 and Azure Data Lake Analytics, along with a few other surprises.

All of this with just practical demos and no PowerPoints
In this talk we will discuss best practices around how to design and maintain an Azure SQL Data Warehouse for best throughput and query performance. We will look at distribution types, index considerations, execution plans, workload management and loading patterns. At the end of this talk you will understand the common pitfalls and be empowered to either construct a highly performant Azure SQL Data Warehouse or address performance issues in an existing deployment.
Hierarchies and graphs are the bread and butter of most business applications and you find them almost everywhere:

  • Product Categories
  • Sales Territories
  • Bill of Material
  • Calendar and Time

Even though there is a big need from a business perspective, the solutions in relational databases tend to be awkward. The most flexible hierarchies are usually modelled as self-referencing tables. To query such self-referencing hierarchies successfully, you need either loops or recursive Common Table Expressions. SQL Server 2017 now comes with a different approach: the graph database.

Join this session for a journey through best practices to transform your hierarchies into useful information. We will have fun playing around with a sample database based on G. R. R. Martin’s famous “Game of Thrones”.
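As a taster of the contrast this session draws, here is a minimal sketch (table and column names are illustrative, not from the session materials) of the classic self-referencing approach next to the SQL Server 2017 graph syntax:

```sql
-- Classic self-referencing hierarchy, walked with a recursive CTE
WITH Ancestry AS (
    SELECT Id, Name, ParentId
    FROM dbo.Category
    WHERE ParentId IS NULL          -- anchor: the root rows
    UNION ALL
    SELECT c.Id, c.Name, c.ParentId
    FROM dbo.Category AS c
    JOIN Ancestry AS a ON c.ParentId = a.Id   -- recurse down the tree
)
SELECT * FROM Ancestry;

-- The same relationship expressed with SQL Server 2017 graph tables
CREATE TABLE Category (Id INT PRIMARY KEY, Name NVARCHAR(100)) AS NODE;
CREATE TABLE ChildOf AS EDGE;

SELECT child.Name, parent.Name AS ParentName
FROM Category AS child, ChildOf, Category AS parent
WHERE MATCH(child-(ChildOf)->parent);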
Your developers need a copy of the production database, and they need it now! How do you keep up with the shift towards agile development? VMs are a good solution, but we can make environments easier to manage, smaller, and cheaper with containers. Containers let you run SQL Server in an isolated, lightweight environment, but working with them can be tricky. In this session I'll explain the different types of containers available for SQL Server, why some options are better than others, and why they're worth considering. You will learn how to use Docker and Windocks containers to turn SQL Server infrastructure into an on-demand service for your developers and testers, letting them create a new instance of SQL Server with a copy of your production data in less than a minute.
Sometimes things don’t work out as planned. The same thing
happens to our SQL Server execution plans. This can lead to horribly slow
queries, or even queries failing to run at all. In this session you will see
some scenarios demonstrated where SQL Server produces a wrong plan, you will
learn how to identify them and what you can do to avoid them.

You will also learn more on Adaptive Query Processing, a new
feature in SQL Server 2017. This allows your SQL Server to adjust wrong plans
while the plan is being executed. So, if running queries performantly is one of
your concerns, don’t miss out on this session!
You’ve probably already seen that R icon in the Power BI GUI.
It shows up when creating sources, transformations and reports. But the ugly
textbox you got when you clicked upon those icons didn’t encourage you to proceed?
In this session you will learn just a few basic things about R that will
greatly extend your Power BI data loading, transformation and reporting skills
in Power BI Desktop and the Power BI service.
In the current just-in-time world we want to analyze what is
happening now, not what happened yesterday. Companies start to embrace Azure
Stream Analytics, which makes it easy to analyze streams of incoming events
without going into advanced coding. But for advanced analytics we need machine
learning to learn patterns in your data. Azure Machine Learning can do this for
you. But the real beauty is that both products can easily work together.

So if you want to see how within 60 minutes we can learn patterns in streams of
data and apply them to live data, be sure to attend this demo-oriented session.
If your regular SQL Server becomes too slow for running your data warehouse queries, or uploading the new data takes too long, you might benefit from the Azure Data Warehouse. Via its “divide and conquer” approach it provides significant performance improvements, yet most client applications can connect to it as if it were a regular SQL Server. To benefit from these performance improvements we need to implement our Azure Data Warehouse in the right way. In this session - through a lot of demos - you will learn how to set up your Azure Data Warehouse (ADW), review indexing in the context of ADW and see that monitoring is done slightly differently from what you’re used to.
With over 30 years of personal experience, Charlie will deliver this entertaining and sometimes humorous session to inform those with leadership and management responsibilities about how they can provide the support and motivation to their teams, which is so essential for teams and organisations to succeed.
Staff engagement is a key challenge for organisations and, with a significant skills shortage, the IT industry is particularly susceptible to high staff turnover.  Staff retention is achieved through a variety of incentives, but effective leadership is fundamental to all areas of a business.
Based on personality types, Charlie discusses the current thinking on how leaders can stretch themselves into different leadership styles to provide optimal leadership for specific situations.  The modern workplace is fast moving and ever changing, so modern leaders need to have great self-awareness, emotional intelligence and an adaptable approach to leading their teams.
The query optimizer is getting smart; computers are taking DBAs’ jobs. In this session MVP Fabiano Amorim will talk about the new “automatic” optimizations in SQL Server 2017: adaptive query processing, automatic tuning and a few other features added to the product. Taking the weekend off? How about turning on automatic tuning to stop bad queries from showing up after an index rebuild or an ‘unexpected’ change?
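As a flavour of what the session covers, this is how automatic plan correction is switched on in SQL Server 2017, and where the engine surfaces its tuning recommendations (a minimal sketch, not the session's own demo script):

```sql
-- Enable automatic plan correction for the current database (SQL Server 2017)
ALTER DATABASE CURRENT
SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);

-- Review the plan-regression recommendations the engine has detected
SELECT reason, score, details
FROM sys.dm_db_tuning_recommendations;
```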
SQL is a tricky programming language. If you work with SQL Server in any capacity – as a developer, DBA, or SQL user – you need to know how to write good T-SQL code. A poorly written query will bring even the best hardware to its knees; for a truly performant system, there is no substitute for properly written queries that take advantage of all SQL Server has to offer. Come to this session to learn how to rewrite a query, and pick up many tips on making queries execute as fast as possible.
If you are a developer+DBA, consultant+DBA, IT Manager+DBA, Intern+DBA, technical support+DBA or just a DBA, this session will be useful to you. After working for many years as a developer and consultant, SQL Server MVP Fabiano Amorim has been working on many day-to-day DBA tasks. In this session he will speak a little about the DBA job and share some very good tips on how to do it efficiently.
Tired of looking at nicely coloured and shaped plans? Want to go further and see the geek stuff? Come to this session to explore query trees, internals and deep analysis of execution plans in SQL Server. This is an advanced session, so expect to see lots of trace flags, undocumented features and nasty execution plans.
In this session, I'll present some hidden and tricky optimizations that will help you to "speed up" your queries. It all begins by looking at the query execution plan; from there, we'll explore the alternatives that were not initially considered by the query optimizer and understand what it is doing. If you need to optimize queries in your work, don't miss this session.
In this session MVP Fabiano Amorim (@mcflyamorim) will show seven development techniques that you should avoid if your company's DBA suffers from a heart condition: how not to write T-SQL, trigger pitfalls, indexes, functions, parameter sniffing, SQL injection, cache bloat and sort warnings. Come to this session to learn the most common mistakes made when developing for SQL Server, and how to avoid them.
Back to the Future is the greatest time travel movie ever. I'll show you how temporal tables work, in both SQL Server and Azure SQL Database, without needing a DeLorean.

We cover point in time analysis, reconstructing state at any time in the past, recovering from accidental data loss, calculating trends, and my personal favourite: auditing.

There's even a bit of In-Memory OLTP.

There are lots of demos at the end.
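For readers who haven't met temporal tables yet, the core idea the session demonstrates looks roughly like this (a minimal sketch with illustrative table and column names, not the session's demo script):

```sql
-- A system-versioned temporal table: SQL Server maintains the history for you
CREATE TABLE dbo.Employee
(
    Id        INT PRIMARY KEY,
    Salary    DECIMAL(10, 2),
    ValidFrom DATETIME2 GENERATED ALWAYS AS ROW START,
    ValidTo   DATETIME2 GENERATED ALWAYS AS ROW END,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.EmployeeHistory));

-- Time travel without a DeLorean: the table as it looked at a past instant
SELECT *
FROM dbo.Employee
FOR SYSTEM_TIME AS OF '2018-01-01T00:00:00';
```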
Do you move to the cloud because it's fashionable, or because it's a good strategy for your organization?

How do you decide between Azure SQL Database (Platform as a Service), SQL Server on a Azure VM (Infrastructure as a Service), or perhaps a hybrid solution with both?

This session also covers Stretch Database, Data Migration Assistant, and BACPAC files, as well as some hidden gems in SQL Server 2017.
"The database is slow" is one of those eye-rolling, panic-inducing statements, but by then you're already reacting.

This session takes you on a proactive journey through basic database internals, hardware and operating system setup, and how to configure SQL Server from scratch, so that you avoid hearing that dreaded statement.

Think of this as best practices from the ground up, before you get into query tuning.
A DBA in charge of a whole lot of databases and servers has to check regularly that there is no likelihood of problems. The task is well suited to automation as the workload increases. But be honest: have you tried to do that with copy and paste into a Word document? If so, you know how painful it is and how much time you will spend doing it. But what if I told you that you can do it in seconds?
In this session I will introduce a PowerShell-based reporting framework that aims simply to provide a Word-based report with colour-coded alerts wherever there are problems or best practices aren't being followed.
Machine Learning is not magic.  You can’t just throw the data through an algorithm and expect it to provide insights. You have to prepare the data and very often you have to tune the algorithm.  Some algorithms - Neural Nets, Deep Learning, Support Vector Machines and Nearest Neighbour  - are starting to dominate the field.  A great deal of attention is often focused on the maths behind these, and it IS fascinating. 
But you don’t have to understand the maths to be able to use these algorithms effectively.  What you do need
to know is how they work because that is the information that allows you to tune them effectively.  This talk will explain how they work from a non-mathematical standpoint.
AWS DMS is a fantastic service that allows you to migrate your data to heterogeneous databases in the AWS Cloud. In this session we will see how to use the service, what replication instances are and why they are so important, how to create and log tasks, plus tips and tricks, and finally how to troubleshoot it without needing to open a case with AWS.
Analysing highly connected data using SQL is hard! Relational databases were simply not designed to handle this, but graph databases were. Built from the ground up to understand interconnectivity, graph databases enable a flexible, performant way to analyse relationships, and one has just landed in SQL Server 2017! SQL Server supports two new table types, NODE and EDGE, and a new function, MATCH, which enables deeper exploration of the relationships in your data than ever before.

In this session, we seek to explore what a graph database is, why you should be interested, what query patterns it solves and how SQL Server compares with its competitors. We will explore each of these based on real data.
If you're looking to move data in Azure, you have inevitably heard of Data
Factory. You may have also heard it is clunky, limited and requires a lot of
effort; you are correct.  

What if you had the necessary PowerShell tools to automate the tedious and
repetitive elements of a Data Factory, allowing you to kick back while it
deploys all your pipelines to Azure?  

In this session, we will look at how to automate the mundane creation and
deployment of Data Factory artefacts so that you can reduce valuable
development time and increase agility. 

We will look at a real-world example, moving a database from an
on-premises SQL Server to Azure, without writing any code or any JSON.
Whether you're new to Azure Data Factory, or you are a seasoned pipeline
developer, this automation framework will save you time, increase quality and
maintain consistency. 

RDS SQL Server is a managed service for SQL Server from AWS. In this session we will have a brief introduction to RDS SQL Server, along with practical examples of how to set it up and some basic operations, such as using native backup and restore to a point in time, and their limitations. We will also cover some questions that will allow you to understand and consider whether it is feasible for your business to use RDS SQL Server instead of an EC2 instance running SQL Server.
The most effective T-SQL support feature comes installed with every edition of SQL Server, is enabled by default, and incurs no overhead. Yet the vast majority of database administrators underutilize or completely neglect it. That feature’s name is “comments”.

In this session, Microsoft Certified Master Jennifer McCown will demonstrate the various commenting methods that make code supportable. Attendees will learn what’s important in a header comment, use code blocking to edit code, build a comprehensive help system, and explore alternative comment methods in stored procedures, SSIS packages, SSRS reports, and beyond.
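To give a feel for the commenting methods discussed, here is a small sketch of a header comment and code blocking in a stored procedure (object names, parameters and history entries are purely illustrative, not from the session):

```sql
/*************************************************************
  Object:      dbo.usp_GetOrders  (name is illustrative)
  Description: Returns orders for a given customer.
  Parameters:  @CustomerId - the customer to filter on
  History:     2018-01-15  JM  Created
*************************************************************/
CREATE PROCEDURE dbo.usp_GetOrders
    @CustomerId INT
AS
BEGIN
    -- Code blocking: a section commented out while debugging
    /*
    EXEC dbo.usp_AuditAccess @CustomerId;
    */
    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId;   -- inline comment explaining intent
END;
```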
Microsoft Azure Analysis Services and SQL Server Analysis Services enable you to build comprehensive, enterprise-scale analytic solutions that deliver actionable insights through familiar data visualization tools such as Microsoft Power BI and Microsoft Excel. This session will reveal new features for large, enterprise models in the areas of performance, scalability, advanced calculations, model management, and monitoring. Learn how to use these new features to deliver tabular models of unprecedented scale, with easy data loading and simplified user consumption, enabling the best reporting experiences over corporate, managed datasets.
SQL Server Integration Services (SSIS) has been around since the cloud was just a term to describe the weather. SSIS is great at handling most any on-premises data load need, but that doesn't mean that it can't be used for cloud or on-prem/cloud hybrid architectures. With the flexibility in its legacy behaviors and the new cloud-specific tasks and components, Integration Services is versatile enough to wrangle both traditional on-prem and cloud-based ETL needs.

In this session, we will cover how SQL Server Integration Services can play well with the cloud. We'll review and demonstrate how existing SSIS tasks and components can be used for cloud or hybrid load scenarios, and will walk through some of the newest tools built specifically for cloud endpoints. We will also discuss the role SSIS plays in the enterprise alongside other cloud data integration tools, including Azure Data Factory (ADF).
For years, SQL Server Reporting Services chugged along with very few updates. Although it remained a reliable and popular reporting tool, the feature set largely remained unchanged for a decade. With the most recent two major editions (2016 and the upcoming 2017), everything changed. Microsoft delivered a brand new SSRS, instantly transforming Reporting Services from a spartan reporting tool to a rich portal for at-a-glance metrics. No longer do you have to purchase a third-party reporting tool; everything you need is right here!

This session will review and demonstrate the newly-remodeled SQL Server Reporting Services. We'll walk through the essential changes in SSRS, from the all-new reporting portal to the new visualizations. We'll also discuss the SSRS ecosystem and how it fits together with mobile reports and its recent integration with Power BI.
Joins are a thing you learn on Day 1 of T-SQL 101. But they are so much more involved than what you learned then. Logical v physical, Semi Joins, Lookup Joins, Redundant Joins, not to mention those times when you thought you specified one kind of join and the execution plan says it's doing something else.

Luckily, it's not magic - it's all very straightforward once you understand the different types of joins and how they work. This session will cover the different types of logical and physical joins - and even look at joins that don't exist at all.
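One of the distinctions the session draws can be sketched in a few lines (table names are illustrative): a join that multiplies rows versus a semi join that only asks whether a match exists.

```sql
-- An inner join repeats each customer once per matching order…
SELECT c.CustomerId
FROM dbo.Customer AS c
JOIN dbo.Orders  AS o ON o.CustomerId = c.CustomerId;

-- …whereas a semi join ("does at least one match exist?") returns each
-- customer at most once; the optimizer typically shows this as a
-- left semi join operator in the execution plan.
SELECT c.CustomerId
FROM dbo.Customer AS c
WHERE EXISTS (SELECT 1
              FROM dbo.Orders AS o
              WHERE o.CustomerId = c.CustomerId);
```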
In a real data mining or machine learning project, you spend more than half of the time on data preparation and data understanding. The R language is extremely powerful in this area, and the Python language is a match for it. Of course, you can also work with the data using T-SQL. In this session you will learn how to gain data understanding with quickly prepared basic graphs and descriptive statistics. You can do advanced data preparation with many data manipulation methods available out of the box and in additional packages for R and Python. After this session, you will understand what tasks data preparation involves, and what tools the SQL Server suite offers for these tasks.
Databases that serve business applications should often support temporal data. For example, suppose
a contract with a supplier is valid for a limited time only. It can be valid from a specific point in time onward, or it can be valid for a specific time interval—from a starting time point to an ending time point. In addition, many times you need to audit all changes in one or more tables. You might also need to be able to show the state in a specific point in time, or all changes made to a table in a specific period of time. From the data integrity perspective, you might need to implement many additional temporal specific constraints.
This session introduces the temporal problems, deals with solutions that go beyond SQL Server support, and shows the out-of-the-box solutions in SQL Server, including defining temporal data, application-versioned tables, system-versioned tables, and what kind of temporal support is still missing in SQL Server.
Do you really need to learn R or Python to do some statistical analyses with SQL Server? Of course not. The SQL Server 2012–2017 Database Engine has so many business intelligence (BI) improvements that
it might become your primary analytic database system. However, to get the maximum out of these features, you need to learn how to properly use them. This in-depth session shows extremely efficient statistical queries that use the window functions and are optimized through algorithms that use mathematical knowledge and creativity. During the session, the formulas and usage of those statistical procedures are explained as well. This session is useful not only for BI developers; database and other developers can successfully learn how to write efficient queries. Or maybe you want to become a data scientist? Then you need to know statistics and programming. You get the best of both in this session.
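As one example of the kind of statistical query the session covers, median and quartiles can be computed per group with analytic window functions alone (table and column names are illustrative):

```sql
-- Median and inter-quartile range per territory, no R or Python required
SELECT DISTINCT
    Territory,
    PERCENTILE_CONT(0.25) WITHIN GROUP (ORDER BY Amount)
        OVER (PARTITION BY Territory) AS Q1,
    PERCENTILE_CONT(0.5)  WITHIN GROUP (ORDER BY Amount)
        OVER (PARTITION BY Territory) AS Median,
    PERCENTILE_CONT(0.75) WITHIN GROUP (ORDER BY Amount)
        OVER (PARTITION BY Territory) AS Q3
FROM dbo.Sales;
```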
The range of options for storing data in Microsoft Azure keeps growing; the most notable recent addition is the Managed Instance. But what is it, and why is it there? Join John as he walks through what they are
and how you might start using them.

Managed Instances add a new option for running workloads in the cloud, allowing near parity with a traditional on-premises SQL Server – including SQL Agent, cross-database queries, Service Broker, CDC, and many more – and overcoming many of the challenges of using Azure SQL Database.

But what is the reality, how do we make use of it, and are there any gotchas that we need to be aware of? This is what we will cover, going beyond the hype and looking at how we can make use of this new service.
With SQL Server 2017 Microsoft has added Linux as an Operating System choice. The same SQL Server engine, but there are some subtle differences in behaviour. In this session, we will walk through getting SQL Server up and running on Linux.

From install, to creating databases, viewing monitoring counters, through to setting up high availability. The same principles apply in SQL Server on Linux as in Windows. However, there are several subtle and some
not so subtle differences. In this demo-driven session, we will look at some of these. Looking at where we might need to alter some of our go-to scripts and tools, as well as what still works fine.
Monitoring cloud platforms needs a different approach from on-premises. For a start, there is a lot of abstraction, meaning there is less to see. But what is important and how do I get it? Here, I will demonstrate how to get at this data via the APIs.

Getting both service metadata and performance metrics is possible, even with PowerShell. Together we will walk through configuring the appropriate security in Azure. Then look at what the APIs have to offer, finally pulling the data out and having a look at what we can do with it.
Once data leaves your SQL Server, do you know what happens, or is the world of networking a black box to you? Would you like to know how data is packaged up and transmitted to other systems, and what to do when things go wrong? Are you tired of being frustrated with the network team?

In this session, we introduce how data moves between systems on networks, then look at TCP/IP internals. We’ll discuss real world scenarios showing you how your network’s performance impacts the performance of your SQL Server and even your recovery objectives.
So you’re a SQL Server administrator and you just installed SQL Server on Linux. It’s a whole new world. Don’t fear, it’s just an operating system. It has all the same components Windows has, and in this session we’ll show you that. We will look at the Linux operating system architecture and show you where to look for the performance data you’re used to! Further, we’ll dive into SQLPAL and how its architecture and internals enable high performance for your SQL Server. By the end of this session you’ll be ready to go back to the office with a solid understanding of performance monitoring for Linux systems and SQL Server on Linux. We’ll look at the core system components of CPU, disk, memory and networking, cover monitoring techniques for each, and look at some of the new tools available, from DMVs to DBFS.

In this session we’ll cover the following:
- System resource management concepts: CPU, disk, memory and networking
- SQLPAL architecture and internals, and how its design enables high performance for SQL Server on Linux
Challenged with deploying SQL Server at scale, let me show you how I deployed 80 SQL Servers for a client, fast! For this task, I needed to go from DSC Zero to DSC Hero. In this “notes from the field” session, I’ll share with you how I was able to achieve my client’s goals.

In this session we’ll learn:

  • DSC fundamentals
  • DSC resources and where to get them
  • Configuration data
  • Best-practice SQL Server configurations implemented in DSC
  • Leveraging this configuration for disaster recovery
One of the most highly anticipated new features in the SQL Server 2016 release was Query Store. It's referred to as the "flight data recorder" for SQL Server because it tracks query information over time – including the text, the plan, and execution statistics. The addition of wait statistics information – tracked for each query plan – in SQL Server 2017 makes Query Store a tool that every data professional needs to know how to use, whether you're new to troubleshooting or someone who's been doing it for years. When you include the new Automatic Tuning feature in SQL Server 2017, suddenly it seems like you might spend less time fighting fires and more time enjoying a lunch break that’s not at your desk. In this session, we'll walk through Query Store with a series of demos designed to help you understand how you can immediately start to use it once you’ve upgraded to SQL Server 2016 or 2017. We'll review the different options for Query Store, look at the data collected (including wait stats!), check out how to force a plan, and dive into how you can leverage Automatic Plan Correction and reduce the time you spend on Severity 1 calls fighting fires. It’s time to embrace the future and learn how to make troubleshooting easier using the plethora of intelligent data natively captured in SQL Server and SQL Azure Database.
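The plan-forcing workflow mentioned above boils down to two steps against the Query Store catalog views; a minimal sketch (the query and plan IDs here are placeholders you would take from your own results):

```sql
-- Step 1: find the slowest plans tracked by Query Store
SELECT TOP (10)
    qsq.query_id,
    qsp.plan_id,
    qsrs.avg_duration
FROM sys.query_store_query         AS qsq
JOIN sys.query_store_plan          AS qsp  ON qsp.query_id = qsq.query_id
JOIN sys.query_store_runtime_stats AS qsrs ON qsrs.plan_id = qsp.plan_id
ORDER BY qsrs.avg_duration DESC;

-- Step 2: pin a known-good plan for a regressed query
EXEC sys.sp_query_store_force_plan @query_id = 42, @plan_id = 7;
```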
Ever wondered if you can optimize your SQL projects so you don't have to do unnecessary work? Take a look at how I've optimized SSDT deployments through a use case. This session will take a deep dive into the dacpac and give you ideas on how you can leverage that knowledge to write your own tools and get the best out of SSDT. The session will focus on two areas: SSDT and MSBuild.
A few of you will have heard of SQL Server Data Tools (SSDT); you may have started using it but are not entirely sure where to start, and you're being pushed to make sure it's "Agile", "DevOps", "CI/CD" and so on. This is more of a beginner's session on how I've gone about getting monolithic old databases into an Agile practice, so you can hit the ground running should you need to.
Entity Framework doesn't have the best reputation amongst DBAs, but the good news is it isn't inherently terrible; just very easy to get wrong. In this session, we'll explore the mistakes which make Entity Framework stress SQL Server, and show how you can resolve them. We'll talk about how you can spot issues, either in production or during development. Finally we'll discuss ways of working with your development team to prevent these problems occurring in the first place. You might not leave convinced that Entity Framework is a good idea, but you should go home with the understanding needed to get it running well on your systems.
For a long time, people have not been unit testing databases. Luckily, in today's world the unicorns and leprechauns are making an appearance in real life! In this session I’ll take you from absolute beginner to an intermediate/advanced level of understanding of SQL Server unit tests: the pros, the cons and the gotchas. And, when all else fails, how to write your own SQL Server unit test.
This session considers situations where a table is used multiple times in a single query, through multi-level views or inline functions. Detecting the source of the problem from the execution plan alone is impossible, so we will try to find which objects should be considered for performance tuning. The session will show some techniques, and their variants, for finding the objects to start tuning with.
In simple words: a technique for monitoring deadlocks. From the results of the monitoring we get all the necessary details to help fix the problem. The technique does not require DBA attention while monitoring, and allows the performance tuner to find the proper changes to fix the problem.
By now, all the SQL world should have heard about the R language, especially since Microsoft is committed to integrate it into their data platform products. So you installed the R base system and the IDE of your choice. But it's like buying a new car - nobody is content with the standard. You know there are packages to get you started with analysis and visualisation, but which ones?

A bundle called The Tidyverse comes in handy, consisting of a philosophy of tidy data and some packages mostly (co-)authored by Hadley Wickham, one of the brightest minds in the R ecosystem. We will take a look at the most popular Tidyverse ingredients like tidyr, ggplot2, dplyr and readr, and we'll have lots of code demos on real world examples.
“A picture is worth a thousand words” - well, that is especially true when it comes to analyzing data. Visualization is the quick and easy way to get the big ‘picture’ in your data, and the R ecosystem has a lot to offer in this regard.

They may not add up to exactly 50, but in this session I’ll show you lots of compelling visualizations produced with the help of the ggplot2 package and friends - and their usual small effort of code. We will start beyond the usual bar, line or scatter plots. 

Instead, our screen will show diagrams that always made you think “How do they do that?”. We will see waterfall diagrams, violins, joyplots, marginal histograms, maps and more… and you’ll get the code to reproduce everything.