These are the sessions submitted so far for SQLBits 2018.

Microsoft has two different types of SQL Server service available in Azure. SQL Server on an Azure VM is the IaaS (Infrastructure as a Service) option, which is easier to understand and migrate to.
Microsoft also offers two types of PaaS (Platform as a Service), or DBaaS (Database as a Service), for SQL Server databases native to the cloud – Single Database and Elastic Pool. Choosing the right tier for your database is made harder by the complexity of calculating the resource utilisation of an Azure SQL Database.
In this session, I will walk you through the steps involved in analysing the resource utilisation and estimating the right tier to choose for your database in Azure. I will also uncover the mysterious DTU and how it is calculated.
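As a taster, if your database is already running in Azure, the sys.dm_db_resource_stats DMV records snapshots of resource consumption (roughly every 15 seconds, retained for about an hour) and is a natural starting point for this analysis. A minimal sketch:

    -- Average and peak resource use over the retained window, as a rough
    -- indicator of how close the database runs to its tier's limits.
    SELECT AVG(avg_cpu_percent)       AS avg_cpu,
           MAX(avg_cpu_percent)       AS peak_cpu,
           AVG(avg_data_io_percent)   AS avg_data_io,
           AVG(avg_log_write_percent) AS avg_log_write
    FROM sys.dm_db_resource_stats;
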
When SQL Server 2016 was released, it offered a fantastic new feature with the Query Store. Long-term, statistics-based query tuning became a reality. But what about the thousands of servers that aren't upgrading to SQL 2016 or newer? The open source project Open Query Store is designed to fulfil that need.

This session will give a short introduction to the Query Store feature in SQL 2016 and then dive into the Open Query Store (OQS) solution. Enrico and William (the co-creators of the OQS project) will explain the design of OQS and demonstrate the features. You will leave this session with an understanding of the features of Query Store and Open Query Store, and a desire to implement OQS in your systems when you return to the office.
In this talk you will learn how to use Power BI to prototype/develop a BI solution in days and then (if needed) evolve it into a fully scalable Azure BI solution.
This talk is all about showing real-world tips from real-world scenarios of using Power BI: the good and the bad.

This session is aimed at anyone who is using, or starting to use, Power BI and wants to take home some really good tips & tricks ;)
In this session we will run through all of the latest technologies and tooling we are developing at Microsoft to democratise machine learning.

We will look at Cognitive Services with prebuilt deep convolutional neural networks and your own custom neural networks.

We will look at R tooling and how to create your own image recognition models in R.

We will cover methods for operationalising your models, such as SQL Server 2017 and Azure Data Lake Analytics, along with a few other surprises.

All of this with just practical demos and no PowerPoints.
In this talk we will discuss best practices around how to design and maintain an Azure SQL Data Warehouse for best throughput and query performance. We will look at distribution types, index considerations, execution plans, workload management and loading patterns. At the end of this talk you will understand the common pitfalls and be empowered to either construct a highly performant Azure SQL Data Warehouse or address performance issues in an existing deployment.
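To give a flavour of the choices involved, here is a minimal sketch of a hash-distributed fact table (the table and column names are illustrative only):

    -- Hash-distribute a large fact table on a common join key and store it
    -- as clustered columnstore; a poor distribution key causes data skew
    -- and expensive data movement at query time.
    CREATE TABLE dbo.FactSales
    (
        SaleId     BIGINT        NOT NULL,
        CustomerId INT           NOT NULL,
        Amount     DECIMAL(18,2) NOT NULL
    )
    WITH
    (
        DISTRIBUTION = HASH(CustomerId),
        CLUSTERED COLUMNSTORE INDEX
    );
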
Hierarchies and graphs are the bread and butter of most business applications and you find them almost everywhere:

  • Product Categories
  • Sales Territories
  • Bill of Material
  • Calendar and Time

Even though there is a big need from a business perspective, the solutions in relational databases are mostly somewhat awkward. The most flexible hierarchies are usually modelled as self-referenced tables. If you want to successfully query such self-referenced hierarchies, you will need either loops or recursive Common Table Expressions. SQL Server 2017 now comes with a different approach: the graph database.
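For example, a minimal sketch of the recursive CTE approach over an illustrative self-referenced table:

    -- Walk a self-referenced ProductCategory table from the roots down,
    -- tracking each row's depth in the hierarchy.
    WITH CategoryTree AS
    (
        SELECT CategoryId, ParentId, Name, 0 AS Depth
        FROM dbo.ProductCategory
        WHERE ParentId IS NULL
        UNION ALL
        SELECT c.CategoryId, c.ParentId, c.Name, t.Depth + 1
        FROM dbo.ProductCategory AS c
        JOIN CategoryTree AS t ON c.ParentId = t.CategoryId
    )
    SELECT * FROM CategoryTree;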

Join this session for a journey through best practices to transform your hierarchies into useful information. We will have fun playing around with a sample database based on G. R. R. Martin’s famous “Game of Thrones”.
Your developers need a copy of the production database, and they need it now! How do you keep up with the shift towards agile development? VMs are a good solution, but we can make environments easier to manage, smaller, and cheaper with containers. Containers let you run SQL Server in an isolated, lightweight environment, but working with them can be tricky. In this session I'll explain the different types of containers available for SQL Server, why some options are better than others, and why they're worth considering. You will learn how to use Docker and Windocks containers to turn SQL Server infrastructure into an on-demand service for your developers and testers, letting them create a new instance of SQL Server with a copy of your production data in less than a minute.
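To show how lightweight this can be, here is a minimal sketch of starting a SQL Server container with Docker (the image tag and password are illustrative; at the time of writing the Linux image is published as microsoft/mssql-server-linux):

    # Run SQL Server 2017 in a container, exposing the default port.
    docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=Str0ng!Passw0rd" \
        -p 1433:1433 --name sqldev \
        -d microsoft/mssql-server-linux:2017-latest
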
Sometimes things don't work out as planned. The same thing happens to our SQL Server execution plans. This can lead to horribly slow queries, or even queries failing to run at all. In this session you will see some scenarios demonstrated where SQL Server produces a wrong plan, and you will learn how to identify them and what you can do to avoid them.

You will also learn more about Adaptive Query Processing, a new feature in SQL Server 2017. This allows your SQL Server to adjust wrong plans while the plan is being executed. So, if running queries performantly is one of your concerns, don't miss out on this session!
You've probably already seen that R icon in the Power BI GUI. It shows up when creating sources, transformations and reports. But the ugly textbox you got when you clicked on those icons didn't encourage you to proceed? In this session you will learn just a few basic things about R that will greatly extend your data loading, transformation and reporting skills in Power BI Desktop and the Power BI service.
In the current just-in-time world we want to analyse what is happening now, not what happened yesterday. Companies are starting to embrace Azure Stream Analytics, which makes it easy to analyse streams of incoming events without going into advanced coding. But for advanced analytics we need machine learning to learn the patterns in our data. Azure Machine Learning can do this for you. But the real beauty is that both products can easily work together.

So if you want to see how within 60 minutes we can learn patterns in streams of data and apply them on live data, be sure to attend this demo-oriented session.
If your regular SQL Server becomes too slow for running your data warehouse queries, or uploading the new data takes too long, you might benefit from Azure SQL Data Warehouse. Via its "divide and conquer" approach it provides significant performance improvements, yet most client applications can connect to it as if it were a regular SQL Server. To benefit from these performance improvements we need to implement our Azure Data Warehouse in the right way. In this session - through a lot of demos - you will learn how to set up your Azure Data Warehouse (ADW), review indexing in the context of ADW, and see that monitoring is done slightly differently from what you're used to.
With over 30 years of personal experience, Charlie will deliver this entertaining and sometimes humorous session to inform those with leadership and management responsibilities about how they can provide the support and motivation to their teams, which is so essential for teams and organisations to succeed.
Staff engagement is a key challenge for organisations and, with a significant skills shortage, the IT industry is particularly susceptible to high staff turnover.  Staff retention is achieved through a variety of incentives, but effective leadership is fundamental to all areas of a business.
Based on personality types, Charlie discusses the current thinking on how leaders can stretch themselves into different leadership styles to provide optimal leadership for specific situations.  The modern workplace is fast moving and ever changing, so modern leaders need to have great self-awareness, emotional intelligence and an adaptable approach to leading their teams.
The query optimizer is getting smart, and computers are taking DBAs' jobs. In this session MVP Fabiano Amorim will talk about the new "automatic" optimizations in SQL Server 2017: adaptive query processing, automatic tuning and a few other features added to the product. Are you taking the weekend off? How about turning automatic tuning on to avoid bad queries showing up after an index rebuild or an 'unexpected' plan change?
SQL is a tricky programming language. If you work with SQL Server in any capacity, as a developer, DBA, or SQL user, you need to know how to write good T-SQL code. A poorly written query will bring even the best hardware to its knees; for a truly performant system, there is no substitute for properly written queries that take advantage of all SQL Server has to offer. Come to this session to learn how to rewrite a query and see many tips on how to make queries execute as fast as possible.
If you are a developer+DBA, consultant+DBA, IT manager+DBA, intern+DBA, technical support+DBA or just a DBA, this session will be useful to you. After working for many years as a developer and consultant, SQL Server MVP Fabiano Amorim has been working on many day-by-day DBA tasks. In this session he will speak a little about the DBA job and show some very good tips on how to do it efficiently.
Tired of looking at nicely coloured and shaped plans? Want to go further and see the geek stuff? Come to this session to explore query trees, internals and deep analysis of execution plans in SQL Server. This is an advanced session, so expect to see lots of trace flags, undocumented features and nasty execution plans.
In this session, I'll present some hidden and tricky optimisations that will help you to "speed up" your queries. It all begins by looking at the query execution plan, and from there we'll explore the alternatives that were not initially considered by the query optimizer and understand what it is doing. If you need to optimise queries in your work, don't miss this session.
In this session MVP Fabiano Amorim (@mcflyamorim) will show 7 development techniques that you should avoid in case your company's DBA suffers from a heart condition: how not to write T-SQL, trigger pitfalls, indexes, functions, parameter sniffing, SQL injection, cache bloat and sort warnings. Come to this session to learn the most common issues when developing for SQL Server, and how to avoid them.
Back to the Future is the greatest time travel movie ever. I'll show you how temporal tables work, in both SQL Server and Azure SQL Database, without needing a DeLorean.

We cover point in time analysis, reconstructing state at any time in the past, recovering from accidental data loss, calculating trends, and my personal favourite: auditing.

There's even a bit of In-Memory OLTP.

There are lots of demos at the end.
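As a taster, point-in-time analysis boils down to a single clause (table name illustrative):

    -- Reconstruct the state of the table as it was at a moment in the past -
    -- no DeLorean required.
    SELECT *
    FROM dbo.Account
    FOR SYSTEM_TIME AS OF '2015-10-21T16:29:00';
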
Do you move to the cloud because it's fashionable, or because it's a good strategy for your organization?

How do you decide between Azure SQL Database (Platform as a Service), SQL Server on an Azure VM (Infrastructure as a Service), or perhaps a hybrid solution with both?

This session also covers Stretch Database, Data Migration Assistant, and BACPAC files, as well as some hidden gems in SQL Server 2017.
"The database is slow" is one of those eye-rolling, panic-inducing statements, but by then you're already reacting.

This session takes you on a proactive journey through basic database internals, hardware and operating system setup, and how to configure SQL Server from scratch, so that you avoid hearing that dreaded statement.

Think of this as best practices from the ground up, before you get into query tuning.
A DBA in charge of a whole lot of databases and servers has to check regularly that there is no likelihood of problems. The task is well suited to automation as workload increases. But be honest: have you tried to do that with copy and paste into a Word document? If so, you know how painful it is and how much time you will spend doing it. But what if I told you that you can do it in seconds?
In this session I will introduce a PowerShell-based reporting framework that aims to simply provide a Word-based report with colour-coded alerts where there are problems or best practices aren't being followed.
Machine Learning is not magic. You can't just throw the data through an algorithm and expect it to provide insights. You have to prepare the data, and very often you have to tune the algorithm. Some algorithms - Neural Nets, Deep Learning, Support Vector Machines and Nearest Neighbour - are starting to dominate the field. A great deal of attention is often focused on the maths behind these, and it IS fascinating. But you don't have to understand the maths to be able to use these algorithms effectively. What you do need to know is how they work, because that is the information that allows you to tune them effectively. This talk will explain how they work from a non-mathematical standpoint.
AWS DMS is a fantastic service that allows you to migrate your data to heterogeneous databases in the AWS Cloud. In this session we will look at how to use the service, what replication instances are and why they are so important, creating and logging tasks, and tips and tricks, finishing with how to troubleshoot it without needing to open a case with AWS. It will be 2 hours of pure DMS tips and tricks for the databases most widely used in the industry as migration sources.
Analysing highly connected data using SQL is hard! Relational databases were simply not designed to handle this, but graph databases were. Built from the ground up to understand interconnectivity, graph databases enable a flexible, performant way to analyse relationships - and one has just landed in SQL Server 2017! SQL Server supports two new table types, NODE and EDGE, and a new function, MATCH, which enables deeper exploration of the relationships in your data than ever before.
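As a small taste of the syntax, a minimal sketch with made-up tables:

    -- Two node tables and one edge table...
    CREATE TABLE dbo.Person (PersonId INT PRIMARY KEY, Name NVARCHAR(100)) AS NODE;
    CREATE TABLE dbo.Movie  (MovieId  INT PRIMARY KEY, Title NVARCHAR(200)) AS NODE;
    CREATE TABLE dbo.ActedIn AS EDGE;

    -- ...queried with MATCH to traverse the relationship.
    SELECT p.Name, m.Title
    FROM dbo.Person AS p, dbo.ActedIn AS a, dbo.Movie AS m
    WHERE MATCH(p-(a)->m);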

In this session, we seek to explore what a graph database is, why you should be interested, what query patterns it solves, and how SQL Server compares with competitors. We will explore each of these based on real data shredded from IMDB.

If you're looking to move data in Azure, you have inevitably heard of Data Factory. You may have also heard it is clunky, limited and requires a lot of effort; you are correct.  What if you had the necessary PowerShell tools to automate the tedious and repetitive elements of a Data Factory, allowing you to kick back while it deploys all your pipelines to Azure?  

In this session, we will look at how to automate the mundane creation and deployment of Data Factory artefacts so that you can save valuable development time and increase agility.

We will look at a real-world example, moving a database from an on-premises SQL Server to Azure, without writing any code or any JSON. Whether you're new to Azure Data Factory, or you are a seasoned pipeline developer, this automation framework will save you time, increase quality and maintain consistency.

RDS SQL Server is a managed service for SQL Server from AWS. In this session we will have a brief introduction to RDS SQL Server and practical examples of how to set it up and of basic operations, such as using native backup and restore, point-in-time restore, and their limitations. We will also cover some questions that will help you understand and decide whether it is feasible for your business to use RDS SQL Server instead of SQL Server on an EC2 instance.
The most effective T-SQL support feature comes installed with every edition of SQL Server, is enabled by default, and costs no overhead. Yet the vast majority of database administrators underutilise or completely neglect it. That feature's name is "comments".

In this session, Microsoft Certified Master Jennifer McCown will demonstrate the various commenting methods that make code supportable. Attendees will learn what’s important in a header comment, use code blocking to edit code, build a comprehensive help system, and explore alternative comment methods in stored procedures, SSIS packages, SSRS reports, and beyond.
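As a flavour of what makes a good header comment, one possible (purely illustrative) shape:

    /**********************************************************************
      Procedure: dbo.usp_GetOpenOrders  (illustrative name)
      Purpose:   Returns open orders for a given customer.
      Inputs:    @CustomerId INT - the customer to report on.
      History:   2017-11-01  JM  Created.
    **********************************************************************/
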
Microsoft Azure Analysis Services and SQL Server Analysis Services enable you to build comprehensive, enterprise-scale analytic solutions that deliver actionable insights through familiar data visualization tools such as Microsoft Power BI and Microsoft Excel. This session will reveal new features for large, enterprise models in the areas of performance, scalability, advanced calculations, model management, and monitoring. Learn how to use these new features to deliver tabular models of unprecedented scale, with easy data loading and simplified user consumption, enabling the best reporting experiences over corporate, managed datasets.
SQL Server Integration Services (SSIS) has been around since the cloud was just a term to describe the weather. SSIS is great at handling most any on-premises data load need, but that doesn't mean that it can't be used for cloud or on-prem/cloud hybrid architectures. With the flexibility in its legacy behaviors and the new cloud-specific tasks and components, Integration Services is versatile enough to wrangle both traditional on-prem and cloud-based ETL needs.

In this session, we will cover how SQL Server Integration Services can play well with the cloud. We'll review and demonstrate how existing SSIS tasks and components can be used for cloud or hybrid load scenarios, and will walk through some of the newest tools built specifically for cloud endpoints. We will also discuss the role SSIS plays in the enterprise alongside other cloud data integration tools, including Azure Data Factory (ADF).
For years, SQL Server Reporting Services chugged along with very few updates. Although it remained a reliable and popular reporting tool, the feature set largely remained unchanged for a decade. With the two most recent major releases (2016 and the upcoming 2017), everything changed. Microsoft delivered a brand new SSRS, instantly transforming Reporting Services from a spartan reporting tool to a rich portal for at-a-glance metrics. No longer do you have to purchase a third-party reporting tool; everything you need is right here!

This session will review and demonstrate the newly remodeled SQL Server Reporting Services. We'll walk through the essential changes in SSRS, from the all-new reporting portal to the new visualizations. We'll also discuss the SSRS ecosystem and how it fits together with mobile reports and its recent integration with Power BI.
Joins are a thing you learn on Day 1 of T-SQL 101. But they are so much more involved than what you learned then. Logical v physical, Semi Joins, Lookup Joins, Redundant Joins, not to mention those times when you thought you specified one kind of join and the execution plan says it's doing something else.

Luckily, it's not magic - it's all very straightforward once you understand the different types of joins and how they work. This session will cover the different types of logical and physical joins - and even look at joins that don't exist at all.
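For example, a semi join is something you have probably written without naming it: an EXISTS predicate is the classic logical semi join, and the optimizer may implement it with any of several physical operators (table names illustrative):

    -- Return each customer at most once if they have any order:
    -- logically a left semi join, whatever physical join the plan picks.
    SELECT c.CustomerId, c.Name
    FROM dbo.Customer AS c
    WHERE EXISTS (SELECT 1 FROM dbo.[Order] AS o
                  WHERE o.CustomerId = c.CustomerId);
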
In a real data mining or machine learning project, you spend more than half of the time on data preparation and data understanding. The R language is extremely powerful in this area, and the Python language is a match for it. Of course, you can also work with the data by using T-SQL. In this session you will learn how to gain data understanding with quickly prepared basic graphs and descriptive statistics analysis. You can do advanced data preparation with the many data manipulation methods available out of the box and in additional packages for R and Python. After this session, you will understand what tasks data preparation involves, and what tools you have in the SQL Server suite for these tasks.
Databases that serve business applications should often support temporal data. For example, suppose a contract with a supplier is valid for a limited time only. It can be valid from a specific point in time onward, or it can be valid for a specific time interval - from a starting time point to an ending time point. In addition, many times you need to audit all changes in one or more tables. You might also need to be able to show the state at a specific point in time, or all changes made to a table in a specific period of time. From the data integrity perspective, you might need to implement many additional temporal-specific constraints.
This session introduces temporal problems, deals with solutions that go beyond SQL Server support, and shows the out-of-the-box solutions in SQL Server, including defining temporal data, application-versioned tables, system-versioned tables, and what kind of temporal support is still missing in SQL Server.
Do you really need to learn R or Python to do statistical analyses with SQL Server? Of course not. The SQL Server 2012-2017 Database Engine has so many business intelligence (BI) improvements that it might become your primary analytic database system. However, to get the maximum out of these features, you need to learn how to use them properly. This in-depth session shows extremely efficient statistical queries that use the window functions and are optimised through algorithms that use mathematical knowledge and creativity. During the session, the formulas and usage of those statistical procedures are explained as well. This session is useful not only for BI developers; database and other developers can successfully learn how to write efficient queries. Or maybe you want to become a data scientist? Then you need to know statistics and programming. You get the best of both in this session.
The range of options for storing data in Microsoft Azure keeps growing; the most notable recent addition is the Managed Instance. But what is it, and why is it there? Join John as he walks through what Managed Instances are and how you might start using them.

Managed Instances add a new option for running workloads in the cloud, allowing near parity with a traditional on-premises SQL Server, including SQL Agent, cross-database queries, Service Broker, CDC, and many more, and overcoming many of the challenges of using Azure SQL Database.

But what is the reality, how do we make use of it, and are there any gotchas that we need to be aware of? This is what we will cover, going beyond the hype and looking at how we can make use of this new technology.
With SQL Server 2017, Microsoft has added Linux as an operating system choice. It is the same SQL Server engine, but there are some subtle differences in behaviour. In this session, we will walk through getting SQL Server up and running on Linux.

From install, to creating databases, viewing monitoring counters, through to setting up high availability. The same principles apply in SQL Server on Linux as in Windows. However, there are several subtle, and some not so subtle, differences. In this demo-driven session, we will look at some of these, looking at where we might need to alter some of our go-to scripts and tools, as well as what still works fine.
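To set expectations for the demos: on Ubuntu, for example, getting the engine running is only a handful of commands (package repository registration omitted; a sketch, not a full walkthrough):

    # Install the engine, run initial setup (edition, SA password), check it.
    sudo apt-get install -y mssql-server
    sudo /opt/mssql/bin/mssql-conf setup
    systemctl status mssql-server
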
Monitoring cloud platforms needs a different approach to on-premises. For a start, there is a lot of abstraction meaning there is less to see. But what is important and how do I get it? Here, I will demonstrate how to get at this data via the APIs. 


Getting both service metadata and performance metrics is possible, even with PowerShell. Together we will walk through configuring the appropriate security in Azure. Then look at what the APIs have to offer, finally pulling the data out and having a look at what we can do with it.
Once data leaves your SQL Server, do you know what happens, or is the world of networking a black box to you? Would you like to know how data is packaged up and transmitted to other systems, and what to do when things go wrong? Are you tired of being frustrated with the network team?

In this session, we introduce how data moves between systems on networks, then look at TCP/IP internals. We’ll discuss real world scenarios showing you how your network’s performance impacts the performance of your SQL Server and even your recovery objectives.
So you're a SQL Server administrator and you just installed SQL Server on Linux. It's a whole new world. Don't fear, it's just an operating system. It has all the same components Windows has, and in this session we'll show you that. We will look at the Linux operating system architecture and show you where to look for the performance data you're used to! Further, we'll dive into SQLPAL and how its architecture and internals enable high performance for your SQL Server. By the end of this session you'll be ready to go back to the office with a solid understanding of performance monitoring for Linux systems and SQL Server on Linux. We'll look at the core system components of CPU, disk, memory and networking, monitoring techniques for each, and some of the new tools available, from DMVs to DBFS.


In this session we'll cover the following:
- System resource management concepts: CPU, disk, memory and networking
- An introduction to SQLPAL architecture and internals, and how its design enables high performance for SQL Server on Linux
Challenged with deploying 80 SQL Servers for a client, fast, I needed to go from DSC Zero to DSC Hero. In this "notes from the field" session, I'll share with you how I was able to achieve my client's goals.

In this session we’ll learn:
DSC Fundamentals
DSC Resources and where to get them
Configuration Data
Best practice SQL Server configurations implemented in DSC
Leveraging this configuration for Disaster Recovery
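To give a flavour of the fundamentals, the skeleton of a DSC configuration looks like this (node and resource names are illustrative):

    # A minimal DSC configuration: compile it to a .mof document, then push
    # it to the target node with Start-DscConfiguration.
    Configuration SqlBaseline
    {
        Import-DscResource -ModuleName PSDesiredStateConfiguration

        Node 'SQLSERVER01'
        {
            WindowsFeature NetFramework
            {
                Name   = 'NET-Framework-45-Core'
                Ensure = 'Present'
            }
        }
    }

    SqlBaseline -OutputPath C:\DSC
    Start-DscConfiguration -Path C:\DSC -Wait -Verbose
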
One of the most highly anticipated new features in the SQL Server 2016 release was Query Store. It's referred to as the "flight data recorder" for SQL Server because it tracks query information over time – including the text, the plan, and execution statistics. The addition of wait statistics information – tracked for each query plan – in SQL Server 2017 makes Query Store a tool that every data professional needs to know how to use, whether you're new to troubleshooting or someone who's been doing it for years. When you include the new Automatic Tuning feature in SQL Server 2017, suddenly it seems like you might spend less time fighting fires and more time enjoying a lunch break that's not at your desk.

In this session, we'll walk through Query Store with a series of demos designed to help you understand how you can immediately start to use it once you've upgraded to SQL Server 2016 or 2017. We'll review the different options for Query Store, look at the data collected (including wait stats!), check out how to force a plan, and dive into how you can leverage Automatic Plan Correction and reduce the time you spend on severity 1 calls fighting fires. It's time to embrace the future and learn how to make troubleshooting easier using the plethora of intelligent data natively captured in SQL Server and Azure SQL Database.
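If you want to try it before the session, turning Query Store on, and forcing a plan once you have found a good one, are both one-liners (database name and ids illustrative):

    -- Enable Query Store on a user database...
    ALTER DATABASE SalesDb SET QUERY_STORE = ON;

    -- ...and, once you have identified a good plan for a problem query, force it.
    EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 7;
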
Ever wondered if you can optimise your SQL projects so you don't have to do unnecessary work? Take a look at how I've optimised SSDT deployments, through a use case. This session will take a deep dive into the dacpac and give you ideas on how you can leverage that knowledge to write your own tools, to get the best out of SSDT. The session will focus on two areas: SSDT and MSBuild.
A few of you will have heard of SQL Server Data Tools (SSDT); you may have started using it but are not entirely sure where to start, and you're being pushed to make sure it's "Agile", "DevOps", "CI/CD", etc. This is more of a beginners' session on how I've gone about getting monolithic old databases into an Agile practice, so you can hit the ground running should you need to do so.
Entity Framework doesn't have the best reputation amongst DBAs, but the good news is it isn't inherently terrible; just very easy to get wrong. In this session, we'll explore the mistakes which make Entity Framework stress SQL Server, and show how you can resolve them. We'll talk about how you can spot issues, either in production or during development. Finally we'll discuss ways of working with your development team to prevent these problems occurring in the first place. You might not leave convinced that Entity Framework is a good idea, but you should go home with the understanding needed to get it running well on your systems.
For a long time people have not been unit testing databases. Luckily, in today's world the unicorns and leprechauns are making an appearance in real life! In this session I'll take you from an absolute beginner to an intermediate/advanced level of understanding of SQL Server unit tests: the pros, the cons and the gotchas. And, when all else fails, how to write your own SQL Server unit test.
This session considers situations where a table is used multiple times in a single query, through multi-level views or inline functions. Detecting the source of the problem from the execution plan alone is impossible. We will try to find which objects should be considered for performance tuning, and the session will show some techniques, and their variants, for finding the objects at which to start tuning.
In simple words: a technique for monitoring deadlocks. From the monitoring results we get all the necessary details that help to fix the problem. The technique does not require DBA attention during monitoring, and allows the performance tuner to find the proper changes to fix the problem.
By now, all the SQL world should have heard about the R language, especially since Microsoft is committed to integrating it into their data platform products. So you've installed the R base system and the IDE of your choice. But it's like buying a new car - nobody is content with the standard model. You know there are packages to get you started with analysis and visualisation, but which ones?

A bundle called The Tidyverse comes in handy, consisting of a philosophy of tidy data and some packages mostly (co-)authored by Hadley Wickham, one of the brightest minds in the R ecosystem. We will take a look at the most popular Tidyverse ingredients like tidyr, ggplot2, dplyr and readr, and we'll have lots of code demos on real world examples.
"A picture is worth a thousand words" - well, that is especially true when it comes to analysing data. Visualisation is the quick and easy way to get the big 'picture' in your data, and the R ecosystem has a lot to offer in this regard.

They may not add up to exactly 50, but in this session I’ll show you lots of compelling visualizations produced with the help of the ggplot2 package and friends - and their usual small effort of code. We will start beyond the usual bar, line or scatter plots. 

Instead our screen will show diagrams that always made you think "How do they do that?". We will see waterfall diagrams, violins, joyplots, marginal histograms, maps and more… and you'll get the code to reproduce everything.
You’ve just been given a server and you either need to fix a problem or just want to make sure that it’s set up correctly. This session will take you through designing your own toolkit to help you quickly diagnose a wide array of problems.

We will walk through scripts that will help you pin point various issues quickly and efficiently. This session will help you diagnose the following concerns;

  • Settings – is your server set up correctly from the beginning?
  • Hardware – Check your basic hardware specifications and whether it’s going to meet your needs
  • Bottlenecks – We’ll see if there are any areas of the system that are throttling us.
By the end of this session you should have the knowledge of what you need to do in order to start on your own kit. All the code we'll go through is either provided as part of this presentation or comes from open source/community tools.
Open source alternatives to the SQL Server data platform are becoming more and more popular in large enterprises.

Today's marketplace means that your next project may be considering moving away from 'traditional' relational data stores - indeed, you may have already been involved in one.

This session will help you understand the Apache Cassandra ecosystem, and can help you evaluate or implement a complementary DBMS to add to your data platform toolkit.
"Can't we all just get along ?"

This lighthearted session looks at some of the common pitfalls of collaboration in the workplace, taking a side-swipe at thought-showering, stand-ups, sit-downs, and other associated hokey-cokey moves that most of us have come across, and offering some ways to combat them.

Project Managers need not attend...
In this session, we will go through some case studies to show queries that I've seen written by junior developers with common mistakes. We will go through how you would troubleshoot them and improve their performance.

We'll go through common mistakes, gotchas and unexpected issues. We will use inbuilt and free tools to troubleshoot the demo code and watch how much faster we can make them with little tweaks.

Designed for people with a basic understanding of T-SQL who want to know how to start writing faster queries (and people who want to impress their resident DBA).
You swear nothing has changed, but all of a sudden, out of nowhere, queries that used to be fast are suddenly slow. Even weirder, you take the slow query from the application, run it in SSMS, and it's totally fast! What's going on? You restart SQL Server or update statistics, and the problem seems to go away - but only for a few days or weeks, and then it comes right back. You're familiar enough with execution plans to realize that you're getting different plans, but...why? And how do you fix it long term?

In this session, you'll see live demos of the most common culprit: parameter sniffing. You'll learn how to recognize parameter sniffing when it strikes, understand what causes it, see how to fix it short term with the lowest impact possible, and learn 7 ways to fix it long term.

To get the most out of this session, you should have already watched the free online class How to Think Like the Engine, and be familiar with using SET STATISTICS IO ON to see logical reads.
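As a preview of the short-term fixes, two of the classic options you'll see demonstrated (sketched here against an illustrative table):

    -- Option 1: recompile on every execution, trading CPU for a fresh plan.
    SELECT * FROM dbo.Orders WHERE CustomerId = @CustomerId
    OPTION (RECOMPILE);

    -- Option 2: optimize for an "average" value instead of the sniffed one.
    SELECT * FROM dbo.Orders WHERE CustomerId = @CustomerId
    OPTION (OPTIMIZE FOR (@CustomerId UNKNOWN));
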
Someone comes running in and asks, "Are you doing something on the SQL Server right now? Can you take a look at it? It seems really slow."


It's one of the most common complaints we get about our databases - and yet, most of us don't have a written plan with what steps to take, in order.


I do this for a living, so I've had to make a written checklist to handle it. I'll give you the exact same checklist I use, then walk you through several performance emergencies. I'll show you how I use open source scripts like sp_WhoIsActive, sp_BlitzFirst, sp_BlitzCache, and more to find the root cause fast. We won't cover performance tuning - our goal in this hour is just to have a written triage process so we don't look like bumbling idiots. Let's get good at this!
Warning: this is not an introductory session. These are going to be tough problems.

You've been performance tuning queries and indexes for a few years, but lately, you've been running into problems you can't explain. Could it be RESOURCE_SEMAPHORE, THREADPOOL, or lock escalation? These problems only pop up under heavy load or concurrency, so they're very hard to detect in a development environment.

In a very fast-paced session, I'll show these three performance problems popping up under load. I won't be able to teach you how to fix them for good - not inside the span of 75 minutes - but at least you'll be able to recognize the symptoms when they strike, and I'll show you where to go to learn more.
Generating JPG files inside SQL Serve"R" 2016/2017

Write XLS file using CLR proc

Write XLS files manually - "by hand"

Write CSV file using CLR proc to save output from Rscript connecting to multidimensional cube

Read XML file stored in file table

Apply Full Text Index on files stored in file table

Generating ORC & flat files by inserting into external tables with PolyBase - which pitfalls to avoid
Poor data quality has a cost. 
Examples of data quality challenges and their impact.
Having correct data is very important to make correct decisions. 
Data quality goes hand in hand with proper data modelling.
GUIDs can be a horrible choice.
Knowledge about use cases and workload is important input to your data modelling. 
Different kinds of compression can be relevant depending on your usage scenario.
Having a focus on deadlines without having (data) quality in mind will hit you hard at a later point in time.


Delivering good query performance and reports on time is important to business users, but how do you measure it from their perspective?


And obviously the features you provide should work. 
This is where you benefit from unit testing. (If time allows: demo of unit testing with SSDT.)
You don't like the idea of changing the data model to create a kind of virtual private database?

Come and see how foreign key relationships can be used.
Why you should avoid is_member.
How to cache AD role membership, with an example of a job capturing output from PowerShell scripts.
How to write tests to check that TVFs are working correctly.
The analysis of text documents is rapidly growing in importance, and not just for social media but also for legal, academic and financial documents. We'll use a case study based on the analysis of a bank's corporate responsibility reports to understand the changing priorities of the bank over the last decade. We'll employ several analytic techniques - frequency analysis, and finding words and phrases specific to one or a few documents in a collection - and many visualisations, using a variety of tools: R, text analytics web services and Power BI.
MDX appetizers
 - Histogram
 - Quartile
 - Percent Share Problem
 - How to check your MDX expressions


When to use DAX.

Connecting via R to cube.
Azure Cosmos DB is a globally distributed database service designed to enable you to elastically and independently scale throughput and storage across any number of geographical regions with a comprehensive SLA. In this session we will discover how Cosmos DB works and what the key features are that enable you to become polyglot in persistence. A single "database" for multiple models.
In this session I'll show how to prepare and implement a Continuous Delivery (and Deployment) solution, also introducing the concepts of DevOps and Continuous Integration. We will design a real scenario and deploy the database using Visual Studio Team Services, its build server and its release management integrated with the Redgate DLM Automation plugin.
We used to test our code, but what about our databases? How can we automatically test the programmability? Is there any framework to install? How can I put it in a Continuous Integration scenario? tSQLt is a great framework to start with, and third-party tools, like SQL Test by Redgate, can be integrated into the SSMS IDE. We will discuss unit testing on SQL Server and how we can write our test suite.
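As a small taste of the framework, a minimal sketch of a tSQLt test (the table and function under test are made-up names):

    -- A test class is a schema; a test is a stored procedure named test...
    EXEC tSQLt.NewTestClass 'OrderTests';
    GO
    CREATE PROCEDURE OrderTests.[test order total is calculated]
    AS
    BEGIN
        EXEC tSQLt.FakeTable 'dbo.Orders';   -- isolate the table under test
        INSERT INTO dbo.Orders (Quantity, UnitPrice) VALUES (2, 5.00);

        DECLARE @total DECIMAL(18,2) = dbo.fn_OrderTotal();  -- hypothetical function
        EXEC tSQLt.AssertEquals @Expected = 10.00, @Actual = @total;
    END
    GO
    EXEC tSQLt.Run 'OrderTests';
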
DevOps is not just a buzzword. It's a culture, it's a mindset. It can also be seen as a way to make something repeatable, reliable and robust. In this session we will understand how we can act as DevOps people, using Octopus Deploy for automatically delivering two types of items: SSRS reports and SSIS packages. We will focus on the Octopus steps and how to create the NuGet packages from Reporting Services and Integration Services.
In this session we will review the new enhancements to SQL Server security available in SQL Server 2016 and Azure SQL DB. These include Always Encrypted, Row-Level Security and Dynamic Data Masking, as well as whatever else Microsoft has released since I wrote this abstract. We'll look at how to set these features up, how to use them, and most importantly when to use them.
In this fun session, we'll review a bunch of problem implementations that have been seen in the real world. Most importantly we will look at why these implementations went horribly wrong so that we can learn from them and never repeat these mistakes again.

This session is a storytelling session. In other words, it's stories about what people have done which have gone very, very wrong, usually with predictably horrible results. During this session, we'll publicly (and anonymously) shame them so that everyone else can learn from their mistakes.

If you're looking for demos, well-crafted architecture designs, and proper speaking styles, this is not that session. If, however, you want to learn from the horrible mistakes of others in a fast-paced session, this is the session for you.
One of the biggest issues in database performance centers around storage. It’s also one of the hardest places to troubleshoot performance issues because storage engineers and database administrators often do not speak the same language. In this session, we’ll be looking at storage from both the database and storage perspectives. We’ll be digging into LUNs, HBAs, the fabric, as well as RAID Groups.
As you first start to look at the Microsoft Azure cloud, or really any cloud platform, a question comes up: what can I really do with this "cloud" thing?

In this session, we'll touch on the different kinds of things that you can do with the cloud.

I'll give you a high-level view of a wide variety of the things which you can use the cloud for, and how.
There has always been a challenge in publishing on-prem-developed reports to external clients (non-AD users), and this has been decently achieved for very simple requirements. But is this achievable with the latest SQL Server 2017 with RLS enabled?

And how would this solution work for a non-AD user, if an SSRS 2017 mobile report is published to SharePoint, with ADFS and an external IDaaS (Identity as a Service) enabled to view the report and render only that user's own data?

Key learning: SSRS 2017 Mobile Reports, ADFS configuration, IDaaS (3rd-party authorisation).

The software development landscape is changing. More and more, there is an increased demand for AI and cloud solutions. As a user buying cinema tickets online, I would like to simply ask "I want to buy two cinema tickets for the movie Dunkirk, tomorrow's viewing at 1pm" instead of manually following a pre-defined process. In this session, we will learn how to build, debug and deploy a chatbot using the Azure Bot Service and the Microsoft Bot Framework. We will enrich it using the Microsoft Cognitive suite to achieve human-like interactions. Will it pass the Turing test? No, but we can extend the bot service using machine learning (LUIS), APIs (Web Apps) and workflows (Logic Apps).
The big data landscape is vast and confusing, and where to start is often a daunting prospect. Terms like NoSQL, NewSQL, CAP theorem, lambda and MapReduce are enough to intimidate the most seasoned database developer.
This session will introduce big data applications and concepts, such as Spark, Azure Stream Analytics and many others, as well as file formats. Once these foundations have been established, the session will demonstrate how best to connect to these technologies in Power BI, so that you can leverage them to address new challenges.
Why Upgrade?

In this session, we will take an in-depth, end-to-end look at the upgrade process, covering the essential phases, steps and issues involved in upgrading SQL Server (2000 to 2012) and SQL Server 2014 (with a good overview of 2016 too), using best practices and available resources. What to do and what not to do?

This is a popular session that I have been presenting since 2008, at MS Tech-Ed, SQL Saturday and SQLBits UK. We will cover the complete upgrade cycle, including the preparation tasks, upgrade tasks, and post-upgrade tasks, with real-world examples from my consulting experience expanding on the why and how of each solution.
HA/DR options with SQL Server are easy to design and deploy. With the changing arena of Azure and hybrid deployments we need to be decisive in choosing: what are all the high availability (HA) and disaster recovery (DR) options for SQL Server in an Azure VM (IaaS)? Which of these options can be used in a hybrid combination (Azure VM and on-prem)? This session will overview features such as Always On AGs, failover clustering, Azure SQL Data Sync, log shipping, SQL Server data files in Azure, mirroring, Azure Site Recovery, and Azure Backup.

This is one of the key topics in the DBA arena and has received good feedback at a SQL Saturday event; I look forward to fine-tuning the topic with the latest trends on Azure and SQL Server 2016/2017.
To ascertain the abilities of the cloud computing platform, let us review what is available and offered on Microsoft Azure. Microsoft Azure has the ability to move, store and analyse data within the cloud. It is essential to evaluate the multiple opportunities and options for data insights with Microsoft Azure. In this session let us talk about strategies for data storage, data partitioning and availability options with Azure, and take a tour of how best these Azure components can help you achieve success for your Big Data platform.

The DBA is key when a database platform change occurs, and is necessary to support the application and release processes - there is a miracle waiting to happen! Based on my experience the DBA is left out of the key elements of DevOps; this is unfortunate, as DBAs have a lot to offer. In this session let us review where exactly DBAs can make miracles happen with their magic wand, and talk about processes and procedures: evaluating each change request to ensure that it is well thought out and compliant with organisational best practices. Take away the best practices associated with the DevOps and DBA worlds.
As a data platform engineer you need to understand the best (affordable) methods to dip into the big data lake, and not just the number of open-source tools that are available to help us analyse and build visualisations.

There is a subtle difference in the methods of how we can use SQL on Hadoop and how to get more out of it. The more you follow classic methods, the more intelligently you engage with your data. What tools and methods are available to build better integration with SQL?

In this session let us review the range of methods and tools available in the big data world.
Since SQL Server 2016, SSAS Tabular has included the Tabular Model Scripting Language (TMSL). This allows you to define objects in the Analysis Services model. Instead of taking days or weeks to create a tabular model, with TMSL and PowerShell it is now possible to create a tabular model in seconds.

In this session we'll go through the component parts of TMSL; the PowerShell cmdlets; and a practical demonstration of TMSL and PowerShell working together to create and deploy a tabular model. 
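As a taster, TMSL commands are just JSON and can be sent to the server with the SqlServer PowerShell module's Invoke-ASCmd cmdlet (server and database names illustrative):

    # Send a TMSL command to fully refresh a tabular database.
    $tmsl = @'
    { "refresh": { "type": "full",
                   "objects": [ { "database": "SalesModel" } ] } }
    '@
    Invoke-ASCmd -Server 'localhost\TABULAR' -Query $tmsl
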
The Azure Data Lake is one of the newest additions to the Microsoft Azure Cloud Platform, bringing together cheap storage and massive parallel processing in two quick-to-set-up and, relatively, easy-to-use technologies. In this session we will dive into the Azure Data Lake.

First of all, exploring:
  • Azure Data Lake Store: how is data stored? What are the best practices?
  • Azure Data Lake Analytics: how does the query optimiser work? What are the best practices?

Second, a practical demonstration of tuning U-SQL: partitioning and distribution, and job execution in Visual Studio.
Are you the only database person at your company? Are you both the DBA and the Developer? Being the only data professional in an environment can seem overwhelming, daunting, and darn near impossible sometimes. However, it can also be extremely rewarding and empowering. This session will cover how you can keep your sanity, get stuff done, and still love your job. We'll cover how I have survived and thrived being a Lone DBA for 15 years and how you can too. When you finish this session, you'll know what you can do to make your job easier, where to find help, and how to still be able to advance and enrich your career.
Many of us have to deal with hardware that doesn’t meet our standards or contributes to performance problems. This session will cover how to work around hardware issues when it isn’t in the budget for newer, faster, stronger, better hardware.  It’s time to make that existing hardware work for us. Learn tips and tricks on how to reduce IO, relieve memory pressure, and reduce blocking.  Let’s see how compression, statistics, and indexes bring new life into your existing hardware.
What separates great developers from average developers?  Ike Ellis has led development teams for fifteen years.  He has hired developers, fired developers, promoted developers, and written on the topic of software development.  Great developers have a few general characteristics in common and great SQL developers have even more in common. 

This session is a combination of the following things:

  • Reminder to continue doing what we all know we should be doing.
  • Showing the value of soft skills and personal habits
  • Demonstrating software development habits, keystrokes, and code writing strategies.
Come learn the habits that truly great developers foster to create value for their organization and make themselves indispensable.   
In this session, we'll share tips for report creation, including tips about gathering requirements, creating dashboards, understanding business drivers, implementing machine learning quickly and easily, judiciously using coloring, delivering reports strategically, and learn when it's time to retire a report.  Each tip is about sixty seconds and most have demos.  All code will be available for download.
Join this session and learn everything you need to know about T-SQL windowing functions!

SQL Server 2005 and later versions introduced several T-SQL features that are like power tools in the hands of T-SQL developers. If you aren’t using these features, you’re probably writing code that doesn’t perform as well as it could. This session will teach you how to avoid cursor solutions and create simpler code by using the windowing functions that have been introduced between 2005 and 2012. You'll learn how to use the new functions and how to apply them to several design patterns that are commonly found in the real world.

You will also learn what you need to know to take full advantage of these features to get great performance. We’ll also discuss which features perform worse or better than older techniques, what to watch out for in the execution plan, and more.
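As a small taste, a running total - once a cursor or self-join problem - becomes a single expression (illustrative table):

    -- Running total per customer, ordered by date; the ROWS clause matters
    -- for both correctness and performance.
    SELECT CustomerId, OrderDate, Amount,
           SUM(Amount) OVER (PARTITION BY CustomerId
                             ORDER BY OrderDate
                             ROWS UNBOUNDED PRECEDING) AS RunningTotal
    FROM dbo.Orders;
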
This session is aimed at SQL Server DBAs and Developers who have a basic understanding of containers and want to learn how to deploy container clusters in the cloud to provide high availability/resiliency.


As containers are becoming more and more prevalent, this session provides an introduction to Microsoft's offering, Azure Container Services (ACS). ACS provides the ability to quickly and easily deploy containers in a clustered environment.


I'll cover the different aspects of ACS including the Azure CLI, resource groups, the different types of clusters available, deploying SQL Server containers to the cluster, and a dive into the objects created in the background that support the cluster.


Each topic will be backed up with demos (live and recorded, depending on the time needed) which will show how simple it is to get up and running with this technology.
This session is aimed at database administrators/developers who have not previously implemented partitioning within an OLTP database, and is designed to give an overview of the concepts and implementation.

The session will cover the following:
An introduction to partitioning, core concepts and benefits. 
Overview of partitioning functions & schemes. 
Considerations for selecting a partitioning column. 
Creating a partitioned table. 
Explanation of aligned and non-aligned indexes. 
Manually switching a partition. 
Manually merging a partition. 
Manually splitting a partition. 
Demo on partition implementation & maintenance, covering automatic sliding windows.


After the session, attendees will have an insight into partitioning and have a platform on which to be able to investigate further into implementing partitioning in their own environment.
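For a flavour of the syntax covered, the core partitioning objects fit in a few statements (boundary values and names are illustrative):

    -- The function defines the boundaries, the scheme maps partitions to
    -- filegroups, and the table is created on the scheme.
    CREATE PARTITION FUNCTION pfOrderDate (DATE)
        AS RANGE RIGHT FOR VALUES ('2017-01-01', '2018-01-01');

    CREATE PARTITION SCHEME psOrderDate
        AS PARTITION pfOrderDate ALL TO ([PRIMARY]);

    CREATE TABLE dbo.Orders
    (
        OrderId   BIGINT NOT NULL,
        OrderDate DATE   NOT NULL
    ) ON psOrderDate (OrderDate);
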
T-SQL window functions allow you to perform data analysis calculations like aggregates, ranking, offset and more. When compared with alternative tools like grouping, joins and subqueries, window functions have several advantages that enable solving tasks more elegantly and efficiently. Furthermore, window functions can be used to solve a wide variety of T-SQL querying tasks well beyond their original intended use case, which is data analysis. This session introduces window functions and their evolution from SQL Server 2005 to SQL Server 2017, explains how they get optimized, and shows practical use cases.
Long gone are the days where the only architecture decision you had to make when scaling an environment was deciding which part of the datacenter would store your new server. There is a dizzying array of options available in the SQL Server and Azure ecosystems and those are evolving by the day. Is “the cloud” a fad? Are private datacenters a thing of the past? Could both questions have a kernel of truth in them? In this session I will go over real world scenarios and walk you through real world solutions that utilize your datacenter, cloud providers, and everything in between to keep your data highly available and your customers happy.
Every few months the headlines are filled with news of yet another system outage inconveniencing customers and users. As data platform professionals, the systems and servers for which we are responsible generally form the foundation of our companies’ customer-facing applications. In this fast-paced session, we’ll discuss the differences between high availability and disaster recovery as well as the tools and technologies Microsoft provides us within SQL Server to keep our databases up, our users happy, and our DBAs well rested.
This session deals with database maintenance problems (like defragmentation) in the situation of a 24/7 system. As we walk through basic maintenance we keep in mind that our database is big and proper maintenance takes a lot of time. We will try to solve this problem using T-SQL and CLR.
DML is mostly used without thinking about the multiple operations it triggers in the database engine. This session will give a deep dive into the internal storage engine, down to record level.

After finishing the theory, the different DML commands and their tremendous operational tasks for the database engine will be investigated.

See what effect a workload has on the handling of page splits and/or forwarded records.
This is a demo session: the different workloads will be explained in detail while the demos are executed.
You know the situation: a query that still worked quickly and satisfactorily yesterday suffers from performance problems today?

What will you do in such a situation?

- you may restart SQL Server (it worked all the other times before)
- you drop the procedure cache (as a DBA once told you)
- you get yourself a coffee and think about what you learned in this session

Microsoft SQL Server requires statistics for ideal execution plans. If statistics are not up to date, Microsoft SQL Server may create execution plans that run a query many times slower. In addition to a basic understanding of statistics, this session shows special situations that are known only to a small group of experts.

After a brief introduction to the functionality of statistics (level 100), the special query situations, which lead to wrong decisions without experience, become immediately apparent. The following topics are covered with a large number of demos:

- When will statistics get updated?
- Examples of estimates and how they can go wrong
- Outdated statistics and ascending keys
- When will outdated statistics be updated for unique indexes?
- The drawbacks of statistics on empty tables

Follow me on an adventurous journey through the world of statistics of Microsoft SQL Server.
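If you want to check your own statistics before the session, a sketch using sys.dm_db_stats_properties (table name illustrative):

    -- How stale are the statistics on a table? modification_counter shows
    -- how many changes have accumulated since the last update.
    SELECT s.name, sp.last_updated, sp.rows, sp.modification_counter
    FROM sys.stats AS s
    CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
    WHERE s.object_id = OBJECT_ID('dbo.Orders');
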
You will see the benefits and disadvantages of different FILLFACTOR use cases in different scenarios. The audience will see what deep impact a wrong use of FILLFACTOR may have on your applications, as well as the speed-up for different workloads if the right FILLFACTOR is chosen for the objects in the database.

We will do a deep dive into the dependencies between indexes and the buffer pool, and locate the impact of bad indexes and wrong use of FILLFACTOR on the buffer pool.

This session is a demo-driven presentation (75%).
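For reference, FILLFACTOR is set per index, for example (names and value illustrative):

    -- Leave 10% free space per leaf page to absorb inserts and updates and
    -- so reduce page splits; 100 packs pages completely full.
    CREATE INDEX IX_Orders_CustomerId
        ON dbo.Orders (CustomerId)
        WITH (FILLFACTOR = 90);
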
When was the price of an article changed, and what was the original price?
How has the price of an article developed over a period of time?
Developers used to have to build their own solutions with the help of triggers and/or stored procedures.
With temporal tables an implementation is ready in a few seconds - but what are the special requirements?
In addition to a brief introduction to the technology of temporal tables, this session provides an overview of all the special features associated with temporal tables.

- Renaming tables and columns
- How do Temporal Tables interact with triggers?
- Temporal Tables and In-Memory OLTP - can this go well?
- Can computed columns be used?
- How to configure security
- ...
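As a minimal sketch of the idea (table and column names are invented for illustration), a system-versioned Temporal Table and a point-in-time query look like this:

    CREATE TABLE dbo.ArticlePrice
    (
        ArticleId INT            NOT NULL PRIMARY KEY CLUSTERED,
        Price     DECIMAL(10, 2) NOT NULL,
        ValidFrom DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
        ValidTo   DATETIME2 GENERATED ALWAYS AS ROW END   NOT NULL,
        PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
    )
    WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.ArticlePriceHistory));

    -- What was the price of article 42 at the start of the year?
    SELECT Price
    FROM dbo.ArticlePrice FOR SYSTEM_TIME AS OF '2018-01-01'
    WHERE ArticleId = 42;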
SQL Server is a highly frequented piece of software that may need to serve a single request and/or hundreds of thousands of requests per minute. Across these different kinds of workloads, Microsoft SQL Server has to handle the concurrency of tasks in an orderly fashion. This demo-driven session shows different scenarios where Microsoft SQL Server has to wait while managing hundreds of tasks. See, analyze and solve different wait stats according to their performance impact (a starter query follows the list):

- CXPACKET: when a query goes parallel
- ASYNC_IO_COMPLETION: speed up IO operations (Growth / Backup / Restore)
- ASYNC_NETWORK_IO: What happens if your application refuses data?
- THREADPOOL starvation: a crush of requests against Microsoft SQL Server
- PAGELATCH_xx: how Microsoft SQL Server protects data
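The starter query below is a minimal sketch of the kind of analysis the demos expand on; the excluded wait types are an illustrative, not exhaustive, list of benign waits:

    -- Top waits since the last restart, ignoring some benign system waits.
    SELECT TOP (10)
           wait_type,
           waiting_tasks_count,
           wait_time_ms,
           signal_wait_time_ms
    FROM sys.dm_os_wait_stats
    WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                            N'XE_TIMER_EVENT', N'REQUEST_FOR_DEADLOCK_SEARCH')
    ORDER BY wait_time_ms DESC;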
You need a technique to fill your DWH fast and easily, but you don't have an Enterprise Edition of Microsoft SQL Server? You are dealing with a lot of trade representatives who need current data, but you don't want to use a replication scenario? If you don't want to create your own solution, there is an easy and comfortable way to implement these scenarios with Change Tracking.

This session familiarizes you with a barely known core technique of Microsoft SQL Server: CHANGE TRACKING. Change tracking is a lightweight solution that provides an efficient change tracking mechanism for applications. Typically, to enable applications to query for changes to data in a database and access information related to the changes, application developers had to implement custom change tracking mechanisms. Creating these mechanisms usually involved a lot of work and frequently meant using a combination of triggers, timestamp columns, new tables to store tracking information, and custom cleanup processes. With CT you can handle it in a very simple way within a few seconds.
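A hedged sketch of the moving parts (SalesDb and dbo.Customers are placeholders; the tracked table needs a primary key):

    -- Enable change tracking at database and table level.
    ALTER DATABASE SalesDb
    SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

    ALTER TABLE dbo.Customers
    ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = ON);

    -- Pull every row changed since the last synchronisation version.
    DECLARE @last_sync BIGINT = 0;  -- a real solution stores this per consumer
    SELECT ct.CustomerId, ct.SYS_CHANGE_OPERATION, c.*
    FROM CHANGETABLE(CHANGES dbo.Customers, @last_sync) AS ct
    LEFT JOIN dbo.Customers AS c ON c.CustomerId = ct.CustomerId;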
Technology changes quickly - patterns and approaches less so. As we move towards distributed cloud architectures we will employ a range of disparate tools, and the patterns that were designed for single-box solutions may no longer be appropriate.

This session will take you through the patterns and processes that underpin the Lambda architecture, providing advice and guidance on the tool sets and integration points between them.

We will review a working solution following the movement of data through batch and speed layers via Azure Data Lake Store & Analytics, SQL Data Warehouse and Stream Analytics, before looking briefly at where business logic tools such as Azure Analysis Services and Power BI fit in.
Deep learning has been used to write new Shakespearean sonnets, to imagine delicious new recipes, to write hilarious Harry Potter novels and even to come up with new names for beer! In this session we will understand what deep learning is, what neural nets are, and what steps are required to build a deep learning model, and we will look at some of the great examples mentioned.

We will then turn our new skills to the problem most speakers have: writing session abstracts. Together we will develop a recurrent neural net designed to generate new SQLBits session abstracts, entirely based on previously submitted sessions to SQLBits. Will we be able to produce a session you would have attended? Come along and find out.
The productionisation of machine learning models is not just difficult; it is known as the "hardest problem in data science". Data scientists create models and, when they meet the required level of accuracy, pass them off to operations for implementation. This not only causes a delay but means a model is often reworked into a "production" language. By the time this has been completed, the model is no longer accurate and not fit for purpose. As machine learning practitioners we need to manage our models from development right into production. Does this mean all data scientists need to be data engineers? Microsoft doesn't think so, and they are trying to make this easier for us.

Azure Machine Learning Workbench and Azure Model Management have arrived to make the process from model generation right through to deployment as simple as a click of a few buttons. In this session we will explore the Workbench, build a Python model and deploy it inside a Docker container for use via a REST API. Whether you're new to Azure Machine Learning or you're an experienced ML practitioner, this session will provide you with the knowledge to embrace a new set of skills to smooth the productionisation of your models.
Big Data using Hadoop, cloud computing in Azure, Self-Service BI or "classic" BI with SQL Server have grown quickly over the last few years. In my presentation, I will explain how we use these components at InnoGames and how they fit into our holistic enterprise BI architecture. Gathering reliable insights from large volumes of data quickly is vital in online games - we use this data to optimise the registration of new players and retain existing ones for as long as possible. This presentation is relevant for all people who like to get their hands 'dirty' with data. We will look through the components that make up our BI infrastructure, as well as giving you the big picture.
It's understandable that developers love to work in separate code branches, but this can create painful complications if not managed.

Do you dread large merge conflicts when integrating code?
Continuous Integration is a method of working where we merge and fully test our code multiple times a day. This is only possible with a high level of automation.

I'll be discussing the tools I use to achieve this automation when developing SQL Server databases.

Finding it hard to automate the deployment of database changes?
ReadyRoll is a tool that allows you to test deployments during development.

How do you know your database change won’t affect something you haven’t thought of?
tSQLt and Pester unit tests can put your mind at rest.

Having trouble keeping your test environments in sync with production?
Docker enables us to fix this with infrastructure as code.

You will see how a CI approach to database development can increase team efficiency and reduce the time to go from an idea to production.
How can we use data for the management of the organisation's key resource - the people who work for it? This session is aimed at people who are, or are becoming, leaders in their business, and who want to understand how to lead effectively using data.
Today's managers must be able to respond appropriately to the challenges involved in managing people in a dynamic and rapidly changing business environment. This means developing an understanding of the key aspects of people management and learning to apply data to help leaders to address this important challenge. The session examines strategic aspects and the main issues and complexities involved in the management of people. The session also aims to develop an appreciation of the main contextual influences on people management and their impact through an understanding of data.
In this session, we will use Microsoft's Business Intelligence tools, such as Power BI and SSRS, to analyse and evaluate different perspectives on, and approaches to, the management of people and to develop an understanding of how effective people management contributes to organisational success.
Our data can help us identify and evaluate drivers that impact business growth and sustainability. Data can also help us to utilise strategic marketing and evolving communications to provide viable business marketing solutions, sustainable growth and a competitive hallmark.
This session engages on an innovation journey - critically exploring the drivers and processes associated with value-led marketing and operations. In this session, we will understand, interpret and analyse the inherent problems associated with innovating new products and services. You will gain new insights into creating new business scenarios and providing strategic marketing and communications recommendations.
You know about the cloud but you're not there yet. Is it hard? Is it easy? How do you get started? Come to this session and see for yourself. We start with nothing and end up with a deployed Azure SQL Database. We even run a quick (though ugly; your presenter is a DBA type!) Power BI report and enable geo-redundant disaster recovery with a couple of clicks.

The goal is to take the mystery out, to show the capabilities and get you thinking about what going to the cloud could look like and what it can do for you and your company. This session is nearly PowerPoint free, real world and demo-rich.
The Internet of Things is the new kid on the block, offering a wealth of possibilities for data streaming and rich analytics. Using a Raspberry Pi 3, we will take an end-to-end look at how to interact with the physical world, collecting sensor values and feeding that data in real time into cloud services for manipulation and consumption. This will be a heavily demonstrated session looking at how such an environment can be set up using Microsoft offerings, including: Windows 10 IoT Core, a C# Universal Windows Platform application, an Azure IoT Event Hub, Azure Stream Analytics, Azure SQL DB and Power BI. This is an overview of what's possible, but shows exactly how to build such a simplified solution, in a session which will be 90% demonstrations. It will hopefully add that level of excitement to real-time data, with plenty of hardware out there showing what it can do when set up with Microsoft software.
How do we implement Azure Data Lake?
How does a lake fit into our data platform architecture? Is Data Lake going to run in isolation or be part of a larger pipeline?
How do we use and work with U-SQL?
Does size matter?!
 
The answers to all these questions and more in this session as we immerse ourselves in the lake, that’s in a cloud.
 
We'll take an end to end look at the components and understand why the compute and storage are separate services.
 
For the developers: what tools should we be using, and where should we deploy our U-SQL scripts? Also, what options are available for handling our C# code-behind and supporting assemblies?
 
We’ll cover everything you need to know to get started developing data solutions with Azure Data Lake.Finally, let’s extend the U-SQL capabilities with the Microsoft Cognitive Services!
If your organisation doesn't have dirty data, it's because you are not looking hard enough. How do you tackle dirty data for your business intelligence projects, data warehousing projects, or your data science projects?

In this session, we will examine ways of cleaning up dirty customer data using technologies in SQL Server 2017 such as:

  • R
  • Python
  • AzureML and Machine Learning
  • SSIS

We will also examine techniques for cleaning data with artificial intelligence and advanced computing such as knowledge-based systems and using algorithms such as Levenshtein distance and its various implementations.

Join this session to examine your options regarding what you can do to clean up your data properly.
Today, CIOs and other business decision-makers are increasingly recognizing the value of open source software and Azure cloud computing for the enterprise, as a way of driving down costs whilst delivering enterprise capabilities.
For the Business Intelligence professional, how can you introduce open source for analytics into the enterprise in a robust way, whilst also creating an architecture that accommodates cloud, on-premises and hybrid architectures? We will examine strategies for using open source technologies to improve existing common Business Intelligence issues, using Apache Spark as our backdrop to delivering open source Big Data analytics:
- incorporating Apache Spark into your existing projects
- looking at your choices for parallelizing your Apache Spark computations across the nodes of a Hadoop cluster
- how ScaleR works with Spark
- using sparklyr and SparkR within a ScaleR workflow

Join this session to learn more about open source with Azure for Business Intelligence.


Part 1

Azure Data Factory. This is not SSIS in Azure, but it's a start for our control flows. Let's update our terminology and understand how to invoke our Azure data services with this new controller/conductor who wants to understand our structured datasets. Learn to create the perfect dependency-driven pipeline with Azure Data Factory and allow your data to flow. What's an activity and how do we work with time slices? Is a pipeline a pipeline? Who is this JSON person? All the answers to these questions and more in this introduction to working with Azure Data Factory. Plus, insights from a real-world case study where ADF has been used in production for a big data business intelligence solution handling log files for 1.5 billion users.


Part 2

Having covered the basics in part 1, we'll now go beyond the Azure Data Factory basic activity types and Azure Portal wizard. Extract and load are never the hard parts of the pipeline. It is the ability to transform, manipulate and clean our data that normally requires more effort. Sadly, this task doesn't come so naturally to Azure Data Factory, as an orchestration tool, so we need to rely on its custom activities to break out the C# to perform such tasks. Using Visual Studio, we'll look at how to do exactly that and see what's involved in Azure to utilise this pipeline extensibility feature. What handles the compute for the compiled .Net code, and how does this get deployed by ADF? Let's learn how to fight back against those poorly formed CSV files and what we can do if Excel files are our only data source.
December 1978: ten people die in a commercial airliner crash. Why? Bad troubleshooting skills and poor maintenance - disaster-causing attitudes. As you'll learn in this session, the doomed airliner ran out of fuel while the crew of three wasted time troubleshooting a false alarm. We can draw some parallels in the database world. Poor troubleshooting, disaster-causing attitudes, and a lack of disaster preparedness lead to needless downtime and serious user impact across our environments.

In this session, we'll look at case studies of real-life aviation disasters and recognize, in their ingredients, our own production database downtime incidents. We'll see similarities in the attitudes that cause disasters. Come learn about the importance of preparation, troubleshooting, and teamwork. This will be an interactive session where we'll pick apart disasters, engage in discussion around case studies, and leave prepared to change attitudes in ourselves and our colleagues and avoid disasters at work.


DBA to DSA. What's a DSA? A Data Services Administrator! Why? There seems to be a common misconception that once you move from on-premises SQL Server to Azure PaaS offerings, a DBA is no longer required. This perception is wrong, and in this session I'll show you why. As a business intelligence consultant, I develop data platform solutions in Azure that, once productionised, need administration. As the title suggests, be my Azure DBA. Maybe not DB for database; maybe in Azure I need a DSA, a Data Services Administrator. Specifically, we'll cover a real business intelligence solution in Azure that uses Data Factory, Data Lake, Batch Service, Blob Storage and Azure AD. Help me administer this next-generation data solution.
ADFv2 arrived in September 2017 with a bunch of new concepts and features to support our Azure data integration pipelines. In this session, we'll update your ADFv1 knowledge and start to understand true scale-out control flow and data flow options. What's the integration runtime? Can we easily lift and shift our beloved SSIS packages into the cloud? How do we embed expressions to achieve dynamic activity executions? The answers to all these questions and more.
The Tabular modeling concepts in Analysis Services and Power BI are tremendously powerful, but to become really successful you’ll need to understand the fundamental differences between these technologies and SQL-based environments. The secret to unleash all the power of Power BI is to thoroughly understand the concept of context. In this session, we will introduce you to the different forms of context within a Power BI or Analysis Services Tabular model, and how to work with these to understand and create advanced DAX calculations.
When creating complex DAX measures in Power BI to enable advanced visualizations, it is often helpful to work with driving tables. A driving table enables many scenarios, like dynamically switching between multiple DAX measures, implementing dynamic relationships, or improving the performance of a model. In this session, we introduce you to the concept of driving tables, and explain how and when to use them individually or together.
SQL Server and T-SQL has a lot in common with Reversi – they seem fairly easy to learn, but take a lifetime to master. Many times, overt abuse is not even necessary - an incorrectly configured database is surprisingly capable of destroying any trace of performance all by itself. The engine has some default behaviors that are not necessarily very well understood by many developers. Combine these behaviors with a general lack of understanding of the database engine and the road to performance disaster is truly plotted. Join Alexander for a session of examples how developers can and do abuse SQL Server - both intentionally and unintentionally.
There has been an awakening. Azure SQL Database is no longer merely a comic relief, but an essential part of a good strategy for galactic domination. Some even say that the lack of a good cloud environment played a part in the demise of a certain galactic empire. While this may or may not have been the case, keeping up to date on current technology is always helpful when one tries to avoid the displeasure of the leadership. This session outlines the offerings available in Azure SQL Database today and how the whole database-as-a-service fits in with the rest of the Azure ecosystem. May the cloud be with you - always.
There have been four (4!) new releases of SQL Server since the introduction of Extended Events in SQL Server 2008, and DBAs and developers alike *still* prefer Profiler. Friends, it's time to move on. If you've tried Extended Events and struggled, or if you've been thinking about it but just aren't sure where to begin, then come to this session. Using your existing knowledge and experience, we bridge the gap between Profiler and Extended Events through a series of demos, starting with the Profiler UI you know and love, and ending with an understanding of how to leverage functionality in the Extended Events UI for data analysis. By the end of this session, you’ll know how to use Extended Events in place of Profiler to continue the tasks you've been doing for years--and more. Whether you attend kicking and screaming, with resignation because you’ve finally given up, or with boundless enthusiasm for Extended Events, you'll learn practical techniques you can put to use immediately.
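To show how close the two worlds are, here is a hedged sketch of a Profiler-style "long queries" trace rebuilt as an Extended Events session (the session and file names are invented for illustration):

    CREATE EVENT SESSION [LongRunningQueries] ON SERVER
    ADD EVENT sqlserver.sql_statement_completed
    (
        ACTION (sqlserver.sql_text, sqlserver.database_name)
        WHERE duration > 1000000   -- microseconds, i.e. longer than one second
    )
    ADD TARGET package0.event_file (SET filename = N'LongRunningQueries.xel');

    ALTER EVENT SESSION [LongRunningQueries] ON SERVER STATE = START;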
There are two types of non-relational data that you really should not be trying to store in a relational database: binary files and logs. Both types of data are a terrible fit for a relational database for one particular reason: they are LOBs and relational databases are terrible at storing LOBs.
In case you insist on doing that (or, more likely, you have no choice), then this session is for you: we will see the challenges of working with huge amounts of LOB data and how to work around them without breaking a sweat, with a look at the new features of SQL Server 2017 in this area.
Are you faced with complaints from users, poor performing code from developers, and regular requests to build reports? Do you uncover installation and configuration issues on your SQL Server instances? Have you ever thought that in dire times avoiding Worst Practices could be a good starting point? If the answer is “yes”, then this session is for you: together we will discover how not to torture a SQL Server instance and we will see how to avoid making choices that turn out to be not so smart in the long run.
You are probably thinking: “Hey, wait, what about Best Practices?”. Sometimes Best Practices are not enough, especially for beginners, and it is not always clear what happens if we fail to follow them. Worst Practices can show the mistakes to avoid. I have made lots of mistakes throughout my career: come and learn from my mistakes!
As your personal Virgil, I will guide you through the circles of SQL Server hell:
  • Design sins:
    • Undernormalizers
    • Generalizers
    • Shaky Typers
    • Anarchic Designers
    • Inconsistent Baptists
  • Development sins:
    • Environment Polluters
    • Overly Optimistic Testers
    • Indolent Developers
  • Installation sins:
    • Stingy Buyers
    • Next next finish installers
  • Maintenance sins:
    • Careless caretakers
    • Performance killers
If your objective is to deliver a roadmap of future-proof, robust business intelligence solutions that have longevity, then you will need to understand the options for storing data in Azure. The incorrect choice could be costly in terms of money, time, and project success.
We will cover the lambda architecture as a framework for your business intelligence data. You will learn about the options to store business intelligence data in your organisation.
We will cover when to select data sources such as Hadoop, Azure Cosmos DB, Azure Data Lake, SQL Data Warehouse, Azure SQL Database, Azure blob storage, Table storage, Redis Cache, and Azure Database for MySQL. To illustrate the finer points, we will have plenty of demos to clarify.
You will appreciate the 'why' as well as the 'what' in order to gain an in-depth understanding of the options for storing your business intelligence data, whether you are thinking of a cloud architecture in Azure, a hybrid approach of on-premises and Azure, or specifically an on-premises environment.
“Oh! What did I do?”
Chances are you have heard, or even uttered, this expression. This demo-oriented session will show many examples where database professionals were dumbfounded by their own mistakes, and it could even bring back memories of your own early DBA days. The goal of this session is to expose the small details that can be dangerous to the production environment and to SQL Server as a whole, as well as to talk about worst practices and how to avoid them. We will focus on some of the common errors and their resolution. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session.
Every new release of SQL Server brings a whole load of new features that an administrator can add to their arsenal of efficiency. SQL Server 2016 / 2017 has introduced many new features. In this 75-minute session we will be learning quite a few of the new features of SQL Server 2016 / 2017. Here is a glimpse of the features we will cover in this session.

• Adaptive Query Plans
• Batch Mode Adaptive Join
• New cardinality estimate for optimal performance
• Adaptive Query Processing
• Indexing Improvements
• Introduction to Automatic Tuning

These 75 minutes will be the most productive time for any DBA or developer who wants to quickly jump-start with SQL Server 2016 / 2017 and its new features.
Slow-running queries are the most common problem developers face while working with SQL Server.
While it is easy to blame SQL Server for unsatisfactory performance, the issue often lies with the way queries have been written and how indexes have been set up. The session will focus on ways of identifying problems that slow down SQL Server, and indexing tricks to fix them. We will have a quiz during the session to keep the conversation alive. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session.
Data and databases are very important aspects of application development for businesses. Developers often come across situations where they face a slow server response even though their hardware specifications are above par. This session is for all developers who want their server to perform at blazing fast speed, but want to invest very little time to make it happen. We will go over various database tricks which require absolutely no time to master and practically no SQL coding at all. After attending this session, developers will only need 60 seconds to improve the performance of their database server. We will have a quiz during the session to keep the conversation alive. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session.
You already use Service Fabric, though you may not know it: it runs SQL Azure, DocDB/Cosmos, Cortana, Skype and Power BI. It powers Azure. It runs containers and microservices, scales to hundreds of nodes, and supports no-downtime upgrades, automatic recovery and load-balancing. This is awesome stuff!

Microsoft releases it FREE - for Azure _and_ on-prem - and it runs on Linux. Why? It is Microsoft's microservices platform - think Kubernetes, Mesos, or Docker Swarm on steroids.

It _completely_ changes how you think about systems, and supports data storage "in the cluster" where data is co-located with code for incredible performance.
This session is a very basic introduction to Integration Services (SSIS). We’ll cover the basics; what it’s used for and the various parts and pieces to get you started creating your own projects in no time. We’ll talk about packages, connections and project parameters and their respective tasks/properties. We’ll also cover some basic performance tuning to make your packages run faster.
You’ve heard all the buzz about Power BI, but you have no idea what it is and how it works. This session explains what Power BI is, who can use it and why you would want to. It’s an introductory session that gives you the information you need to determine if Power BI is right for you and your organization.
In many places, we've been using build servers for some time now. Some call this "CI" - however, the real benefit of a "continuous" strategy starts to pay off when creating a build and deployment pipeline: continuous builds and testing lead to less impactful deployments - and data solutions of consistently high quality.

In this session, I'll explain doing CI/CD inside VSTS, for the full BI Stack (SQL Server, SSIS, SSAS - and who knows what more...). Also, we'll look at some more architectural issues you might run into when your DW is still one big project. Where possible, we'll include some automated testing as well.
One hot topic with Power BI is security. In this deep dive session we will look at all aspects of Power BI security, from users and logging to where and how your data is stored, and we even look at how to leverage additional Azure services to secure it even more.
In this session we will look at all the important topics that are needed to get your Power BI modelling skills to the next level. We will cover the in-memory engine, relationships, DAX filter context, DAX vs M and DirectQuery. This will set you on the path to master any modelling challenge with Power BI or Analysis Services.
This certification exam prep session is designed for people experienced with analysing, modeling and visualizing data who are interested in taking Exam 70-778.

Attendees of this session can expect to review the topics covered in this exam in a fast-paced format, as well as receive some valuable test-taking techniques.

Attendees will leave with an understanding of how Microsoft Certification works, what the key topics covered in the exam are, and an exhaustive look at resources for finalizing preparation and getting ready for the exam.
Satya Nadella, Microsoft's CEO, has promised to improve diversity and inclusion in the Microsoft culture. By taking this stance, Nadella is placing Microsoft at the forefront of efforts to be a successful, inclusive and diverse organisation.

What lessons can we learn in diversity? Diversity is often translated as 'women in technology', but that's an incorrect narrative; it means so much more.

How can diversity impact your career? If you are a true community or technical leader in your organisation, then you will need to learn about diversity, how it impacts your team, and what you can do in order to bring about an inclusive organisation.

Join this session to learn more about diversity and how you can be part of the diversity story. You'll also learn about managing diverse teams for success in your organisation.
Extended Events, Dynamic Management Views, and Query Store are powerful and lightweight tools that give you a lot of data when analyzing performance problems. All this is great news for database administrators. The challenge is which tool to use for which problems, and how to combine the data.

Imagine a scenario where you are getting timeouts from a business critical application, the users are complaining, and you are trying to understand what is happening. You have data from XEvents, you are looking in the execution related DMVs, and now you are trying to find the query in Query Store. How do you put it all together?

In this session you will learn techniques for combining the data from these tools, to gain great insight, when analyzing performance problems. We will look at common real-world problems, do the troubleshooting step by step, and visualize the data using PowerBI.
How to install SQL Server for optimal performance, for Windows administrators and "accidental" DBAs. We will go through hardware selection: where to spend more money and where we can save it. How to configure the OS and storage system to leverage the system's abilities to their maximum, with an accent on SQL Server-specific workloads. Tweaks in the installation and configuration of SQL Server, and database settings for optimal performance. You will see some of the tricks used by experienced professionals to make SQL Server run fast.
Is it possible to protect your databases with the mighty Availability Groups technology if you only have Standard Edition and no domain/AD? With asynchronous commit, automatic failover, a single IP presented, no shared storage, included in the inexpensive Standard Edition? Database Mirroring (DBM) is deprecated, and with SQL Server 2016+ on Windows 2016 Microsoft gave us a more powerful replacement: Basic Availability Groups. Although it has some limitations compared to Enterprise Edition, it can incredibly boost the availability of your "standard" databases. Ideal for companies on a budget, providing never-before-seen value in the package. We will go through the benefits, the limits, and the options for handling a scenario where multiple databases join their data from different AGs.
"Catch-all" queries are very common, found in nearly every database. You know those "give me a row with this @ID or give me all rows if @ID is -1"? That is an optional filter, and probably you have a combination of multiple. Tuning them is a nightmare that sometimes even professionals fail to solve completely. Such queries are sometimes fast and suddenly get slow without apparent reason, bringing server to its knees if often executed. You will see the secret mastery and wizardry of achieving the best possible performance for those queries, and understand internals of SQL Server deeper than before. Bringing your company huge benefits in smooth databases performance and customers' satisfaction.
Microsoft Power BI is rich with its default visualizations and can also be extended by adding custom visuals from the Office Store (store.office.com). But besides those visuals, there is another option: you can also create your own visual to be used in your reports. How is this done? Where to start? These were also questions I had before I started creating my own visuals. Now with hands-on experience in creating and submitting custom visuals I will explain and demonstrate in this session how to start creating your own visual, what are the best practices and what are the extra next steps needed before submitting the visual to the Office Store.
There are three different ways to combine coding and Power BI: custom visuals, the Power BI REST API and the Embedding API. In this session I will talk about the REST API and the Embedding API. Both methods have their own specific prerequisites you need in place before you can start and to make them a success.
Where to start? How about security? Licenses?
All typical questions that will be covered in this session, supported with a lot of demos.
You are responsible for writing or deploying SQL Server code, and want to avoid unleashing catastrophe in your databases. In this session you will learn how to install and use the tSQLt testing framework to run automated repeatable tests. You will gain an understanding of test driven development in the context of SQL Server to isolate and test the smallest unit of code. Some simple techniques can catch bugs early when they are cheapest to fix and make your development life-cycle far more robust. Remove that stomach churning feeling at release time wondering what will break and when your phone will start ringing ominously. Replace that nightmare with an optimism that your changes have been fully tested and are production ready. Relax, sleep well and get it right first time every time.
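As a flavour of the framework, a minimal tSQLt test might look like this (dbo.OrderLines and dbo.GetOrderTotal are hypothetical objects used only for illustration):

    EXEC tSQLt.NewTestClass 'OrderTests';
    GO
    CREATE PROCEDURE OrderTests.[test order total sums line amounts]
    AS
    BEGIN
        EXEC tSQLt.FakeTable 'dbo.OrderLines';   -- isolate the unit under test
        INSERT INTO dbo.OrderLines (OrderId, Amount) VALUES (1, 10.00), (1, 15.00);

        DECLARE @actual DECIMAL(10, 2) = dbo.GetOrderTotal(1);  -- hypothetical function

        EXEC tSQLt.AssertEquals @Expected = 25.00, @Actual = @actual;
    END;
    GO
    EXEC tSQLt.Run 'OrderTests';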
The new release of Microsoft SQL Server, SQL Server 2016, brings new functionality for data security professionals, and most of these features are even more mature in the newest SQL Server 2017 version. Now you can protect the data in your database anywhere (on-premises, in the cloud, in transit, in a hybrid environment) even more simply than before. Transparent Data Encryption with better algorithms and better backup support, Row-Level Security, Dynamic Data Masking and Always Encrypted for your application are now simple features. Azure Security Center brings a new quality to implementing security best practices.

We focus on hard theory and, of course, on demos. We look a little closer at a few specific files that exist in our environment. We work in all three different environments (on-premises, in the cloud and hybrid), but our goal is only one: protect your data.

Creating a proper Tabular Model is essential for the success of your modern BI solution. If you set up the foundations properly, you will benefit when building the relationships, formulas and visualizations. Also your Self-Service BI users will understand and use the data model better. This talk guides you through the process of creating a Tabular Model. The session will be packed with very practical tips and tricks and the steps you should do to create a proper model. The session is based on “real life” projects, and will be backed with some theory. After this hour you will understand how to create a proper model, how to optimize for memory usage and speed, enhance the user experience, use some DAX expressions and to use the right tools for the job. You will go home with a very useful step-by-step-guide.
Persistence is Futile - Implementing Delayed Durability in SQL Server

The concurrency model of most relational database systems is defined by the ACID properties, but as they aim for ever-increasing transactional throughput, those rules are bent, ignored, or even broken. In this session, we will investigate how SQL Server implements transactional durability in order to understand how Delayed Durability bends the rules to remove transactional bottlenecks and achieve improved throughput. We will take a look at how this can be used to complement In-Memory OLTP performance, and how it might impact or compromise other things. Attend this session and you will be assimilated!
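For orientation, here is a minimal sketch of opting in (SalesDb and dbo.Orders are placeholders); the commit returns before the log flush, which is exactly the rule-bending the session examines:

    ALTER DATABASE SalesDb SET DELAYED_DURABILITY = ALLOWED;

    BEGIN TRANSACTION;
        UPDATE dbo.Orders SET Status = 'Shipped' WHERE OrderId = 42;
    COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);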
Lockless in Seattle: Using In-Memory OLTP Transaction Processing
Locks and latches have long been the mechanism used to implement SQL Server concurrency control, but with the introduction of In-Memory OLTP the paradigm has shifted. Are we really looking at the Brave New World of Transaction Processing or a dystopian nightmare? In this session, we will understand how In-Memory OLTP architecture is implemented and how its mechanics function. We will learn what transactional bad dependencies and other considerations are introduced by its use and what capabilities are provided by SQL Server 2016.
At conferences like this there are many sessions on what's new, cool new stuff, roadmaps and so on. But in real life we also have to work with existing and proven technologies, and SQL Server Reporting Services is one of those. Many organizations still use it heavily and depend on it for creating paginated reports and web dashboards. And it's expected to be around for a long time.

In this session I will share miscellaneous SSRS tips and tricks: for instance, dealing with multi-language scenarios, handling corporate styles, storing user preferences, creating a good dashboard experience in the portal, and more. I will try to share as many tips and tricks as will fit in one hour.

This session assumes you have a working experience with SSRS, but you need not be a guru.
What do you do when you have a performance or troubleshooting problem that you can't figure out? One option is to open a support ticket with Microsoft Support (CSS). The CSS engineer will use a specific set of tools to collect and analyze workload data from your SQL Server. Based on this information they might recommend patches, configuration changes, or identify the worst-performing queries. But what if you could bypass support and do all of this analysis yourself? In this session, you'll learn how to use battle-tested tools to analyze your workload, read crash dumps, and error logs. Armed with this information, you'll understand the root cause of the problem and propose solutions to performance and stability problems. Finally, you'll learn the basics of debugging SQL Server so that when you finally run into a problem you can't solve, you'll be able to help Microsoft support help you.
Many organizations would like to take advantage of the benefits of using a platform-as-a-service database like Azure SQL Database. Automated backups, patching, and costs are just some of the benefits. However, Azure SQL Database is not 100% feature-compatible with SQL Server—features like SQL Agent, CLR and Filestream are not supported. Migration to Azure SQL Database is also a challenge, as backup and restore and log shipping are not supported methods.

Microsoft recently introduced Managed Instances—a new option that provides a bridge between on-premises or Azure VM implementations of SQL Server and Azure SQL Database. Managed Instances provide full SQL Server surface compatibility and support database sizes up to 35 TB.

In this session, you will learn how to deploy your workloads to Managed Instances, how to size your instance, and the costs and performance options for this new service offering.
Moving to the cloud in a big way? In this case study, learn about building a complex end-to-end infrastructure involving SQL Server (on-premises), Microsoft Azure SQL Data Warehouse, and Azure SQL Database.
Gain an understanding of how to use Azure Automation to reduce your costs and automate processes. You will learn about integration with Azure Active Directory, virtual networks, and data flows. Additionally, you will learn how to make decisions based on service and business requirements. 
With the launch of SQL Server 2017 many database professionals may find themselves needing to quickly come up to speed on a new operating system. In this session you will learn the basics of getting started with SQL Server on Linux:

  • Connecting to Linux
  • Installing SQL Server
  • Configuring your server
  • Adding a disk
  • Writing a simple shell script
You may not be an expert sysadmin after this session, but you will be able to manage your SQL Server instances running on Linux. 
Giving presentations isn't easy, no matter the context. You could be trying to teach about a technical feature in the Data Platform, inspire conference attendees in a keynote, encourage a congregation about some Biblical truth, or simply (ha!) make a crowd of people laugh. In this session, Rob will explore some of the principles he's learned across a variety of contexts, and show you ways that you can develop or enhance your presenting skills.
Don't. Just don't. Uninstall it, and definitely don't use a Hex Editor on any of your SQL Server database files. Call Microsoft Support instead. That's the right thing to do.

But if you find yourself thinking you have no other options, then maybe opening up a Hex Editor is a viable option... No - still call Microsoft Support. Forget anything you saw in this presentation, where Rob would've shown you through some system tables, and might've demonstrated ways that you could've helped get yourself out of that bind. Call Microsoft Support instead.

Really. Just call Microsoft Support.
Open source is part of the Microsoft strategy. Indeed, you may run SQL Server either on a physical or a virtual machine, and recently on top of both the Windows and Linux operating systems.

At the same time, maintaining business continuity remains a major challenge for customers because it is synonymous with economic competitiveness. Fortunately, high availability features for Linux will also ship with SQL Server 2017 to address this daily concern.

This session delivers an overview of what is new and how it is going to change the landscape of your mission-critical scenarios.
Azure SQL DW is a powerful MPP cluster for processing large amounts of structured data. But it still requires optimization to maximize performance and resource usage. Some techniques are almost the same as for a regular SQL Server; some are not, due to the MPP architecture. In this session, I will cover the following topics (a CTAS sketch follows the list):
 - how to minimize data loading time
 - how to choose the right distribution type
 - data movement minimization
 - how scaling and resource classes can help improve data processing performance
 - maximizing columnstore index performance
 - updating statistics
 - partitioning strategy
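As promised above, a hedged CTAS sketch (table and column names are placeholders): rebuilding a fact table hash-distributed on its join key is the classic data movement minimization move.

    CREATE TABLE dbo.FactSales_New
    WITH
    (
        DISTRIBUTION = HASH(CustomerId),   -- co-locate rows joined on CustomerId
        CLUSTERED COLUMNSTORE INDEX
    )
    AS
    SELECT * FROM dbo.FactSales;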
SSDT and SSMS are the primary tools BI developers use for developing and managing SSAS Tabular. Unfortunately, their possibilities are limited, and we should look for other tools that help us automate monitoring and partitioning, understand what is going on inside the VertiPaq engine, optimize our queries, and manage our project and code. In this session, I'm going to show you six amazing tools that must be in your developer/consultant tool belt. These tools help you develop, manage, monitor and optimize your Tabular model; in other words, they make your day-to-day job easier.
Many existing Data Factory solutions include a large number of workarounds due to limitations with the service. Now that Data Factory V2 is available, we can restructure our Data Factories to be lean, efficient data pipelines, and this session will show you how.

The initial Data Factory release was targeted at managing Hadoop clusters, with a couple of additional integrations thrown in - it was mistakenly believed to be "the new SSIS", and subsequently there were a lot of very disappointed people. The new release remedies many of these complaints, adding workflow management, expressions, ad-hoc triggers and many more features that open up a world of possibilities.

This session will run through the new features in ADFV2 and discuss how they can be used to streamline your factories, putting them in the context of real-world solutions. We will also look at the additional compute options provided by the new SSIS integration, how it works within the context of Data Factory and the flexibility it provides.

A working knowledge of ADF V1 is assumed.
If you transition from performing workflow activities in SSIS to Azure Data Factory, you'll no doubt be disappointed by the flexibility offered, even with ADFv2. The complex decision paths, failure routes and for-each logic aren't offered - and they shouldn't be; Data Factory is designed to be a very different tool.

But what if you need that level of decision-making power? Azure Logic Apps fills a huge gap in the Azure story, and if you've not tried it yet - you should. Logic Apps provides control-flow style functionality and can orchestrate small data packets from a huge range of common sources. Want to scrape Twitter, read sensor messages, call web services and record the results, all with zero coding and no server management? It's now possible.

This session will introduce Logic Apps and how it fits into the Modern Analytics Platform, then build a Twitter scraper step by step, demonstrating the process of building, deploying and debugging an app.
Power BI supports a large number of data sources - but did you know you can extend it to add even more? In this session you'll learn how to build custom data connectors in Visual Studio using the M language. Topics covered will include:
  • When to consider building a custom data extension
  • Connecting to web services and relational databases
  • Handling authentication and storing credentials
  • Exposing tables and functions
  • Building a navigation table
  • Implementing query folding
Azure is changing how we build data analytics platforms and Azure SQLDW is one of the key components.

Reflecting on a large-scale Azure DW project, this session gathers together learnings, successes, failures and general opinions into a general SQLDW primer that will accelerate you on this journey.

We'll start by quickly putting the technology in context, so you know WHEN to use it, WHERE it’s appropriate and WHY it works the way it does.
 - Introducing the ADW technology
 - Explaining distributions & performance
 - Explaining PolyBase

Then we'll dive into HOW to use it, looking at some real-life design patterns, best practice and some “tales from the trenches” from a recent large Azure DW project.

 - Performance tips & tricks (designing for minimal data movement, managing distribution skew, CTAS, Resource classes and more)
 - ETL Patterns (Surrogate keys & Orchestration)
 - Common Mistakes & Pitfalls
 - Conclusions & Recommendations
In SSAS Tabular 2017 and Azure Analysis Services, data access is now handled by Power Query and the M language. In this session you'll learn about topics including:
  • How this affects how Analysis Services connects to data sources
  • New data sources this opens up to Analysis Services
  • Query folding and why it's important
  • Managing connections to data sources
  • Using functions for partitioning
  • When not to use this functionality!
DevOps and continuous integration provide huge benefits to data warehouse development. However, most BI professionals have little exposure to the tools and techniques involved. John will be showing how you can use VSTS - Visual Studio Team Services (formerly known as TFS) - to build and test your data warehouse code, and how to use Octopus Deploy to deploy everything to UAT and production. In particular the session will cover:

• Setting up Visual Studio Team Services to act as your build server
• How to use Octopus Deploy to deploy your entire data warehouse
• Developing a build-centric PowerShell script with psake
• Building and deploying SQL Server Data Tools projects with DAC Publish profiles 
• Writing and running automated unit tests 
• The many problems of automating tabular model deployments
In this session we'll look at a wide range of SQL Server tips, tricks and misconceptions, covering TSQL coding practices, query plans, statistics, indexing and more.

For each topic we'll go through some specific use cases and examine common misconceptions and useful tips and tricks.

Delivered in a fun and lighthearted manner, the emphasis of the session will be on learning lots of little takeaways that can be applied to your environments and practices when you get back to work.
PowerApps is an exciting and easy to pick up application development platform offered by Microsoft.
In this session we'll take an overview look at PowerApps from both the development and deployment sides.

We'll go through the process of building and publishing an app and show the rapid value that PowerApps can offer.

Using plenty of demos and examples, attendees will leave with a well-rounded understanding of the PowerApps offering and how it could be used in their organisations.
Performance troubleshooting is a complex subject with many factors under consideration. Finding poorly performing SQL statements means using proven methodologies and evaluating the performance data available in the Dynamic Management Views and Functions.

In this session, we’ll go over a foundation of how and which DMVs to use to identify those problematic statements for versions of SQL Server from 2005 – 2017.
We’ll be demonstrating using practical examples, including code that can be taken away and used on attendees’ own SQL Servers. We’ll also discuss how to identify common causes of performance issues and learn how to quickly review and understand the wealth of performance data available.
On 25th May 2018, the General Data Protection Regulation (GDPR) becomes enforceable and your company must be in compliance. Failure to comply could result in heavy fines based on global revenue and could affect your company regardless of its corporate presence outside of the European Union.

In this session, we will explain what GDPR is, why it is needed, who it affects, and how you can stay in compliance. We will explain what steps you must take and what technologies are available in SQL Server and Microsoft Azure to help keep the auditors at bay.
The system database TempDB has often been called a dumping ground, even the public toilet of SQL Server. (There has to be a joke about spills in there somewhere). In this session, you will learn to find those criminal activities that are going on deep in the depths of SQL Server that are causing performance issues. Not just for one session, but those that affect everybody on that instance. 
After this session, you will know how to architect TempDB for better performance, how to write code more efficiently for TempDB, and how space is utilized within TempDB, and you will have learned about queries and counters that help diagnose where your bottlenecks are coming from.
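As a small taste of those diagnostics, this sketch breaks down who is consuming TempDB space right now:

    SELECT SUM(user_object_reserved_page_count)     * 8 AS user_objects_kb,
           SUM(internal_object_reserved_page_count) * 8 AS internal_objects_kb,
           SUM(version_store_reserved_page_count)   * 8 AS version_store_kb,
           SUM(unallocated_extent_page_count)       * 8 AS free_space_kb
    FROM tempdb.sys.dm_db_file_space_usage;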
Does your application suffer from performance problems even though you followed best practices on schema design? Have you looked at your transaction log?
There's no doubt about it: the transaction log is treated like a poor cousin. The poor thing does not receive much love. The transaction log, however, is a very essential and misunderstood part of your database. A team of developers will create an absolutely awesome, elegant design the likes of which have never been seen before, but leave the transaction log on default settings. It's as if it doesn't matter: an afterthought, a relic of the platform architecture.
In this session, you will learn to appreciate how the transaction log works and how you can improve the performance of your applications by making the right architectural choices. By understanding the internals, we can design for faster recovery and batch processes and, ironically, a smaller transaction log!
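As a first taste of those internals, this sketch checks two of the usual suspects (sys.dm_db_log_info requires SQL Server 2016 SP2 or 2017): a log fragmented into thousands of virtual log files slows down recovery, and a constantly full log points at sizing or recovery model choices.

    -- How many virtual log files (VLFs) does the current database's log have?
    SELECT COUNT(*) AS vlf_count
    FROM sys.dm_db_log_info(DB_ID());

    -- Log size and percentage used for every database on the instance.
    DBCC SQLPERF (LOGSPACE);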
We are living more and more in a polyglot database world. From having one database model to choose for all our projects, we can now choose between relational, document, key-value, graph and column-oriented stores. But that means we need different tools for different projects. Azure Cosmos DB aims to provide all these models in one database, in the cloud. In this session, we will see what those data models are, what kinds of project they can be useful for, and what the strengths of Cosmos DB are for your database needs.
Relational SQL databases are wild beasts that need to be gently tamed: we cannot use them like dumb data stores. There are rules. Instead of stating good practices out of the blue, this session addresses the problem differently. Let's list very common implementation and SQL code antipatterns, which you are guaranteed to have at some level in your company, and see, practically, how to solve them. This session is based on real-world experience I have acquired by auditing clients' SQL Server databases in ten years of independent practice.
What are memory grants? How can we see wait statistics for a query? Can we get warnings if our query is misbehaving? Yes, we can. All those details are now in the query plan. More and more, version after version, Microsoft is adding invaluable information to the actual query plan. If you can understand it, you can solve many of your performance problems. In this session, we will open the query plan, look at its XML content, open Plan Explorer, and analyze some tough queries.
I didn't believe it when I heard the news many months ago: SQL Server was being ported to Linux. Despite being a SQL Server consultant, I spend most of my computer time on Linux. What I have now is what I thought I couldn't even dream of: my favourite RDBMS on my favourite operating system, natively. But do I have the same product? The same performance? The same features? Can I do replication? Can I set up an AlwaysOn AG cluster? We'll see in this session!
There is set theory, which led to the relational model. And there is graph theory, a branch of discrete mathematics that helps solve countless problems. There are graph-oriented databases like Neo4j. Microsoft decided to start implementing graph tables in SQL Server 2017. But what is that? How does it work? Is it ready for the big show?
In this session, we want to know what graphs are, how they can be useful, how we can start to solve data problems with them, and, hum, whether it is version 1 or version 0.1 in SQL Server 2017.
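For the curious, a minimal sketch of the syntax (tables and data invented for illustration):

    CREATE TABLE dbo.Person (PersonId INT PRIMARY KEY, Name NVARCHAR(100)) AS NODE;
    CREATE TABLE dbo.Knows AS EDGE;

    INSERT INTO dbo.Person VALUES (1, N'Alice'), (2, N'Bob');
    INSERT INTO dbo.Knows VALUES
        ((SELECT $node_id FROM dbo.Person WHERE PersonId = 1),
         (SELECT $node_id FROM dbo.Person WHERE PersonId = 2));

    -- Who does Alice know?
    SELECT p2.Name
    FROM dbo.Person AS p1, dbo.Knows AS k, dbo.Person AS p2
    WHERE MATCH(p1-(k)->p2) AND p1.Name = N'Alice';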
Azure SQL Database is becoming more and more interesting for data professionals. The concept of a database is still the same, but the migration process from an on-premises database to an Azure SQL Database can be quite challenging. Attend this interactive session and learn how to migrate your schema and your data from the SQL Server database in your current environment into Azure SQL Database. Attendees of this session will learn:
- How to test for Compatibility
- How to Fix Compatibility Issues
- How to Perform the Migration
- Optimize your Migration and Lessons Learned from the field
SQL Server 2017 has several new integration points with Microsoft Azure. Are you curious how this might benefit your organization? In this session, you will learn how you can use SQL Server 2017 to create a hybrid environment. We will see an overview of all the new Microsoft Azure features that are available in SQL Server 2017, like striped backups to Microsoft Azure Blob Storage, Stretch Database, replication to Azure SQL DB and many more. The session is bulk loaded with demos and will give you a good idea which features can be helpful in your environment.
A good DBA performs his/her morning checklist every day to verify if all the databases and SQL Servers are still in a good condition. In larger environments the DBA checklist can become really time consuming and you don’t even have the time for a coffee… In this session you will learn how you can perform your DBA morning checklist while sipping coffee. I will demonstrate how you can use Policy Based Management to evaluate your servers and how I configured my setup. By the end of this session, you can verify your own SQL environment in no time by using this solution and have plenty of time for your morning coffee!
SQL Server AlwaysOn Availability Groups provide an integrated approach to high availability and disaster recovery. It's a technology that works really well, but do you know what to do when something goes wrong? Where do you start looking if end users start complaining or you get alerts from your monitoring system? In this session you will learn how to identify problems like Failover Cluster issues, unavailable replicas, unhealthy availability groups and performance issues, and you will learn what actions you can take to solve them. At the end of the session you'll have the knowledge to bring your availability groups back to a healthy state!
In this session we'll take a look at how to achieve some things in PowerApps that either aren't out-of-the-box functionality or where there are many ways to achieve the same result.


Using lots of demos we'll look at working with data sources, ways around delegation issues, how to get animation and scrolling side bars to work and a lot more.

We'll explore the best way to code in PowerApps and how to best utilize Collections and Contexts   
This beginner to intermediate session is aimed at those who want to get a good working knowledge of query plans in SQL Server.

In the first half we will start off with the basics of reading query plans and understanding them. We'll learn what to look out for and how to spot issues.

In the second half we'll gradually move into more intermediate topics such as costing and encouraging SQL Server to make different choices. We'll also start to learn how the query optimiser makes its decisions. This session is ideal for those who are just starting out with query plans as well as those who are already familiar with them.

Attendees will leave the session with a good working knowledge of query plans, able to read and interpret them and make informed decisions about their SQL coding.
Running databases on SQL Servers in your office or data centre? Considering moving them to Azure? Heard of Azure SQL DB and Managed Instances? Want to know the options and benefits of Infrastructure as a Service and Platform as a Service? This session will outline the options for migrating to Azure data services and the process involved, including the Data Migration Service offered by Microsoft. The session provides details of the tools available to automate the process, as well as considerations, risks and mitigations.
We live in a cloud first world, but many organisations still run largely on-premise. How could you start your cloud journey? What services are easiest to deploy, and how can you demonstrate value, security, control and governance? This session covers how some of the organisations we work with start their journey and leverage Azure data services. Azure provides tons of capability that isn't available on-premise; this session will also highlight services that can help illustrate and differentiate cloud offerings to complement your data strategy.
There's a new kid on the block when you're thinking about running SQL Server databases in Azure, and that's Managed Instances. This session will cover the benefits, limitations and migration approach. Thinking about Azure SQL DB, but blocked by the restrictions? Like the flexibility of SQL in a VM (IaaS), but don't like the management overhead? This service removes many of the barriers to migration, and Microsoft have created the Azure Data Migration Service to help customers move to Azure faster and more easily. Azure SQL Database Managed Instances could be the answer, and this session will explain why and how.
Microsoft announced the retirement of Power BI Embedded and recommended that everyone migrate to Power BI Premium and embed those reports. Unfortunately, the documentation on doing this is "light" to say the least. In this session we will go through a worked example from end to end so you can add Power BI to your next web project.
You are a DBA and have a few years’ experience, but you are having performance problems in your SQL Server environment.

In this demo-only session, you will learn how Premier Field Engineers at Microsoft troubleshoot performance problems and what tools and scripts they use. We will take a look at tools and scripts like SQLDiag, SQLNexus, PAL, and BPCheck.
For years we have had to worry about all of the hardware we need to run our applications but now Microsoft will take care of it for us. In this session we will build a complete application and deploy into production without worrying about servers.

This session will be an exposé of the latest Microsoft serverless cloud technology:

API Management
Azure Active Directory
App Service - Web Apps / Service Apps
Azure Functions
SQLDB



SQL Server has been running on Windows for years. Now Microsoft is making it available on Linux in order to provide a consistent database platform across Windows and Linux servers, both on-premise and in the cloud. This presentation will discuss the advantages of using SQL Server on Linux, comparing architecture, cost and performance. Several demonstrations of installing and maintaining SQL Server on Linux will be shown, along with an introduction to several useful Linux commands.

The participant will learn:
1. The advantages of using SQL Server on Linux, comparing architecture, cost and performance to Windows Servers.
2. How to install, maintain and back up SQL Server on Ubuntu Linux systems.
3. Several useful Linux commands to monitor and manage your SQL Server database.
Query tuning is key to peak performance in SQL Server databases. However, lots of developers and DBAs constantly struggle to pinpoint the root cause of performance issues and spend way too much time trying to fix them. In this presentation, I will share my tried and true best practices for tuning SQL statements and other issues by utilizing Wait Time Analysis, reviewing execution plans and using SQL diagramming techniques. In addition, several case studies will be used to demonstrate these best practices.

Regardless of the complexity of your database or your skill level, this systematic approach will lead you down the correct tuning path with no guessing, saving countless hours of tuning queries and optimizing performance of your SQL Server databases.

Learning objectives

  • Learn how to effectively use wait types to quickly identify bottlenecks and get clues on the best tuning approach
  • Quickly identify inefficient operations through review of query execution plans
  • Learn how to use SQL diagramming techniques to find the best execution plan
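As a hedged illustration of the first objective, a query along these lines (a generic example, not the presenter's own script) surfaces the top waits on an instance:

  -- Aggregate wait statistics since the last restart or stats clear
  SELECT TOP (10)
         wait_type,
         wait_time_ms,
         waiting_tasks_count
  FROM sys.dm_os_wait_stats
  WHERE wait_type NOT LIKE N'SLEEP%'   -- filter the obvious idle waits
  ORDER BY wait_time_ms DESC;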
Forget the stock answer of "just click a button to create a service and off you go in the cloud". What if you have hundreds of SQL Servers, some legacy applications, running on older versions, and no cloud subscription? Then the path is very different.

This session focuses on the key steps you need prior to data migration, such as connectivity, identity, target destination, data migration and running in the cloud. What is the decision point to lift and shift, modernise then shift, or go for the art of the possible? Is Azure SQL Managed Instance the right target? What is it? Should it be Azure SQL DB? Should I run SQL Server on Infrastructure as a Service? What's the difference? Which pathway fits my scenario?

We will take a real-world example of how you can take an application, migrate it using the Azure Data Migration Service, deep dive into Azure SQL Managed Instances and then use the built-in features to tune, protect and optimise the data tier.

Products: Azure Active Directory, Networking, Azure SQL Managed Instances, Azure Data Migration Service

Background: During the past year, I have been working on architecting several projects which involve moving large volumes of SQL Servers to the cloud. We've hit blockers, we've had success, and during the journey we learnt that the art of the possible sometimes scares businesses who are looking at the here and now.
Let's face it, Oracle to SQL Server migration is complex. Real-world challenges are just too many to be solved by SSMA. In this session, you will understand the teething issues and practical techniques to overcome them. You will learn how to leverage SSMA and at the same time solve a few problems manually without using SSMA. There will be a special focus on programmable objects like procedures and user-defined functions. You will see practical examples where Oracle code will fail in SQL Server after an SSMA migration, and how you can identify the issues and fix them manually. You will also learn how to deal with data type incompatibilities between Oracle and SQL Server. Functional testing of programmable objects is very critical post-migration. Towards the end of the session we will see how the tSQLt unit testing framework can be used to test migrated objects.
If you are a DBA and want to get started with Data Science, then this session is for you. This demo-packed session will show you an end-to-end Data Science project covering the core technologies in the Microsoft Data + AI stack. You will learn the basics of the R and Python programming languages, Machine Learning, and the real-world analytics and visualizations that businesses need today. In this session you will get a head start on each component in the Microsoft Data + AI stack and the possibilities that can be achieved with them. You will see a real-world application in action. We will also touch upon Cognitive and Bot computing.
Not every workload can benefit from In-Memory tables. Memory-optimized tables are not a magic bullet that will improve performance for all kinds of transactional workloads. Therefore, it is critical that you test and benchmark In-Memory performance for your SQL deployments before you decide to migrate disk-based tables to memory-optimized tables. In this session, you will learn:

a. Baselining current performance

b. How to identify the right candidates for In-Memory

c. Generate sample production data

d. Create simulated production workload

e. Test & benchmark In-Memory performance with simulations

f. Compare In-Memory performance with the baseline

You will also learn about a variety of tools and techniques that can be used in your proof of concept. Before you come to this session, please watch Part 1 online from SQLBits 2017 (Telford).
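For orientation, a minimal sketch of what the candidate migration itself looks like; the table is hypothetical, and a MEMORY_OPTIMIZED_DATA filegroup is assumed to already exist in the database:

  -- A memory-optimized, fully durable replacement for a disk-based table
  CREATE TABLE dbo.SessionState
  (
      SessionId INT NOT NULL PRIMARY KEY NONCLUSTERED,
      Payload   NVARCHAR(4000) NOT NULL
  )
  WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);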
Encryption is a highly effective way of protecting data against cyber-crime, but only if cryptographic keys are kept securely. This session shows how Azure Key Vault manages the full key lifecycle from key creation to key deletion. Using live demos, we use Azure Key Vault to create and store encryption keys, then encrypt SQL Server data, and rotate the keys while the application is in use.  
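The SQL Server side of this pattern uses Extensible Key Management (EKM); a hedged sketch, with the file path and key names purely illustrative:

  -- Register the Azure Key Vault EKM provider with the instance
  CREATE CRYPTOGRAPHIC PROVIDER AzureKeyVault_EKM
  FROM FILE = 'C:\EKM\Microsoft.AzureKeyVaultService.EKM.dll';

  -- Reference an existing key held in the vault rather than creating one locally
  CREATE ASYMMETRIC KEY TdeKeyVaultKey
  FROM PROVIDER AzureKeyVault_EKM
  WITH PROVIDER_KEY_NAME = 'ContosoTdeKey',
       CREATION_DISPOSITION = OPEN_EXISTING;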
Date and time data types comprise one of the major SQL Server data type groups. Although included in just about every database system, date and time tend to be used less frequently than other data types. They can, however, cause a disproportionate number of bugs. This session considers the major issues that need to be considered when working with date and time. Demos include working with time zones, implicit conversions, and storing time to a high degree of accuracy.
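A small taste of the kind of behaviour the demos cover; the values and target time zone are illustrative:

  -- datetime2(7) stores time to 100-nanosecond accuracy
  DECLARE @utc datetime2(7) = SYSUTCDATETIME();

  -- AT TIME ZONE (SQL Server 2016+) makes time zone conversion explicit
  SELECT @utc AT TIME ZONE 'UTC' AT TIME ZONE 'GMT Standard Time';

  -- Comparing strings to date types triggers implicit conversion, whose
  -- result can depend on session language and DATEFORMAT settings
  SELECT CAST('2018-02-01' AS datetime);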
Are you tired of creating and updating the same SSIS packages over and over and over again? Is your wrist hurting from all that clicking, dragging, dropping, connecting and aligning? Do you want to take the next step and really speed up your SSIS development?

Say goodbye to repetitive work and hello to Biml, the markup language for Business Intelligence projects.

In this session we will look at the basics of Biml. First learn how to use Biml to generate SSIS packages from database metadata. Then see how you can reuse code to implement changes in multiple SSIS packages and projects with just a few clicks. Finally, we will create an example project that you can download and start with to speed up your SSIS development from day one.

Stop wasting your valuable time on doing the same things over and over and over again, and see how you can complete in a day what once took more than a week!
Is your Biml solution starting to remind you of a bowl of tangled spaghetti code? Good! That means you are solving real problems while saving a lot of time. The next step is to make sure that your solution does not grow too complex and confusing - you do not want to waste all that saved time on future maintenance!

Attend this session for an overview of Biml best practices and coding techniques. Learn how to centralize and reuse code with include files and the CallBimlScript methods. Make your code easier to read and write by utilizing LINQ (Language-Integrated Queries). Share code between files by using Annotations and ObjectTags. And finally, if standard Biml is not enough to solve your problems, you can create your own C# helper classes and extension methods to implement custom logic.

Start improving your code today and level up your Biml in no time!
"Wait, what? Biml is not just for generating SSIS packages?"

Absolutely not! Come and see how you can use Biml (Business Intelligence Markup Language) to save time and speed up other Data Warehouse development tasks. You can generate complex T-SQL statements with Biml instead of using dynamic SQL, create test data and populate static dimensions, and even compare tables and views across multiple servers and databases.

Don't Repeat Yourself, start automating those boring, manual tasks today!
You are awesome at your job. You have great technical skills, stay up-to-date on new trends and have already achieved many of your goals. Are you ready to take your career to the next level?

Whether you are a junior developer, a senior database administrator or a chief architect, you can always advance your career further. By becoming a volunteer, you will get invaluable experience while developing your soft skills, building your personal brand and expanding your network. Maybe you will even find your dream job along the way?

In this session we will explore volunteer opportunities and how they can help advance your career. There is something for everyone! From helping others through social media, to sharing your knowledge by blogging or speaking, to organizing small or large events. Volunteer to do something you already love or volunteer to develop a specific soft skill. Either way, you are guaranteed to learn, grow and better position yourself for the next step in your career.
In theory there's no difference between theory and practice. In practice there is. And so it is with women and IT. In theory it should make no difference, in practice you're the only female in the room. Again. I'd like to say it's getting better, and maybe it is, but not by a lot. This is one woman's view of the gender bias in IT. And what we can all do about it.
Technology is not much use if it is not applied in a productive and cost-effective manner.

This session is a personal view of my use of different methodologies over 30 years in systems development, starting with JDI in the early days, through SSADM, DSDM (RAD), AGILE and BLOCK methods of project delivery.

The session will also include Lessons Learned from the Block delivery of a DataMart using the SQL Server BI toolset. 
You’re a DBA or Developer, and you have a gut feeling that these simple queries with an egregious number of columns in the SELECT list are dragging your server down.

You’re not quite sure why, or how to index for them. Worst of all, no one seems to be okay with you returning fewer columns.

In this session, you’ll learn why and when queries like this are a problem, your indexing options, and even query tuning methods to make them much faster.
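One of the indexing options in question is sketched below; the table and column names are hypothetical. A covering index INCLUDEs the extra columns at the leaf level, so the query avoids key lookups without widening the index key:

  CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
  ON dbo.Orders (CustomerId)
  INCLUDE (OrderDate, Status, TotalDue);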
How do you move classic SSIS packages to the cloud for the ETL process? Azure offers Data Factory, Runbooks, Logic Apps and Functions. What is hidden behind the individual services, and what can you do with them? The examples here show how these components can be assembled to manage a DWH in the cloud.
Azure offers a variety of services that can be combined to form a BI solution in the cloud. What options does Azure currently offer to create a modern BI architecture? The components currently available range from Azure SQL DB and SQL DW to Data Factory, Stream Analytics, Logic Apps, Analysis Services and Power BI, to name a few. This is a very good toolbox with which you can achieve your first successes very quickly. Step by step, you will learn how to create the classic ETL in the cloud and analyze the results in Power BI.
Who doesn't know the problem: you're sitting at the bar and just can't decide which cocktail to order?
Cognitive Services offers three APIs - Face, Emotion and Recommendations - that can help you. How do you best combine these services to get a suggestion for your cocktail?
With Cognitive Services, Microsoft offers a large playground for young and old. Here you can test to your heart's content what may be in everyday use tomorrow. With the various building blocks such as the Bot Framework, Emotion, Face, Text Analytics and Recommendations, you can put together impressive applications in a short time. Come along on a little trip around this playground.
The Query Store has now been around for a little while, but there are still lots of people questioning what to do with it. We will walk through some of the common real-life scenarios in which the Query Store can help you find and fix everyday performance problems. We will also cover the necessary T-SQL along with the built-in tools and reports that you should be familiar with on a day-to-day basis. See how the Query Store can be an effective tool in your arsenal when it comes to poor query performance.
TempDB is not your ordinary user database and should definitely not be treated like one. Its usage patterns dictate that configuration, monitoring and usage be done in a different way to get the best performance. We will see why aspects such as configuration and file placement play such a key role and why you need to plan ahead for TempDB. See how to detect the memory and space usage associated with the various users in TempDB, along with some of the most common performance-related scenarios that you will encounter with a well-used TempDB database. A poorly configured or under-managed TempDB can affect the performance of your entire instance. Come away with the tools and knowledge to get that under control.
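For example, a query along these lines (a generic sketch, not necessarily the presenter's) breaks down current TempDB space by consumer:

  -- Space reserved in tempdb by user objects, internal objects and the version store
  SELECT SUM(user_object_reserved_page_count)     * 8 AS user_objects_kb,
         SUM(internal_object_reserved_page_count) * 8 AS internal_objects_kb,
         SUM(version_store_reserved_page_count)   * 8 AS version_store_kb
  FROM tempdb.sys.dm_db_file_space_usage;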
There are tools available from Microsoft, as well as from a number of third party vendors, to ease the process of integrating database unit testing into your development environments and your continuous delivery pipeline.

This session isn't about any of those tools, but is an attempt to get back to basics - with the open source tSQLt framework - and address questions such as

Why is database unit testing difficult?
What do I need to test?
What do I not need to test?
What does a "good" unit test look like?
What tests should I be writing anyway?

This session will incorporate real-life experiences with tSQLt, as well as lessons that can be learned from other testing frameworks and other programming languages.
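To ground the discussion, here is a sketch of what a small tSQLt test can look like; the Orders table and OrderTotal function are hypothetical stand-ins for real code under test:

  EXEC tSQLt.NewTestClass 'OrderTests';
  GO
  CREATE PROCEDURE OrderTests.[test order total includes tax]
  AS
  BEGIN
      -- Replace the real table with an empty fake to isolate the code under test
      EXEC tSQLt.FakeTable 'dbo.Orders';
      INSERT INTO dbo.Orders (Net, TaxRate) VALUES (100, 0.20);

      DECLARE @actual MONEY = dbo.OrderTotal();  -- hypothetical function under test

      EXEC tSQLt.AssertEquals @Expected = 120, @Actual = @actual;
  END;
  GO
  EXEC tSQLt.Run 'OrderTests';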
This session will present an overview of Continuous Delivery and the benefits such approaches bring in the context of developing database applications with SQL Server. We will take a look at the features of SSDT that facilitate rapid database development, as well as the features of VSTS that can ease development in a shared environment.
Continuous Delivery for your Module - put your PowerShell modules in the gallery with just a commit.


An introduction session to show you how YOU can deploy your PowerShell module to a PowerShell Gallery (public or private) automatically


In this fast-paced, demo-heavy session we will:
- Use Plaster to create our module framework
- Use GitHub for Version Control
- Use Pester to develop our module with TDD
- Use VSTS to Build, Test (with Pester) and Release our changes to the PowerShell Gallery 


At the end of this session you will have the tools that you require to continuously deliver changes safely to your published PowerShell module for the consumption of others, all demonstrated using a module that uses the Microsoft Cognitive Services Face API to analyse beards.
I was required to prove that I had successfully installed and configured a backup solution across a large estate. I had a number of success criteria that had to be met. Checking all of these by hand (eye) would have been error prone, so I wrote a test to do this for me, plus an easy-for-management-to-read Power BI report, using PowerShell and Pester.
Pester will enable you to provide an easy-to-read output to quickly and repeatedly show that infrastructure is as expected for a set of checks. There are many use cases for this type of solution: DR testing, installation, first-line checks, presentation setups.
Microsoft released the SQL Server Diagnostics Preview, enabling you to take SQL Server dump files and upload them for analysis via an SSMS add-in. They also included some APIs, and I decided to write a PowerShell module for them. In this session we will explore the SQL Server Diagnostics Preview, show what you can do using the SSMS add-in, and then dive into the command line to show how you can accomplish the same using the PowerShell module SQLDiagAPI.
Although Microsoft Azure and the concept of Cloud Computing have been around for a number of years, they are still a mystery to many.

This talk offers an introduction to Microsoft Azure and the services it has to offer.



We will then go on to look in depth at Azure SQL Database: creating, configuring, scaling, connecting, using, securing, monitoring, uploading, scheduling, high availability and DR.
SQL Server Management Studio is at the heart of any SQL Server DBA or developer's day. We take it for granted, but rarely do we take a look at how we can customise or improve it to make our day-to-day work easier and more productive.

This presentation will take a look at many of the hidden features and shortcuts that you had forgotten about or didn't know were there, including some new features in SSMS 2017. You will be surprised what you can do!
At the end of this session you will have learnt at least one new feature of SSMS that you can use to improve your productivity.
Query Store is an exciting new feature in SQL Server 2016, enhanced in 2017.

It can automatically capture and store a history of queries, query execution plans and execution statistics that makes troubleshooting performance problems caused by query plan changes much easier.



In this session we will examine Query Store: its architecture, its configuration and how it can be used to solve performance problems.
When I read that Microsoft have added graph data to SQL Server 2017 I was intrigued as to what graph data is so I started doing some research.



This presentation is the culmination of my investigations.



If you design complex OLTP relational databases or have data that doesn't fit the rigid hierarchy of a relational database then this talk is for you.



You may be in for a surprise! Some of the questions we will look at:



What is Graph Data?

Who uses it?

What is it used for?

How does it compare to traditional relational database design?

What other companies support graph databases?

How does it work in SQL 2017?

Is there a new language to learn?

What is the so-called Kevin Bacon problem?

And finally…

Will it replace traditional relational database design within the next 10 years?
Beware of the Dark Side - A Guided Tour of Oracle for the SQL DBA



Today, SQL Server DBAs are more than likely at some point in their careers to come across Oracle and Oracle DBAs. To the unwary this can be very daunting and, at first glance, Oracle can look completely different, with few obvious similarities to SQL Server.



This talk sets out to explain some of the terminology, the differences and the similarities between Oracle and SQL Server and hopefully make Oracle not look quite so intimidating.



At the end of this session you will have a better understanding of Oracle and the differences between the Oracle RDBMS and SQL Server. 



Although you won’t be ready to be an Oracle DBA it will give you a foundation to build on.
One of the most exciting capabilities of SQL Server 2017 is the ability to run the database engine inside a Docker container. This session will cover how to leverage this using the most popular CI engine in the open source community: Jenkins, as used by the likes of eBay and Netflix. The material will cover some of Jenkins' particularly powerful features, including:

continuous integration build pipelines as code, using both scripted and declarative notation
scale out build pipelines using build slaves
multi branch build pipelines.

This session assumes a basic working knowledge of containers.
In recent years, the idea of source control has become inextricably linked with git, the version control system created for the development of the Linux kernel.

Whilst the primitives of git are very simple, certain operations, including but not limited to branching, merging, resetting, rebasing, and reverting can be confusing to the uninitiated.

We will look at the most common developer interactions with git version control, using a mixture of command line tools, graphical clients, and IDE integrations, as well as covering how to extract ourselves from a few common difficulties.

We'll also discuss workflows for using git as part of a team, using the context of repository hosting services such as GitHub and Visual Studio Team Services.

As with any other language, you can write good DAX but you can also write bad DAX. Good DAX works fine; it is fast and reliable and can be updated easily. Bad DAX, on the other hand, is… well, just bad.

In this session, we will show several DAX formulas, taken from our experience as consultants and teachers, analyzing (very briefly) their performance and looking for errors, or for different ways of writing them. As you will see, writing good DAX means following some simple rules and, of course, understanding well how evaluation contexts work!
The topics covered will be: naming convention, variables, error handling, ALL vs ALLEXCEPT, bidirectional filters, context transition in iterators, and FILTER vs. CALCULATE.
How do you optimize a DAX expression? In this session we analyze some DAX expressions and Tabular models and, through the usage of DAX Studio and some understanding of the VertiPaq model, we will look at how to optimize them.

As you will see, most optimizations are the direct application of best practices, but the session has the additional takeaway of understanding what kind of performance you should expect from your formulas, and the improvement you might expect from learning how to optimize the model and the code.
Time intelligence is probably the most interesting feature of any analytical solution. Computing year-to-date, month-to-date or same-period-last-year is quite easy in DAX. However, DAX formulas start to get harder as soon as you need to include details such as holidays, working days, weeks and fiscal calendars. Moreover, you need to create the proper data model to make all the measures comparable over the same dates.
In this session, we are going to show how to compute classical time intelligence with the built-in DAX functions. Then, we will show some more complex time intelligence formulas, which require thinking out of the box, leveraging data modeling and querying techniques to produce interesting and useful formulas.
The Tabular model in Power BI and SSAS Tabular seems to offer only plain-vanilla one-to-many relationships based on a single column. In 2015, many-to-many relationships were introduced, yet the model seems poor when compared with SSAS Multidimensional. In reality, by leveraging the DAX language, you can handle virtually any kind of relationship, no matter how complex. We will analyze and solve several scenarios with calculated relationships, virtual relationships and complex many-to-many relationships. The goal of the session is to show how to solve complex scenarios with the aid of the DAX language, building unconventional data models.
DirectQuery is a feature of Analysis Services that transforms a Tabular model into a semantic layer on top of a relational database, translating any MDX or DAX query into a real-time request to the underlying relational engine using the SQL language. This feature has been improved and optimized in the latest versions, including Azure Analysis Services, extending support to relational databases other than SQL Server and dramatically improving its performance.
In this session, you will learn what the features of DirectQuery are, how to implement best practices in order to obtain the best results, and the typical use cases where DirectQuery should be considered as an alternative to the in-memory engine embedded in Analysis Services.
If the events in your model have a duration, that is, a start and an end date, then you might want to compute advanced metrics like the number of events that were active at a given time, or how many started or ended in a period. You can solve this scenario in different ways, depending on the size of the model and the complexity of the DAX code. In this session, we analyze the scenario, presenting different solutions and comparing them in terms of complexity and performance.
Heard of indexes but not sure how they work? We'll go over the basics of how indexes work and how they affect the performance of your queries. 

We'll cover the difference between clustered and non-clustered indexes as well as the concepts behind columnstore and rowstore indexes. 

After this session you will have a great foundation on this fundamental aspect of the SQL Server database engine.

You need to be able to write basic T-SQL for this session.
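For a taste of the syntax behind these concepts (a sketch with hypothetical table and index names):

  -- The clustered index defines the physical order of the table itself
  CREATE TABLE dbo.Customers
  (
      CustomerId INT NOT NULL,
      LastName   NVARCHAR(50) NOT NULL,
      CONSTRAINT PK_Customers PRIMARY KEY CLUSTERED (CustomerId)
  );

  -- A nonclustered index is a separate structure pointing back to the clustered key
  CREATE NONCLUSTERED INDEX IX_Customers_LastName
  ON dbo.Customers (LastName);

  -- A columnstore index stores data column-by-column, suiting analytical scans
  CREATE NONCLUSTERED COLUMNSTORE INDEX IX_Customers_ColStore
  ON dbo.Customers (CustomerId, LastName);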
Whatever your hypothesis may be, there is a solution with your Microsoft toolset: SQL Server 2017, R, Azure Machine Learning and Power BI.

This session provides the missing puzzle pieces, with a demo that uses all of the above tools for a given requirement in a single flow.
This gathering is for data scientists, data engineers, data professionals and data geeks to network, interact and discuss topics around data science.

Topics may include starting a data science career, learning data science, the future of data science and more.
Learn the basics of R programming and clarify your questions on the usage of R across various applications.

You will learn the R architecture, end-to-end usage scenarios, R syntax, and R models and packages to use with MS SQL Server and Power BI.
In this session you’ll learn everything you need to know about using Analysis Services Multidimensional as a data source for Power BI. Topics covered will include the difference between importing data and live connections, how SSAS objects such as cubes and dimensions are surfaced – or not – in Power BI, how to design your cubes so that your users get the best experience in Power BI, how MDX calculations behave, and performance problems to watch out for.
SQL Server 2017 introduces groundbreaking query performance and diagnostics enhancements to truly add intelligence to SQL Server.

From Adaptive Query Processing to Automatic Tuning, Query Store enhancements to new tools that can give quicker and deeper insights into query performance (such as SSMS Plan Scenarios and the built-in Performance Dashboard), come join us in the intelligent database world starting with SQL Server 2017.
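As one concrete example from this feature set, SQL Server 2017's automatic plan correction can be enabled per database; a minimal sketch, run in the database you want to tune:

  -- Force the last known good plan when a plan-change regression is detected
  ALTER DATABASE CURRENT
  SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);

  -- Inspect the tuning recommendations the engine has produced
  SELECT reason, score, details
  FROM sys.dm_db_tuning_recommendations;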
Take charge of any performance issue coming your way. "SQL Server is hurting!" Turn feelings into symptoms, and become the hero that saved the day.

Streamline the process of troubleshooting performance issues with new tools and capabilities, for faster insights and effective turnaround.
BPCheck came to be back in 2011, as a way to empower Microsoft support engineers onsite with customers to gain insight into best practices not being followed, and it evolved to include performance-based checks. This previously internal script was recently released to the Microsoft SQL Server GitHub repository and is now free to use. In this session you will learn how to leverage it for a comprehensive performance and health check of your SQL Server instance: how to read the information, how to interpret the results, and how to use it for a quick health check or a full, comprehensive performance check.

There are now a variety of ways to authenticate with your Azure SQL Databases. In this session, we'll explore how we can use Azure Active Directory (AAD) to authenticate with Azure SQL Database. We'll start by understanding the concepts behind AAD, followed by some coded examples and live demonstrations in .NET. So, if you're building a solution today that uses Azure SQL Database, and you want to know how to use AAD to authenticate securely with that database, this session is for you.
Do you know if your database's indexes are being used to their fullest potential? Do you know if SQL Server wants other indexes to improve performance?

 

Come and learn how SQL Server tracks actual index usage, and how you can use that data to improve the utilization of your indexes. You will be shown how to use this data to identify wasteful, unused and redundant indexes, and shown some of the performance penalties you pay for not addressing these inefficiencies. Finally, we will dive into the Missing Index DMVs and explore the art of evaluating their recommendations to make proper indexing decisions.
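A hedged sketch of the kind of usage-stats query involved (generic, not necessarily the speaker's own):

  -- Indexes that are maintained on writes but rarely used by reads
  SELECT OBJECT_NAME(s.object_id) AS table_name,
         i.name                   AS index_name,
         s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
  FROM sys.dm_db_index_usage_stats AS s
  JOIN sys.indexes AS i
    ON i.object_id = s.object_id AND i.index_id = s.index_id
  WHERE s.database_id = DB_ID()
    AND s.user_updates > (s.user_seeks + s.user_scans + s.user_lookups);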
We developers learn early on that indexing can help our queries. But do you understand how these indexes really work? 


In this session, we will explore the Index B-Tree structure and how SQL Server traverses this structure to retrieve your data. Next we will explore a variety of indexing strategies. Finally, we will practice together with interactive demos.


By the end of this session, you will be prepared to evaluate your queries and make effective indexing decisions.
Do you spend your days slinging T-SQL code? Want to improve your T-SQL game? If you answered yes, then this session is for you.

This demo-intensive session will showcase a collection of my favorite beginner and intermediate level tips and tricks. We will explore how to identify and fix some common T-SQL anti-patterns, my favorite SSMS productivity tricks, and clever solutions to some common but not easily-coded challenges.

This session targets both developers and DBAs; the only prerequisites are the desire to write better T-SQL code and the aim of living an easier life!
The ability for multiple processes to query and update a database concurrently has long been a hallmark of database technology, but this feature can be implemented in many ways. This session will explore the different isolation levels supported by SQL Server and Azure SQL Database: why they exist, how they work, how they differ, and how In-Memory OLTP fits in. Demonstrations will also show how different isolation levels can determine not only the performance, but also the result set returned by a query. Additionally, attendees will learn how to choose the optimal isolation level for a given workload, and see how easy it can be to improve performance by adjusting isolation settings. An understanding of SQL Server's isolation levels can help relieve bottlenecks that no amount of query tuning or indexing can address - attend this session and gain senior-DBA-level skills on how to maximize your database's ability to process transactions concurrently.
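To illustrate how easily isolation settings change behaviour (the Accounts table is hypothetical, and SNAPSHOT requires ALLOW_SNAPSHOT_ISOLATION to be ON for the database):

  -- Under READ COMMITTED, this read can block behind concurrent writers
  SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
  SELECT COUNT(*) FROM dbo.Accounts;

  -- Under SNAPSHOT, the same read sees a consistent row version without blocking
  SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
  SELECT COUNT(*) FROM dbo.Accounts;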
Virtualizing your business-critical SQL Servers should not imply that they will run slower than if they were physical. When properly architected and managed, virtual SQL Servers should be equally as fast as their physical counterparts, if not faster. However, if not properly constructed, silent and seemingly random performance killers can strike and significantly hurt your database performance. Background activity, improperly configured components, or silent performance bottlenecks can all strike without warning. But, you can get in front of these and solve them before they become larger problems!

This session is packed with many tips and tricks gained from over seventeen years of virtualization, infrastructure, and cloud experience for getting the most performance from your virtual SQL Servers. The major roadblocks to performance will be discussed and the knowledge gained will help you work with your infrastructure engineers so you can optimize the system stack for performance. Tools, techniques, and processes will be demonstrated to help you measure and validate the system performance of the key components underneath your data. Cloud directly applies, as the same infrastructure concepts on-prem matter in the cloud, too!
If your boss asked you for the list of the five most CPU-hungry databases in your environment six months from now for an upcoming licensing review, could you come up with an answer? Performance data can be overwhelming, but you can make sense of the mess. Twisting your brain and looking at the data in different ways can help you identify resource bottlenecks that are limiting your SQL Server performance today. Painting a clear picture of what your servers should be doing can help alert you when something is abnormal. Trending this data over time will help you project how much resource consumption you will have months away. Come learn how to extract meaning from your performance trends and how to use it to proactively manage the resource consumption from your SQL Servers.
Times are certainly changing with Microsoft's recent announcement to adopt the Linux operating system with the SQL Server 2017 release, and you should be prepared to support it. But what is Linux? Why run your critical databases on an unfamiliar operating system? How do you do the basics, such as backing up to a network share or adding additional drives for data, logs, and tempdb files?

This introductory session will help seasoned SQL Server DBAs understand the basics of Linux and how it differs from Windows, all the way from basic management to performance monitoring. By the end of the session, you will be able to launch your own Linux-based SQL Server instance on a production ready VM.
See how to use the latest SQL Server Integration Services (SSIS) 2017 to modernize traditional on-premises ETL workflows, transforming them into scalable hybrid ETL/ELT workflows in preparation for Big Data Analytics workloads in the cloud. We will showcase the latest additions to SSIS Azure Feature Pack, introducing/improving connectivity components for Azure Data Lake Store (ADLS), Azure SQL Data Warehouse (SQL DW), and Azure HDInsight (HDI).  We will also take a deep dive into SSIS Scale-Out feature, guiding you end-to-end from cluster installation to parallel execution, to help reduce the overall runtime of your workflows.  Finally, we will show you how to execute the SSIS packages on Azure as PaaS via Azure Data Factory V2.
Are you still asking yourself: what is big data? What is HDInsight and how can I benefit from it? Which type of HDInsight cluster should I use?
In this session I will explain in simple terms:
• When big data is needed and what it is
• What is Hadoop, its components and utilities
• What is HDInsight, its different cluster types and when to use one or the other
• How HDInsight integrates with other Azure Services
Migrating data to Azure SQL Data Warehouse, with tips and techniques to help you achieve an efficient migration. Once you understand the steps involved in a migration, we can practice them by running an example of migrating a sample database to Azure SQL Data Warehouse.
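As a flavour of the target platform's DDL (a generic sketch with hypothetical names, not necessarily from the session), tables in Azure SQL Data Warehouse are created with an explicit distribution choice:

  -- A hash-distributed, columnstore-backed fact table
  CREATE TABLE dbo.FactSales
  (
      SaleId     BIGINT        NOT NULL,
      CustomerId INT           NOT NULL,
      Amount     DECIMAL(18,2) NOT NULL
  )
  WITH (DISTRIBUTION = HASH(CustomerId), CLUSTERED COLUMNSTORE INDEX);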
This session introduces the Receiver Operating Characteristic (ROC) curve and explains its strengths and weaknesses in evaluating models. We will then show the use of ROC curves and performance measures in Azure Machine Learning Studio.
The data industry is exploding and developers and BI professionals are responding by delivering new and innovative solutions. But what happens when this is deployed to a live environment and as a DBA you are asked to look after it? This session will give an overview of some technologies you may be asked to look after and show you what you can do to monitor and diagnose issues.
Many companies need to be ITIL compliant or ITIL aligned but what does this mean for the day to day running of the business? This session provides an overview of the ITIL framework and the roles a DBA and Developer play in supporting it.
There are plenty of resources out there to help you learn R but most of these are targeted at development. This session gives an overview of R from the point of view of implementation and maintenance; specifically for the DBA. We will discuss architecture and versions and show you some basic administrative commands.
Extended Events are much more powerful than any other monitoring technology available in SQL Server. Despite this potential, many DBAs have yet to abandon Traces and Profiler. Partially because of habit, but mostly because the tooling around Extended Events was less intuitive until recently.


Now, it's easier than ever to set up, control and inspect Extended Events sessions with dbatools! Not only does it simplify your basic interaction with XEvents, but it also helps solve your day-to-day problems, such as capturing and notifying deadlocks or blocking sessions.


Join SQL Server MVP Gianluca Sartori and PowerShell MVP Chrissy LeMaire to see how PowerShell can simplify and empower your Extended Events experience. Say goodbye to #TeamProfiler and join #TeamXEvents with the power of dbatools.
In this session we will present lessons learned from helping Microsoft customers use Azure SQL DB for the most challenging workloads. We will show how the latest features of the service, including Managed Instance, help our customers successfully implement their solutions in Azure. We will also cover many frequently asked questions about Azure SQL DB, going beyond answers found in documentation.

Our target audience ranges from SQL Server users who are only considering Azure SQL Database, to experienced SQL DB users who want to learn more about new features and advanced scenarios.
Relational databases have been around for decades. They use a highly structured schema to store information in tabular form and are optimized to find answers about the data. One of the challenges relational databases struggle with is finding answers about relationships in the data.

A graph database uses graph theory to store information in a collection of nodes and edges. Graph databases are optimized to find answers about relationships. With graph data processing available in SQL Server 2017, we get the best of both relational and graph databases in a single product.

In this session the audience will learn what graph databases are, why you would use one, and how to use the new graph data processing extension of SQL Server 2017.
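As a flavour of the extension (a generic sketch with hypothetical node and edge tables), questions about relationships become pattern matches rather than chains of self-joins:

  -- "Friends of friends of Alice", assuming Person nodes and FriendOf edges
  SELECT DISTINCT p3.Name
  FROM Person AS p1, FriendOf AS f1, Person AS p2, FriendOf AS f2, Person AS p3
  WHERE MATCH(p1-(f1)->p2-(f2)->p3)
    AND p1.Name = N'Alice';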
What products are commonly bought together? How can a company increase sales by offering bundles of products? The current trend is to leverage Machine Learning and Artificial Intelligence to drive sales. But will a machine understand the business better than a business person who has been involved in their area for 20 years? Maybe. During the presentation, you will see how to develop a data model using SQL Server Analysis Services and Power BI for manual "attached sales" exploration and analysis. You will see how to prepare the SQL Server layer, the SSAS model and the Power BI report.
Successfully implementing DevOps means we need to start version controlling our databases and pushing those changes to lots of environments. How do you manage these requirements and start working with DevOps?
This session will cover how databases fit into the continuous delivery (CD) approach, and how that helps deliver changes faster and safer. We'll cover different methods of managing your databases' schema in version control, and how to track and deploy those changes to environments in your CD pipeline using applications like Octopus Deploy. We'll also touch on ways to get data into those environments using free and paid-for tools. 
Do you have large-scale SSIS platforms which experience bottlenecks due to the number of packages or the volumes of data that need to be processed?

Yes! 

Then you need to explore SSIS Scale Out, allowing multiple workers to process your workload. But what is SSIS Scale Out? In this session we will walk through the use cases, system setup and patterns for developing your SSIS packages to get the most out of this technology. At the end of this session you will have learned the key elements of this new technology and be in a position to assess whether it can help solve some of the problems you are facing.
GDPR is coming, and it applies no matter where you are if you are handling data on European data subjects. Laying a solid foundation of data security practices is vital to avoid the potential fines and damage to reputation that being non-compliant can bring.

Practicing good data hygiene is vital to meeting compliance requirements, whether it is GDPR, PCI-DSS, HIPAA or other standards. The fundamentals around data identification, classification, and management are universal. Together we will look at some of the key areas that you can address to speed up your readiness for meeting GDPR requirements, including what data is covered, principles for gaining consent, data access requests, and other key recommended practices.

By the end of this session you will be able to start the groundwork on getting your organization in shape for its journey to compliance. If you want to avoid the big fines (up to EUR 20 million or 4% of global turnover, whichever is higher), it is important to act early.
Far too many people responsible for production data management systems are reluctant to embrace DevOps. The concepts behind DevOps can appear to be contrary to many of the established best practices for securing, maintaining and operating a reliable database. However, there is nothing inherent to a well-designed DevOps process that would preclude ensuring that the information stored within your data management system is completely protected. This session will examine the various methods and approaches available to the data professional to both embrace a DevOps approach to building, deploying, maintaining and managing their databases and protect those databases just as well as they ever have been. We will explore practices and plans that can be pursued using a variety of tooling and processes to provide DevOps methodologies to the systems under your control. You can embrace DevOps and protect your data.
Getting started reading execution plans is very straight forward. The real issue is understanding the plans as they grow in size and complexity. This session will show you how to explore the nooks and crannies of an execution plan in order to more easily find the necessary information needed to make what the plan is telling you crystal clear. The information presented here will better empower you to traverse the execution plans you’ll see on your own servers. That knowledge will make it possible to more efficiently and accurately tune and troubleshoot your queries.
Moving your databases to the cloud through the use of Azure SQL Database and Azure SQL Data Warehouse can be challenging. These challenges are exacerbated by the fact that you simply don't have as much tooling available to manage and maintain your databases. Enter PowerShell. Through the use of PowerShell you can much more easily maintain your Azure databases. Further, once you start using PowerShell to maintain the databases, you'll be able to automate a lot more processes within your Azure environment. This session will provide you with the core knowledge to create your own PowerShell scripts, and it will give you a set of foundation scripts from which you can build your own. You don't have to feel limited in what you can do just because you're on Azure. PowerShell will empower you to get more done within your Azure environment.
With SQL Server 2017 now supporting Linux, anyone deploying SQL Server on Linux will need a high availability and disaster recovery strategy. Failover cluster instances (FCIs) and availability groups (AGs) are options, the same as on a Windows Server-based deployment. However, the underlying clustering mechanism is completely different. This session will cover and demonstrate how to plan, implement, and administer Linux-based AGs and FCIs, as well as teach the differences that exist between Linux and Windows Server-based architectures. If your organization plans to embrace SQL Server on Linux, this will be an essential session.
Windows Server 2016 and later have two new ways to enhance SQL Server failover cluster instances (FCIs): Storage Spaces Direct (S2D) and Storage Replica (SR). S2D is one of the biggest changes for FCI deployments in quite some time and could change how you approach FCIs. Storage Replica enhances disaster recovery scenarios. This session will not only show the features in use, but also how to plan and implement them, whether you are using physical or virtual servers (on premises or in the cloud).
The world of consulting can be rewarding on various levels. People often think they have the right stuff to be a consultant, but do you? Would you need to work for an agency or company that sells consulting services, or could you strike out on your own? Or are you always going to be someone who needs to be a full-time employee with a set schedule? What do you need to know to ultimately succeed if this is your career path? Becoming a consultant is about more than having the right technical skills. This session will cover the highs, lows, and realities of what it takes to be a successful consultant, based on my experiences and those of my colleagues over the years.
Provisioning dev environments is often a slow, complicated and manual process. Often devs simply don't have the disk space. And then there is GDPR.

You can solve many of these problems with virtualisation technologies and source-controlled PowerShell scripts. We'll show you how by talking you through:

DOCKER CONTAINERS
1. Defining containers
2. Configuring Windows Server 2016 to run containers
3. Running SQL Server containers
4. Creating custom container images
5. Sharing container images

REDGATE CLONES
6. Defining database clones
7. Configuring the SQL Clone server
8. Creating database images from backups or live databases
9. Provisioning clones to a container in one click

The session will explain concepts via slides which will be backed up by demos.
Learn common patterns for testing data, and the anti-patterns that trip developers up.

Data is a critical asset for many companies, but often it's not treated that way. During this session, we'll discuss common patterns for testing, validating, and monitoring your data to ensure that it is accurate and complete. This will include patterns for
  • data warehousing
  • data integration
  • data migration
We will also discuss the common pitfalls that organizations encounter when they start treating their data as an asset, including:
  • treating data testing like application testing
  • testing the wrong things
  • not supporting your data testing initiatives for the long term
Attend this session, and you will gain valuable information on ensuring your data is a real asset to your organization.
2009. John Allspaw and Paul Hammond deliver the session "10 deploys per day - Dev & ops cooperation at Flickr." In forty-six minutes they changed the way millions of people would think about software delivery for years to come. It didn't have a name yet, but DevOps was born.

Automation, Azure and NoSQL begin chipping away at traditional on-prem SQL Server DBA responsibilities. In 2013 Kenny Gorman declared "The DBA is Dead". For the record, we don't believe that, but a lot of people do.

We'll explain what DevOps is, where it came from, and its implications for databases - as well as some changes data folk need to make to stay relevant.
From 25th May, organisations that fail to implement appropriate technical and organisational measures to ensure and demonstrate compliance with the General Data Protection Regulation (GDPR) will be liable for a fine of €20M or 4% of annual global turnover, whichever is greater.

As a professional who works with data, it is wise to ensure that you understand GDPR.

We are not qualified to give legal advice. However, we do know a thing or two about data strategy and Database Lifecycle Management (DLM). In this talk we will bring your attention to some of the main requirements of GDPR and discuss how they can be met with effective DLM.
For several years Microsoft have promoted declarative, model-based SQL development with tools like SSDT. At the same time, people like Jez Humble, Dave Farley and Pramod Sadalage promote an iterative, migration-script-based approach, asserting that update scripts should be tested early and not generated by tools.

Presenters of "how to do database DevOps" sessions annoy me when they say one way is good and the other is bad. It depends.

I'll illustrate the limitations of each approach with a simple scenario. I'll describe which projects are better suited to a model or a migrations approach, and whether it's possible to get the best of both worlds.
In this session, I'll provide an overview of your options when looking to migrate your SQL Server environments to the cloud, from either on-site or another cloud provider. We'll then go through potential cloud infrastructure options such as cloud implementations, compute offerings, and security. After that, we'll dive into an overview of the main cloud providers and provide some examples of where they might be a good fit for various SQL Server environments, as well as what you might want to take into consideration for your migration, like your HADR strategy and whether your SQL Server data would be better suited to another solution that your chosen cloud provider offers.
In this session we'll go through some of the tools you can use to migrate your SQL Server environments to Azure. I'll cover some of the pros and cons of the various options, like the Azure Portal, PowerShell and Azure Resource Manager (ARM) templates, and we'll discuss some of the applications you can use, like Visual Studio.

This session will also include a quick demo involving SQL Server running on Linux - with a GUI as well, though.
You have heard about the dbatools PowerShell module, or perhaps you already use it in your day-to-day work.
Pester is a testing framework that can be used to validate anything. If you can PowerShell it, you can Pester it!


Let's join forces and use both modules; we will see how to validate SQL Server best practices, along with your company's own rules, with them!


A new database has been created? Is the 'AutoClose' option turned off? Is the data files' 'auto growth' option set to a fixed size?
Wouldn't it be awesome to receive an early-morning email alerting you to all the tests that are failing? Or even to see it on a dashboard?


This will be a real world demo session.
Do you manage one or many SQL Server Reporting Services instances? Do any of them have multiple folders, dozens of reports or hundreds of subscriptions? 


Historically, managing and/or migrating these subscriptions, reports and folders has been incredibly time-consuming. But what if you could leverage an open source PowerShell module from Microsoft to simplify these and other SSRS management tasks? And what if those tasks could be accomplished 500 times faster than through the web-based GUI?


Join this session and you'll see all of this in action using real-world scenarios!
I was required to prove that I had successfully installed and configured a backup solution across a large estate. I had a number of success criteria that had to be met. Checking all of these by hand (eye) would have been error prone, so I wrote a test to do this for me, plus an easy-for-management-to-read HTML report, using PowerShell and Pester.
Pester will enable you to provide an easy-to-read output to quickly and repeatedly show that infrastructure is as expected for a set of checks. There are many use cases for this type of solution: DR testing, installation, first-line checks, presentation setups.
Start from nothing and use Test Driven Development to write a PowerShell function that uses the Microsoft Cognitive Services API to analyse pictures. I will take you on a journey from nothing to a complete function, making sure that all of the code works as expected, is written according to PowerShell best practices and has a complete help system. You will leave this session with a good understanding of what Pester can do and a methodology to develop your own PowerShell functions
Want to become a community speaker or involved in the SQL Community?
Don't feel confident enough to try?
Need some advice and guidance?
We want to help you, we will help you

Join Richard and Rob (and some special guests) in a gentle conversational session where we will discuss and help to alleviate some of the common worries about joining the community in a more visible role and you can get advice and guidance on not only the methodology but also the benefits of becoming further involved in the SQL community either as a speaker, a volunteer or an organiser
Introducing the new scripting language for tabular models. Before SQL Server 2016, tabular models were wrapped in multidimensional constructs. TOM is the new native library for tabular; this makes it easy to maintain, modify and deploy your model. During this session I will go through and explain some examples and best practices for generating an SSAS tabular model by using the new TOM. I will spend some time showing and explaining a real-world example of pushing measure creation and changes to the key business stakeholders to ensure quick time to market. Main topics of the session will be:
* Understanding the model structure
* Solution in a business context
* Automation of build and release
* Introduction to Tabular Editor
* Code ready for deployment and usage when you get home
This session is intended as a deep dive into the Power BI service and infrastructure, to ensure that you are able to monitor your solution before it starts performing badly or when your users are already complaining. As part of the session I will advise you on how to address the main pains causing slow performance by answering the following questions:
* What are the components of the Power BI Service?
     - DirectQuery
     - Live connection
     - Import
* How do you identify a bottleneck?
* What should I do to fix performance?
* Monitoring
     - What parts to monitor and why?
* What are the report developers doing wrong?
     - How do I monitor the different parts?
* Overview of best practices and considerations for implementations
Moving your data warehouse to the cloud? In this session we will go through the considerations and components for data warehousing in the cloud, how these are used and integrated, and when to use which services, looking at:
- Data movement/migration
- Data storage
- Transformation
- Scheduling
- Deployment approach and methods

The second part of the session will focus on the differences and benefits of the different approaches (on-premises, IaaS, PaaS) from the following perspectives, to understand and choose the right approach:
- Pricing
- Scalability
- Flexibility
- Strategic considerations
With Power BI and a new enterprise reporting platform, it is necessary to incorporate data governance and data stewardship to avoid dataset-hell and to strengthen the data culture of your company.

As part of this session I will also cover the architectural considerations which should be taken into account, how performance can become a pain point, and how this should be addressed - especially for a company with branches all around the world.

We will go through the different ways of governing Power BI and working with it in your enterprise to manage the different assets within your organisation and leveraging a tool like Azure Data Catalog.
The last couple of years have seen the emergence of "Big Data", "Cloud" and "Internet of Things".

Subsequently, at the 2016 Gartner Summit in Barcelona, Gartner declared the enterprise data warehouse (EDW) dead. Which of course is silly.
Sensible companies don't just throw money out of the window. But they do need to adapt and change when new opportunities arrive. Enter the hybrid data warehouse: combining the power of big data and cloud with your trusty EDW.

This session will take a look at a few different approaches to a hybrid data warehouse, with components such as SQL Server 2016, Azure Data Lake, HDInsight, Azure Analysis Services, Azure SQL Data Warehouse and PolyBase, and some scenarios where each approach might become relevant. And some pitfalls you need to know about along the way.
Azure is ready to receive all your event and device data for storage and analysis.
But which options in the Azure IoT portfolio should you use to receive and manage your data?
In this session I will explain the different options in the portfolio, take a closer look at how they work and what this means for you. Furthermore, I will take a closer look at the Azure Stream Analytics (ASA) language.
You will learn how to develop both simple and complex ASA queries, and how to debug. We will look at the possibilities, limitations and pitfalls in the Azure Stream Analytics language.
And finally look at the different input and output choices and when to use which one. This includes a look at how to build a live stream dashboard with Stream Analytics data in PowerBI. The session is based on real world project experiences and will use real data in the demos.
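By way of illustration, a minimal hedged sketch of an ASA query of the kind the session covers; the [iothub-input] and [powerbi-output] aliases and the eventTime field are invented for this example:

-- Average a sensor reading per device over 60-second tumbling windows.
SELECT
    deviceId,
    AVG(temperature) AS avgTemperature,
    System.Timestamp AS windowEnd
INTO [powerbi-output]
FROM [iothub-input] TIMESTAMP BY eventTime
GROUP BY deviceId, TumblingWindow(second, 60)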
We will learn what Azure SQL Database has to offer for High Availability and, in the process, build a follow-the-sun application that Azure SQL Database will support.
We will learn the pros and cons of each option we have, so that we know which is the best choice for our case.
We will see whether Azure SQL Database is really elastic or whether it has some rigid aspects - starting from the fundamentals, passing through the tools available, and ending with the management services.
In this session, we look under the covers of availability groups. Various demos show how to analyze functionality and performance in relation to availability groups using DMVs and Extended Events. This includes problem analysis during initial seeding.
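For a flavour of the DMV side of that analysis, a hedged sketch of a starting-point query against the documented AG DMVs (any AG-enabled instance):

-- Synchronisation state and queue sizes per database replica.
SELECT ar.replica_server_name,
       drs.database_id,
       drs.synchronization_state_desc,
       drs.log_send_queue_size,   -- KB of log not yet sent to the secondary
       drs.redo_queue_size        -- KB of log not yet redone on the secondary
FROM sys.dm_hadr_database_replica_states AS drs
JOIN sys.availability_replicas AS ar
  ON ar.replica_id = drs.replica_id;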
With the latest release of SQL Server Reporting Services (SSRS), more and more organizations are adopting it as an organizational reporting solution. As the deployments increase, so will the demand on the environment. Join this session as I explain how to architect and deploy a scalable and highly available solution. I will explain how to architect the back-end SQL Servers using technologies such as Clustering and AlwaysOn for High Availability and Disaster Recovery. In addition, I will demonstrate how to deploy scalable SSRS web front ends that leverage load balancers to properly manage scale.
Most of us are overwhelmed with data from all the different applications that we use on a daily basis. Bringing all the data together is often a very time-consuming and sometimes challenging process. Even further, attempting to analyze and visualize the data poses new challenges that are sometimes difficult or impossible to overcome. Now with Power BI this can all be made very simple. Individuals, ranging from novice information workers to advanced IT professionals, can quickly and easily transform, analyze and visualize data using a single tool, Power BI Desktop. In this course we will work through four main topics: Shaping Data, Building a Data Model, Visualizing Data and Using Power BI.
So, you think you know everything you need to know about the Power BI Report Server because you use traditional SQL Server Reporting Services.  Well, don’t believe it.  In this session we are going to discuss topics such as Configuring Kerberos to resolve connectivity issues. We will discuss different authentication types, when you need them, why you need them and how to use them.  We will then jump into configuring your report server to host Excel workbooks using Office Online Server.   Finally, we will demonstrate how to configure an SSAS Power Pivot instance for the Excel data model.  In addition to these topics, we will discuss other advanced topics such as connectivity and high availability during this demo-heavy session.
Your personal brand is how you distinguish yourself from other people but how you develop and nurture your brand can be the difference between stagnation and opportunity in your career. We'll discuss your branding, the importance of taking control of your brand, and how to begin developing your personal brand. By the end of this session, you'll have a better idea of how to begin building your personal brand into something that can take your career to the next level.
SQL Server Integration Services has not been receiving much press as of late.  However, with the release of SQL Server 2017 and Azure Data Factory version 2, that is quickly changing.  Join this session to learn more about the new features.  SQL Server 2017 introduces features such as scale-out capabilities running packages on Linux, and connectivity improvements.  In addition, deploying and running SSIS packages have been made available in Azure Data Factory.  Join this session to learn all about this and more.
Patrick and Adam answer a lot of questions. Those questions result in videos on their YouTube channel. This session combines some of the best challenges that they have dealt with including Power BI Desktop to the service, data source connectivity and Azure Analysis Services. Don’t miss out, there is a little something for everyone.
The SQL Server query optimiser is incredibly good at its job. It can generate good, fast execution plans for one-row, three-table OLTP queries, for billion-row, all-the-tables analytics queries, and for just about everything in between.

But it’s not perfect, and there are query patterns that will throw the optimiser for a complete loop and send query execution times through the roof.

We’re going to look at the more common of those query patterns and see exactly what it is about them that causes problems and we’ll look at a variety of ways to write the queries so that they work with the optimiser, not against it.

Along the way you’ll learn enough about the behaviour of the optimiser to be able to identify other problematic query forms before they cause problems.
One of the new features in SQL Server 2017 is Adaptive Query Plans, query plans that can change after the query starts executing.

This session will show why this is a radical departure from the way that things have worked until now and how it can improve the performance of some query forms. We’ll look at the places where adaptive query plans are used and compare the performance of queries using adaptive query plans to see just what kind of improvement it can make.
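By way of orientation, a hedged sketch of trying the feature yourself; dbo.Orders is an invented table, and the USE HINT shown is the documented switch for comparing against pre-2017 behaviour:

ALTER DATABASE CURRENT SET COMPATIBILITY_LEVEL = 140;  -- SQL Server 2017

-- With a columnstore index in play, eligible plans get an Adaptive Join
-- operator that chooses hash vs. nested loops at run time. To compare,
-- this documented hint switches the feature off for one query:
SELECT o.OrderID, o.CustomerID
FROM dbo.Orders AS o
WHERE o.OrderDate >= '20170101'
OPTION (USE HINT ('DISABLE_BATCH_MODE_ADAPTIVE_JOINS'));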
The SQLOS scheduler has been a core feature of SQL Server ever since its appearance as the User Mode Scheduler in version 7.0. In this session you will learn what makes it tick, where lines of responsibility are drawn between schedulers, workers and tasks, and how everybody has their own selfish ideas about fairness.

We'll pay particular attention to synchronisation: the need to synchronise, the balancing act between busy waiting and context switching, and examples of internal SQLOS synchronisation primitives. All of this will complement your existing mental model of SQL Server waits.

It is a very deep session (stack traces and obscure functions will be aired!), but not a broad one. As long as you have a healthy interest in either SQL Server or operating system internals, no specific background knowledge is assumed - we will build from first principles.
Where to start when your SQL Server is under pressure? If your server is misconfigured or strange things are happening, there are a lot of free tools and scripts available online. These tools will help you decide whether you have a problem you can fix yourself or whether you really need a specialized DBA to solve it. Those scripts and tools are written by renowned SQL Server specialists, and they provide you with insights into what might be wrong on your SQL Server in a quick and easy manner. You don’t need extensive knowledge of SQL Server, nor do you need expensive tools, to do your primary analysis of what is going wrong.
And in a lot of instances these tools will tell you that you can fix the problem yourself.
Agile BI promises to deliver value much quicker to its end users. But how do you keep track of versions and prioritize all the demands users have?
With Visual Studio Online (the cloud version of Team Foundation Server) it is possible to start for free with 5 users, with version control, work item management and much more.
In my session you will get directions for a quick start with Visual Studio Online. You will learn the possibilities of version control and how to implement Scrum work item management with all the available tools.
One of the hardest things to do in SQL is to identify the cause of a sudden degradation in performance. The DMVs don’t persist information over a restart of the instance so, unless there was already some query benchmarking (and there almost never is), answering the question of how the queries behaved last week needs a time machine.

Up until now, that is. The addition of the Query Store to SQL Server 2016 makes identifying and resolving performance regressions a breeze.

In this session we’ll take a look at what the Query Store is and how it works, before diving into a scenario where overall performance suddenly degraded, and we’ll see why the Query Store is the best new feature in SQL Server 2016, bar none.
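To set the scene, a minimal sketch of switching the feature on and reading what it captures, using the documented catalog views:

ALTER DATABASE CURRENT SET QUERY_STORE = ON;
ALTER DATABASE CURRENT SET QUERY_STORE (OPERATION_MODE = READ_WRITE);

-- Slowest plans by average duration, straight from the Query Store views:
SELECT TOP (10) q.query_id, p.plan_id, rs.avg_duration, rs.count_executions
FROM sys.query_store_query AS q
JOIN sys.query_store_plan AS p ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id = p.plan_id
ORDER BY rs.avg_duration DESC;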
BIML is well known for generating the ETL part of a data warehouse.
But did you know that with T4 templates in Visual Studio, tables, views, stored procedures and functions can be generated too?
To close the gap, with the new Tabular Object Model (TOM), generating a tabular model is as easy as using BIML for the ETL.
Now we can focus on transforming data into information instead of spending time on repeatable jobs which can easily be automated.
Authoring SSAS tabular models using the standard tools (SSDT) can be a pain when working with large models, because SSDT keeps a connection open to a live workspace database, which needs to be synchronized with changes in the UI. This makes the developer experience slow and buggy at times, especially when working with larger models.

Tabular Editor is an open source alternative that relies only on the Model.bim JSON metadata and the Tabular Object Model (TOM), thus providing an offline developer experience. Compared to SSDT, making changes to measures, calculated columns, display folders, etc. is lightning fast, and the UI provides a "what-you-see-is-what-you-get" model tree that lets you view Display Folders, Perspectives and Translations, making it much easier to manage and author large models. Combined with scripting functionality, a Best Practice Analyzer, command-line build and deployment, and much more, Tabular Editor is a must for every SSAS Tabular developer. The tool is completely free, and feedback, questions and feature requests are more than welcome.

This session will keep the PowerPoint slides to a minimum, focusing on demoing the capabilities of the tool. Attendees are assumed to be familiar with tabular model development. https://tabulareditor.github.io/

Managers have always needed to know which Microsoft data technologies and products to adopt and invest in – and until recently that was easy to do.  Now though, the Microsoft data platform is a large collection of technologies and services which all play a different part in an organisation’s technology strategy.

This session helps prepare today’s technology managers to make financial, development and operational decisions involving Microsoft data platform technologies.

It provides both product updates and impartial guidance on:

  • What is Microsoft’s strategy and how to adapt to it
  • On-premises vs. the cloud and how to play to the strengths of each
  • Relational vs. non-relational and the future role of each to prepare for
  • Transactional vs. analytical and why they have become so different


A little bit of knowledge about how SQL Server works can go a long way towards making large data engineering queries run faster.  Whether you use SQL Server as a data source or as a R or Python query processing platform, knowing how it processes queries, manages memory and reads from disk is key to making it work harder and faster.

This session introduces and demonstrates how SQL Server:

  • operates internally
  • performs select queries
  • uses indexes to make queries run faster 
  • executes machine learning code to make operational predictions

It then introduces some query tuning techniques to help heavyweight analytics queries run faster.

The session uses Gavin Payne’s 20 years’ experience of working with SQL Server – mostly making it run faster, stay secure and remain available.
Most data technology innovation for the last few years has been focussed around analytics.  Not just an evolution of existing technologies like Analysis Services or Reporting Services, but also the introduction of a whole new set of technologies and concepts.  Big data, machine learning, ELT instead of ETL etc. etc.  

If you don’t get to work with these new technologies, then it can be difficult to understand what they do, how they work and why they get used just from online training, never mind the media.

This session introduces the analytics technologies which the media speak about the most. It aims to help you form an opinion on whether they might have a role in a future part of your career, or whether they are just a new way of solving an old problem.


The content of the session includes:

  • SQL vs. NoSQL – what’s the actual difference?
  • Big data vs. small data – and what’s so special about Hadoop?
  • ETL vs. ELT vs. ECLT – why the need to change?
  • Machine learning – and how it solves two simple business questions
  • Data science – and how the maths world has met the tech world
Machine Learning uses lots of algorithms. Things like Boosted Decision Trees, Fast Forest Quantile Regression and Multiclass Neural Network. Fortunately, you don't have to know the ins and outs of the algorithms to use them. But where's the fun in that? In this session, I'll walk you through the mathematical underpinnings of three simple algorithms, linear regression, decision trees and neural networks, showing how they work and how they generate their results. Warning: This session contains Mathematics.
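For a small taste of that mathematics, linear regression's ordinary least squares fit has a closed form (in LaTeX notation):

\hat{\beta} = \arg\min_{\beta}\; \sum_{i=1}^{n} \left( y_i - x_i^{\top}\beta \right)^2 = \left( X^{\top}X \right)^{-1} X^{\top} y

where X stacks the feature vectors x_i as rows and y collects the observed values; decision trees and neural networks have no such closed form, which is part of what the session explores.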
Would having a good understanding of all the different versions, editions, OS builds, CPU & memory and database details in a single reporting place be useful? Many of my previous clients have found just that to be the case… Well now you can too, and it’s totally FREE.

• Come and discover what really exists in your environments
   Visualise valuable details about your SQL Servers
   (do you even know about all of them?)

• Claim your FREE Power BI SQL Server estate report
  Every attendee can sign up for their free report

• Useful for capacity planning, migrations, consolidations, licensing and more

• Extend the report to add additional reports and information

In this session, we showcase a sample report, giving you a flavour of what we can report on.
This is just the beginning of the journey; if you want to know more, ask us about our SQL Server & BI health checks that expand on these reports.
A workshop designed to reveal hidden gems about performance (or the lack of it) in your query plans and other DMVs. We start with a high-level view of what your SQL Servers are processing, then take ever deeper dives into the murky XML depths of your query plans, extracting tell-tale indicators and rich information to figure out what SQL code and performance horrors are behind that graphical plan.

Taking a hands-on lab approach (so you can follow right along on your own laptop), we discover those hidden messages and decipher the details that can be addressed.
Database security is one of those topics that too many misunderstand: they haven’t learned it to the right depth, or are just not sure how to approach designing a database security strategy.
During this session we will examine how to put the right level of security in place, and how to evaluate and define an appropriate database security model for the environment.
We will cover SQL Server’s security hierarchy and terminology, identify security risks (know your security responsibilities), determine when SA usage is appropriate and when it is not, and more.
Query Store is a new feature released in SQL Server 2016 and improved through subsequent cumulative updates and the SQL Server 2017 CTPs. It's a very useful and interesting feature that allows DBA's ( and non-DBA's :) ) to easily identify performance issues on queries, and also allows us to fix them in a fast and simple way. Just the ability to compare a previous execution plan to a new plan is a huge step towards understanding what may be happening in our instance. We can even tell the optimizer which plan we want it to use. All of this was either extremely difficult or outright impossible to do before. With this session I want to give you the insight and knowledge to get started using this new and wonderful feature that will change the way you do performance tuning.
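For context, the plan-pinning step mentioned above is a single documented procedure call once the query and plan IDs are known from the Query Store views (42 and 17 are invented example IDs):

EXEC sp_query_store_force_plan   @query_id = 42, @plan_id = 17;  -- pin the good plan
EXEC sp_query_store_unforce_plan @query_id = 42, @plan_id = 17;  -- release it later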
With the emergence of SQL Server 2017 on Linux, new challenges arise for High Availability and Disaster Recovery solutions. What kinds of features and add-ons exist on Linux to provide this type of solution, and what interoperability is there between instances in hybrid scenarios (with Linux and Windows)? How can we configure on Linux all the scenarios we know from Windows, and how can we implement such hybrid scenarios? Join me in this session where we will discuss all these points, as well as possible architectures and best practices for implementing HA/DR scenarios in SQL Server 2017 on Linux.
Using the DB Test Driven framework, it is possible to automatically generate unit tests for all objects in under 30 minutes. Learn how easy it is to:

- Generate tests
- Run them
- Analyse the results

Unit testing will provide secure foundations for your database code.
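The framework's generated tests aren't reproduced here, but as a hedged, framework-free illustration of what a database unit test boils down to (dbo.GetOrderTotal is an invented function under test):

BEGIN TRY
    DECLARE @actual money = dbo.GetOrderTotal(1);   -- invented function under test
    IF @actual <> 100.00
        THROW 50001, 'GetOrderTotal(1): expected 100.00', 1;
    PRINT 'PASS: GetOrderTotal returns the expected total';
END TRY
BEGIN CATCH
    PRINT 'FAIL: ' + ERROR_MESSAGE();
END CATCH;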
Super-fast queries are an essential part of any business process, but speed will never be more important than during a disaster when you need to restore from backup. Come and see how both backups and restores can be tuned just like a query. In this demo-intensive session, we will discuss the different phases of the backup and restore processes, how to tell how long each of them is taking, and which are the easiest to significantly speed up. You just might be surprised how simple it is to achieve dramatic results - cutting your backup and restore times by 75% or more is absolutely possible using the methods covered here.
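As a hedged illustration of the kind of tuning involved (the database name, paths and values below are examples to measure against, not recommendations), striping the backup and enlarging its buffers are often the first wins:

BACKUP DATABASE AdventureWorks
TO DISK = N'D:\Backups\AW_1.bak',
   DISK = N'E:\Backups\AW_2.bak'      -- stripe across two files/paths
WITH COMPRESSION,
     BUFFERCOUNT = 64,                -- more I/O buffers in flight
     MAXTRANSFERSIZE = 4194304,       -- 4 MB per transfer
     STATS = 10;                      -- progress report every 10%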
Whoever coined the term "one size fits all" was not a DBA. Very large databases (VLDBs) have different needs from their smaller counterparts, and the techniques for effectively managing them need to grow along with their contents. In this session, join Microsoft Certified Master Bob Pusateri as he shares lessons learned over years of maintaining databases over 20TB in size. This talk will include techniques for speeding up maintenance operations before they start running unacceptably long, and methods for minimizing user impact for critical administrative processes. You'll also see how generally-accepted best practices aren't always the best idea for VLDB environments, and how, when, and why deviating from them can be appropriate. Just because databases are huge doesn't mean they aren't manageable; attend this session and see for yourself!
Microsoft Azure SQL Data Warehouse is a fully managed relational data warehouse-as-a-service. It is the industry's first enterprise-class cloud data warehouse with on-demand compute scaling and pause capability. In this session, we will provide an overview of the product and share best practices for schema design, data loading, and performance monitoring and tuning from our learnings and experiences.
We have been working with customers adopting SQL on Linux for many months since the public preview. We will discuss some of the customer scenarios, blockers, challenges and resolutions. We will also look at optimal configurations and troubleshooting scenarios encountered during this journey with customers.
Internet connectivity to everyday devices such as light bulbs, thermostats, smart watches, and even voice-command devices is exploding. These connected devices and their respective applications generate large amounts of data that can be mined to enhance user-friendliness and make predictions about what a user might be likely to do next. This demo-heavy session will show how to use simple devices and sensors with the Microsoft Azure IoT suite, ideal for collecting data from connected devices, to learn about real-time data acquisition and analysis in an end-to-end, holistic approach to IoT with real-world solutions and ideas.
U-SQL. You keep hearing about this new language - that it is supposed to be the next great analytical language to take over the world. What is it, how does it work and what can you do with it? This demo-heavy session will take a detailed tour through the insides of U-SQL and its rich analysis capabilities. We will look at its language structure, its extensibility, and its ability to query both structured (Azure SQL DB and SQL Data Warehouse) and unstructured (Azure Data Lake) data through a distributed and scalable model. This session will also look at the benefits of Azure Data Lake as an analytical data store for data of all types.
As data warehouses become more advanced and move to the cloud, Master Data Management is often bottom of the list. Being tied to an IaaS VM solely for MDS feels like a big step in the wrong direction! In this session, I will show you the secret of ‘app-iness with a cloud alternative which pieces together Azure and Office 365 services to deliver a beautifully mobile ready front end, coupled with a serverless, scalable and artificially intelligent back end.

Attendees of this session should have a basic understanding of:

  • Azure Services (Data Lake, SQL DB,Data Factory, Logic Apps)
  • SQL Server Master Data Services
In times where agile is no longer a hype but reality, being able to automatically deploy an application to any environment is a must. For ‘standard’ applications, tool support for automated builds, automated testing and automated deployment is great, making continuous delivery relatively straightforward and easy. But not for the data engineer and BI developer… Lack of full tool support and best practices makes continuous delivery of data-intensive applications challenging. Using the Microsoft SQL Server and TFS stack, we at Info Support developed a way to deal with these challenges. In this session you will learn how we did it!
The JavaScript Object Notation format, known simply as JSON, has now become a standard in data interchange between client-server applications; it is also used to store information in non-relational databases, and the SQL Server 2016 engine handles it natively.
However, not everyone has moved to the latest version of SQL Server yet, and it is possible to process the JSON format even with earlier versions: this session gets you comfortable with the format and illustrates the non-native approaches that let you be ready, with a few simple moves, at the moment of the upgrade to 2016 (or later).
Finally, with a few simple examples and a real case study, you will find some tips on how to use JSON in an advanced way with SQL Server 2016.
This session is a special and interactive developer-to-developer format.
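For readers new to the native support, a minimal SQL Server 2016+ sketch of both directions:

-- Shred a JSON array into rows and typed columns with OPENJSON:
DECLARE @j nvarchar(max) = N'[{"id":1,"name":"Anna"},{"id":2,"name":"Luca"}]';

SELECT j.id, j.name
FROM OPENJSON(@j)
WITH (id int '$.id', name nvarchar(50) '$.name') AS j;

-- And produce JSON from a result set with FOR JSON:
SELECT id = 1, name = N'Anna'
FOR JSON PATH;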
The proper way of storing encrypted data is to encrypt it on the client and send it to a server that doesn't know how to decrypt it. However, this solution lacks a simple way of searching through the encrypted data once it's on the server. You can do equality checks, but that's where most applications stop. But what do you do if you have thousands of text documents you need to search through? Getting them all to the client and decrypting them there is simply out of the question, as it would slow the system down to a crawl due to latency.

In this session we'll take a look at a few algorithms that enable you to search through encrypted text data on the server, without decrypting anything, and return only the search results to the client - with performance in mind.
Just the way a search should be performed.
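One family of techniques in this space is the "blind index": the client encrypts the document and also stores a keyed hash (e.g. an HMAC) of each searchable keyword, so the server matches hashes without ever seeing key, keyword or plaintext. A hedged, simplified T-SQL sketch of the shape of the idea (real designs need a proper HMAC and key management; all names are invented):

CREATE TABLE dbo.Document (
    DocId      int IDENTITY PRIMARY KEY,
    CipherText varbinary(max) NOT NULL            -- encrypted on the client
);
CREATE TABLE dbo.DocumentKeyword (
    DocId       int NOT NULL REFERENCES dbo.Document,
    KeywordHash binary(32) NOT NULL,              -- keyed hash computed client-side
    INDEX IX_KeywordHash (KeywordHash)
);

-- The client sends only the keyed hash of the search term:
DECLARE @searchHash binary(32) = 0x11;            -- placeholder for HMAC(key, N'invoice')
SELECT d.DocId, d.CipherText
FROM dbo.DocumentKeyword AS k
JOIN dbo.Document AS d ON d.DocId = k.DocId
WHERE k.KeywordHash = @searchHash;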
Understanding how to reduce the attack surface area of applications and SQL Server environments is imperative in today's world of constant system attacks from inside and outside threats. Learn about methodologies for improving security-related development practices, backed by real-world examples: securely accessing a database, properly encrypting data, using SSL/TLS and certificates throughout the system, guarding against common front-end attacks like SQL Injection (SQLi) and Cross-Site Scripting (XSS), and more. This session will include both T-SQL and .NET code to give you an overview of how everything works together.
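As a one-line taste of the SQLi defence, the difference between concatenating input and binding it as a parameter (dbo.Users is an invented table):

DECLARE @userInput nvarchar(50) = N'O''Brien';  -- imagine this came from a web form

-- Vulnerable (do not do this):
-- EXEC (N'SELECT * FROM dbo.Users WHERE Surname = ''' + @userInput + N'''');

-- Parameterised and safe:
EXEC sp_executesql
     N'SELECT * FROM dbo.Users WHERE Surname = @surname',
     N'@surname nvarchar(50)',
     @surname = @userInput;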
SQL disk configuration and planning can really hurt you if you get it wrong in Azure. There is a lot more to getting SQL Server right on Azure VMs than next-next-next. Come along and dive deeper into Azure storage for SQL Server. Topics covered include:
  • SQL storage capacity planning concepts
  • Understanding storage accounts, VM limits, and disk types
  • Understanding and planning around throttling
  • Benchmarking
  • Optimal drive configuration
  • TempDB considerations in Azure
  • Hitting “max” disk throughput
Your data warehouse might not just have traditional flat file and data service sources. OData and REST-based XML (or JSON) is becoming more common as a data source. In this session we will walk you through real-world examples of integration with SAP and the European Central Bank REST-based data sources. Topics include:
  • Using .NET to read REST XML
  • Using Service Broker for message transport
  • Using SQL XML to shred and process XML messages
  • Building a metadata framework
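As a hedged taste of the XML-shredding step, .nodes() turns repeating elements into rows and .value() projects typed columns; the message shape below is invented for illustration:

DECLARE @msg xml = N'
<rates date="2018-01-05">
  <rate currency="USD">1.2045</rate>
  <rate currency="GBP">0.8892</rate>
</rates>';

SELECT r.x.value('@currency', 'char(3)')        AS Currency,
       r.x.value('.', 'decimal(18,6)')          AS Rate,
       @msg.value('(/rates/@date)[1]', 'date')  AS RateDate
FROM @msg.nodes('/rates/rate') AS r(x);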
Microsoft VSTS offers a build and release service to fully automate your builds and releases. While SSIS is not officially supported, we will walk you through the process of setting up VSTS for automated builds of your BI projects, leading on to automated release and deployment. Topics include:
- VSTS overview
- Deploying agents
- Automating build
- Automating release
Learn how to build an Azure Machine Learning model, how to use, integrate and consume the model within other applications, and learn the basic principles and statistics concepts available in the different ML algorithms. If you want to know whether to choose a 'neural net' or a 'two class boosted decision tree', this session will reveal all!
The SQL Server Agent was always the place where we DBAs would schedule everything we need to run unattended. In the cloud, however, there are many tasks that need to run across the platform and not just on one virtual machine. And if you are using Azure SQL Databases, you don't even have the Agent available. PowerShell is the language of choice for automation, but where and how do you host the scripts, and how do you schedule their execution? This session is an introduction to Azure Automation using runbooks and Azure DSC, specifically aimed at DBAs. All demos will be based on typical tasks a "Cloud DBA" will need to perform.
In this demo-focused session, I will show how to tell a compelling story with data using publicly available data sources like UK price paid data, Ofsted school data, the Zoopla API and the Bing API. I will be creating some stunning data visuals using built-in Power BI Desktop visuals and custom visuals from the Office Store.

The session will include:
  • Working with APIs as data sources
  • Creating relationships between different data sources
  • How to tell a data story using Power BI drill-through functionality
  • How to use built-in visuals, and how to format them to get the desired output
  • Using custom visuals like maps and timelines
Get more out of your text data by analysing it with R - learn the concepts and the code needed to get awesome insight quickly.

Using the R package tidytext, I'll show you how to do common natural language processing tasks in R so you can get up to speed.
Anchor Modelling is a fantastic database modelling paradigm that uses sixth normal form (6NF) to store data and provides third normal form (3NF) views for ease of use. This session deep dives into all the concepts behind Anchor Modelling (and indeed databases generally!) and then takes you through how Anchor Modelling uses these concepts to move away from the traditional data warehouse paradigm to deliver a purely additive, agile database.
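For a hedged, minimal flavour of the 6NF-storage / 3NF-view idea (all names invented): each attribute lives in its own table with validity metadata, and a view reassembles the familiar shape.

CREATE TABLE dbo.AC_Customer (            -- the anchor: identity only
    CustomerId int IDENTITY PRIMARY KEY
);
CREATE TABLE dbo.AT_Customer_Name (       -- one attribute, one table (6NF)
    CustomerId int NOT NULL REFERENCES dbo.AC_Customer,
    Name       nvarchar(100) NOT NULL,
    ValidFrom  datetime2 NOT NULL,
    PRIMARY KEY (CustomerId, ValidFrom)
);
GO
CREATE VIEW dbo.Customer AS               -- the 3NF view users query
SELECT a.CustomerId, n.Name
FROM dbo.AC_Customer AS a
LEFT JOIN dbo.AT_Customer_Name AS n
       ON n.CustomerId = a.CustomerId
      AND n.ValidFrom = (SELECT MAX(ValidFrom)
                         FROM dbo.AT_Customer_Name
                         WHERE CustomerId = a.CustomerId);
GO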
Learn how to build a bot that will answer questions asked by users. Learn how to customise it and embed it on your website. Learn how to do all of this without having to write a single line of code. I’ll be using Microsoft’s qnamaker.io site to build an FAQ bot and then putting it live with an Azure Bot Service and the Web Control. The end result is a little snippet you can add to any web page, making implementation a breeze. You can follow along as I build the bot live, or just soak it all in. Either way, you’ll see how easy it is to build a bot and you’ll know the next steps to follow to start building more complex bots.
DevOps is a movement focused on improving quality and time to deliver value by tackling the thorny issues of infrastructure, testing, integration, and deployment. These are big issues that have faced the data & analytics world for years, and tools have been slow to be delivered. This is changing though, so now we can start using the concepts from DevOps and applying them to analytics. Taking you through the principles, the tools, and the journey to DataOps, this session will help you do better work with data.
Embedding your R (and soon Python!) models in SQL Server enables you to add predictive capabilities to your applications and your analytics without adding expensive components or going outside your network via expensive API calls. In this demo-packed talk, you’ll see how you can go from a model built in R to making predictions on the fly in SQL Server 2016, SQL Server 2017, and Azure SQL.
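The documented entry point for in-database R is sp_execute_external_script, which must be enabled first. A toy train-and-score sketch (the model and columns are invented; one-time setup may also require a service restart on SQL Server 2016):

EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
EXEC sp_configure 'external scripts enabled', 1; RECONFIGURE;

EXEC sp_execute_external_script
     @language = N'R',
     @script = N'
         model <- lm(y ~ x, data = InputDataSet)               # train on the fly
         OutputDataSet <- data.frame(pred = predict(model, InputDataSet))',
     @input_data_1 = N'SELECT x = CAST(n AS float), y = CAST(n * 2 AS float)
                       FROM (VALUES (1),(2),(3),(4)) AS t(n)'
WITH RESULT SETS ((pred float));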
This session will look at how Power BI dashboard developers can use R to solve data import and data visualisation challenges. By the end of this session, you’ll know how you can use R to connect to more data sources, do sophisticated data transformation easily, avoid spatial data point limitations, and build custom graphics.
We can be better at our jobs if we have a good grasp of basic statistics.


It doesn't matter if you're a DBA looking to understand query plan performance, a data warehouse person needing to come up with ETL load time estimates, or an analyst needing to report figures to managers. Statistics can help you all.


If only maths classes hadn't been so darn boring!


Instead of going all mathsy, we'll be doing some real-time data capture and taking an intuitive and visual approach through summary statistics right up to understanding how to produce simple predictive models.


By the end of the session, you'll understand concepts like sampling, error, regression, and outliers - important day-to-day stuff and a great base upon which to build. You'll also wonder how people could have made it seem so hard for so many years.
Yes, the cloud is cool. Yes, it can be easy. But... will it save or even make the company money? That is the all-important question!

Learning to build a financial business case for a technology decision is a smart move. Transform the techy topic that C-suite people don't feel like they can discuss into something they can discuss by making it a financial topic. Help people make the right decision for the business.

In this talk, we'll look at how we can determine the Total Cost of Ownership (TCO) and any potential Return on Investment (ROI) for a variety of scenarios, from starting in the cloud, migrating to the cloud, and staying on premises.

We'll see if there are any tipping points when one or the other becomes sensible. We'll take into account basic principles like the future value of money, and even incorporate a measurement of opportunity costs for implementing each solution.

By the end of this talk, you'll have a grasp of basic accounting and you'll be able to talk business i.e. money!
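For the future-value-of-money point, the standard discounting formula that such TCO/ROI comparisons build on (in LaTeX notation):

\mathrm{NPV} = \sum_{t=0}^{n} \frac{C_t}{(1+r)^t}

where C_t is the net cash flow in year t and r is the discount rate; comparing the NPV of the cloud and on-premises cash flows is what reveals any tipping points.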
So you're thinking about implementing a data science project in your business?

You might be considering one or all of these options:
  • Hiring a data scientist
  • Using existing staff
  • Engaging a consultant
Like with most things in business, if you fail to plan, you plan to fail.

Starting out on a project without adequate planning risks wasting time and money when you hit unexpected roadblocks. Additionally, putting a data science project into production without sufficient testing, monitoring, and due diligence around legal obligations can expose you to substantial problems.

I want to help you avoid as much risk as possible by taking you through my data science readiness checklist, including topics like:
  • Application development processes and capabilities
  • Data platform maturity
  • Use of data products within the business
  • Skillsets of existing business intelligence and other analytical teams
  • Analytical teams' processes and capabilities
  • IT and analytical teams' alignment to business goals
  • Recruitment, induction, and professional development processes
  • Legal, ethical, and regulatory considerations
Armed with the checklist, there'll be fewer "unknown unknowns" that could derail your project or cause extra cost. Let's get planning!

Power BI: Minutes to create and seconds to impress. 

Yes, it takes minutes to create and share Power BI reports. This means every user who has the relevant Power BI access can create workspaces, apps, reports, dashboards and schedules. With no deployment strategy, maintaining the Power BI service can become a terrible job.
In this session, I will cover how having a deployment strategy can make a terrible maintenance job seamless.

This session will include:
  • What happens when a user creates a Power BI app workspace
  • How to control user access
  • Working with different environments
  • Creating and sharing reports using Power BI Apps 
  • How to monitor dataset schedules and failure notifications
  • How to use the Power BI API to document your organisation's Power BI service
I'm a SQL Server DBA and a lot of my time is spent in Powershell these days. Database environments can be quite complex and in my attempts to automate setting up lab environments for (automated) testing I discovered the open source Powershell library called Lability. It has a slight learning curve and leans heavily on DSC which also has a bit of a learning curve. In this session I'll show you how to set up a fairly complex lab from start to finish and take you through my lessons learned.
Need to learn T-SQL but have no idea where to start?
Join us in this session that will be your guide into the world of writing good T-SQL code.
You'll learn the basics of writing T-SQL statements, the gotchas you should be careful about when starting, and how not to fall into the traps most beginners fall into, like overuse of cursors and looping, writing overly complex queries, and not understanding how grouping works.
This will be a demo-heavy session with easy-to-understand explanations of why things work the way they work.
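As a hedged preview of the cursor trap (dbo.Product is an invented table): both blocks below double every price, but the set-based version is one statement and usually far faster.

-- Cursor/looping style (what beginners often reach for):
DECLARE @id int;
DECLARE c CURSOR FOR SELECT ProductId FROM dbo.Product;
OPEN c;
FETCH NEXT FROM c INTO @id;
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE dbo.Product SET Price = Price * 2 WHERE ProductId = @id;
    FETCH NEXT FROM c INTO @id;
END;
CLOSE c; DEALLOCATE c;

-- Set-based style (idiomatic T-SQL):
UPDATE dbo.Product SET Price = Price * 2;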
SQL Server and Azure are built for each other. New hybrid scenarios between on-premises SQL Server and Azure mean they don't have to exclude each other; instead, you can have the best of both worlds, reducing operational costs.
 

For example, by taking advantage of services like Azure Blob Storage or Azure VMs, we can increase the availability of our services or distribute data in smart ways that benefit our performance and decrease cost.


In this demo-heavy session, you will learn the strongest use cases for hybrid scenarios between on-premises and the cloud, and open a new horizon of what you can do with your SQL Server infrastructure.
Released in 2012, AlwaysOn Availability Groups has been improved version by version, and nowadays Windows Server and SQL Server 2017 together broaden the capabilities of your systems. With regard to High Availability and Disaster Recovery, we have more flexibility by taking advantage of the improved AlwaysOn Availability Groups, Windows Server Failover Cluster flexibility and, of course, Azure integration.


During this session we will explore those new possibilities, checking what is new, the current limitations and what we can build by taking advantage of all the improvements.
Everything in life can be hacked… Even SQL Server… Don't believe me? See for yourself… This is a demo-driven session, suited for DBAs, developers and security consultants. Both exploits and security recommendations to avoid them will be covered. Disclaimer: No actual crimes will be committed. Please do not send agents to my