4th - 7th May 2016

Liverpool


Sessions

These are the sessions submitted so far for SQLBits XV.

200
Dev
We all write SQL scripts but how do we know that what we write is returning the correct results?  In this session I will explain the importance of unit testing your code, what to test for and most importantly what free tools you can use to make this easy.  By the end of this session you will be equipped with the information you need to go and implement unit testing on your code so that you can confidently carry out stress-free system releases.
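As a flavour of the pattern, here is a minimal arrange-act-assert sketch in Python using the standard sqlite3 module; the dedicated tools covered in the session follow the same shape, and the table and query here are purely hypothetical examples:

```python
import sqlite3

def total_owed_per_customer(conn):
    """The query under test: sum unpaid invoice amounts per customer."""
    return conn.execute(
        "SELECT customer, SUM(amount) FROM invoices "
        "WHERE paid = 0 GROUP BY customer ORDER BY customer"
    ).fetchall()

# Arrange: build a throwaway database with known data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (customer TEXT, amount REAL, paid INTEGER)")
conn.executemany("INSERT INTO invoices VALUES (?, ?, ?)", [
    ("alice", 100.0, 0),
    ("alice",  50.0, 1),   # paid, must be excluded
    ("bob",    75.0, 0),
])

# Act + Assert: the expected result is known in advance,
# so any regression in the query fails loudly.
assert total_owed_per_customer(conn) == [("alice", 100.0), ("bob", 75.0)]
print("unit test passed")
```

Because the test data is created by the test itself, the check is repeatable and can run on every release.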
300
BI
In this talk I will show you an overview of the developer ecosystem in Power BI and share my experiences in the creation of Power BI Office Apps (Power BI Tiles and Send to PowerBI) and the community PowerShell module PowerBIPS.

In this presentation you will also learn how to: integrate your app data with Power BI, embed Power BI visualizations in your app and develop your own custom visualizations.
400
DBA
The word Kerberos can strike fear into a SQL DBA as well as many Windows Server Administrators.

What should be a straightforward and simple process can lead to all sorts of issues, and trying to resolve them can turn into a nightmare. This talk looks at the principles of Kerberos, how it applies to SQL Server and what we need to do to ensure it works.

We will look at
  • What is the purpose of Kerberos in relation to SQL Server?
  • When do we need to use it?  Do we need to worry about it at all?
  • How do we configure it?  What tools can we use?
  • Who can configure it?  Is it the DBA's job to manage and configure Kerberos?
  • Why does it cause so many issues?
Because, on the face of it, setting up Kerberos for a SQL Server is actually straightforward, but it is very easy to get wrong and then sometimes very difficult to see what is wrong. Preview here: https://www.youtube.com/watch?v=uO9NqxizT_8
300
DBA
Beware of the Dark Side - A Guided Tour of Oracle for the SQL DBA

Today, SQL Server DBAs are more than likely at some point in their careers to come across Oracle and Oracle DBAs. To the unwary this can be very daunting and, at first glance, Oracle can look completely different, with few obvious similarities to SQL Server.

This talk sets out to explain some of the terminology, the differences and the similarities between Oracle and SQL Server, and hopefully make Oracle not look quite so intimidating.

At the end of this session you will have a better understanding of Oracle and the differences between the Oracle RDBMS and SQL Server.

Although you won't be ready to be an Oracle DBA, it will give you a foundation to build on.
200
DBA
Putting Your Head in the Cloud – A Beginner's Guide to Cloud Computing and SQL Azure

Although Microsoft Azure and the concept of Cloud Computing have been around for a number of years, they are still a mystery to many.

This talk takes a look at Cloud Computing – what it is, the types of Cloud available and their advantages and disadvantages.

We'll then look at Windows Azure, and specifically SQL Azure DB, to see how to create and manage SQL databases in the Cloud. By the end of this talk you will be ready to put your head in the cloud and start taking advantage of what the cloud has to offer. Preview: https://www.youtube.com/watch?v=CApKdJJFRYw
200
BI
We all know about Big Data: Velocity, Volume and Variety. What about the missing 'V' - Visualisation? Join this session to see how you can bigviz your data with Power BI and open-source technologies, using Azure as a basis for your Big Data needs.

What does "big data" look like? It has to be more than beautiful; it needs to convey information and insights in a way that people understand. Further, people expect to be able to derive actionable insights from their data, so how can we make big data friendly to users?

In this session, we will look at a mix of technologies for visualising big data sources including Power BI and open source technologies, and ways of achieving BigViz harmony in your Big Data!
300
BI
Business analytics platforms have always been tools built for statisticians and data scientists. However, these tools are increasingly being directed at business analysts, and Gartner's rankings on its well-known "completeness of vision" and "ability to execute" axes now ride as much on "ease of use" as they do on offering advanced analytical algorithms. How does this change in landscape affect businesses?

Are you new to the world of business analytics? Are you taking over an existing analytics program, or starting one from scratch? This session will help you to understand how to craft a strategy, provision the right business analytical capabilities, and move towards actionable results. We’ll identify common pitfalls to avoid as you start or reinvigorate your business analytics program. In particular, we’ll explore how companies are using self-service and data discovery techniques to deliver more agile analytics using Azure Machine Learning.

Along with notes from the field, join me for a practical session with takeaways you can start using straight away.
200
BI
Does your organisation already use Tableau, or are you thinking about it? Join Tableau expert and book author, Jen Stirrup, as she demonstrates how you can use Tableau and Power BI side-by-side, as well as tips and tricks on using the two tools together in your organisation effectively.
300
DBA
Your company's data is probably the most important asset it has; if that data is lost, the company may not be able to continue trading and you will be out of a job.

One of the fundamental responsibilities of a DBA is to ensure that the databases you manage are safe and secure, not just protection from malicious attack but also safeguarding data from accidental unwanted changes by users.

In this talk we will look at the recommended SQL Server Security best practices such as compliance, security, design, auditing, backups and encryption that DBAs and developers can implement to keep your databases and instances secure.
400
BI
Anyone can type commands into R, but that is not the same as actually 'doing' statistics for analytics. People may even misuse those methods; it's an entirely different thing to really understand what's happening.

Knowledge is what really drives each phase of your analysis and helps you create effective models for the business to use in order to produce actionable insights. It can be difficult to see when someone is building faulty statistical models, especially when their intentions are good and their results look pretty! Results are important, and it's down to you to create models that are sound and robust.

In this session, we will look at modeling techniques in Predictive Analytics using R, using our boozy day at the Guinness factory as a backdrop to understanding why statistical learning is important for analytics today.
Drinking Guinness is optional, but admittedly might be preferred for this intensive session.
300
BI
Cortana Analytics is a fully managed big data and advanced analytics suite that enables you to transform your data into intelligent action. In this talk, you will get an underpinning of analytical modelling, along with demos of visualizing data and getting insights via Power BI and the challenges of building a SaaS service at scale. Major features of new Power BI will be covered with demos, including natural language query, and others, in the context of Cortana Analytics. Come and learn what model to use when, and how to use data storytelling to display the results meaningfully.

Deep analytics and analytics-based intelligent action is available to everyone in the form of Cortana Analytics, and visualised with Power BI. Join this session to learn about this exciting technology to obtain organisational insights from your data.
300
Car
If you think BI Projects are hard.... try running an Internet of Things (IoT) Project! Confused by architecting an Azure IoT based solution? Join this session, which is aimed at architects.

In this session, Jen shares her experience in running IoT projects from an envisioning perspective, setting strategy, and then executing operationally. Jen will take you through the lifecycle of an IoT project and will provide practical advice on architecting in Azure, managing and delivering your first IoT project.

She will also cover what can go wrong, and when - but most of all, she will help you to deliver your Azure-based IoT project successfully.
300
DBA
You've learnt the basics of cloud computing and taken a tour of Microsoft Azure.

It's now time to take a deeper look at using SQL Azure.

In this presentation we will have wall-to-wall demos on creating, configuring, connecting, using, securing, monitoring, uploading, scheduling and syncing your SQL Azure database.

We will specifically cover:
  • Storage Accounts
  • Firewall Rules
  • Linked Servers
  • Mobile Services Scheduler
  • Azure Automation
  • Azure Scheduler
  • DACPACs and BACPACs
  • SQL Azure Data Sync
By the end of this session you will be some way to becoming an Azure Jedi.

Preview: https://www.youtube.com/watch?v=WC94cTLhFIk
200
BI
Microsoft has released a bunch of data-related services on Azure in the last year and grouped them as the Cortana Analytics Suite (CAS). This session is a demo-based introduction to CAS. You will see Azure Event Hubs, Stream Analytics, Data Factory, SQL Data Warehouse, Data Lake, HDInsight, Machine Learning, Power BI and finally (ladies last this time...) Cortana in action in an end-to-end demo. The key features of each of these services are illustrated, as well as basic setup and configuration.
400
Dev
SQL is based on relational algebra, right? Relational algebra is based on set theory, right? So we would expect SQL to kind of respect common sense from set theory. Like: A union A has the same number of elements as A. Well, this is not always the case with SQL.

Come and have fun when we explore scenarios of simple SQL statements, where I guarantee you moments of inner WTF. 
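To make the set-theory point concrete, here is a small, hypothetical illustration using Python's built-in sqlite3 (the behaviour shown is standard SQL, so SQL Server behaves the same way): because tables are bags rather than sets, UNION deduplicates, so A UNION A has as many rows as DISTINCT A, not as A itself.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE a (x INTEGER)")
conn.executemany("INSERT INTO a VALUES (?)", [(1,), (1,), (2,)])  # a bag, not a set

def count(sql):
    """Row count of an arbitrary query."""
    return conn.execute(f"SELECT COUNT(*) FROM ({sql})").fetchone()[0]

print(count("SELECT x FROM a"))                            # 3 rows in A
print(count("SELECT x FROM a UNION SELECT x FROM a"))      # 2: UNION deduplicates
print(count("SELECT x FROM a UNION ALL SELECT x FROM a"))  # 6: UNION ALL keeps the bag
```

So A UNION A can have *fewer* rows than A, while A UNION ALL A has twice as many: neither matches the set-theoretic expectation once duplicates are in play.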
400
BI
In many data warehouses we model fact tables with measures based on attributes which we can count and do arithmetic upon. It is more difficult to handle fact tables with measures based on the length of intervals between events happening in the source systems. So questions like "What are the typical waiting times in our order process?" are seldom modelled in the data warehouse, especially if the event data comes from different source systems.

In this talk I will show you different techniques and models related to time: process mining, lean, six sigma, process data warehousing, relational temporal theory and SQL Server 2016 temporal tables.
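As a taste of the interval-measure problem, here is a hypothetical sketch using Python's sqlite3 and the LAG window function to turn a stream of order events into waiting times between process steps (T-SQL's LAG works the same way; the table and timestamps are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (order_id INTEGER, step TEXT, ts INTEGER)")
conn.executemany("INSERT INTO events VALUES (?, ?, ?)", [
    (1, "ordered",   0), (1, "picked",  30), (1, "shipped",  90),
    (2, "ordered",  10), (2, "picked",  20), (2, "shipped", 200),
])

# Waiting time = gap between an event and the previous event of the same order.
rows = conn.execute("""
    SELECT order_id, step,
           ts - LAG(ts) OVER (PARTITION BY order_id ORDER BY ts) AS wait
    FROM events
    ORDER BY order_id, ts
""").fetchall()
for r in rows:
    print(r)   # e.g. (1, 'picked', 30): order 1 waited 30 time units to be picked
```

The derived `wait` column is exactly the kind of interval measure that can then be aggregated into a fact table.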
200
Car
Want to start a new business? You have the great idea, but what about creating a business plan, website, invoicing, CRM, social media and all the other infrastructure you need to operate that great startup? Come and spend an hour with me and see how I did it.  It may help you.  Ask all the questions you have. I might even be able to answer them.
200
DBA
For far too long, I thought that statistics only contained information on table row counts. While they do contain that information, there is more to it than that. In this beginner session, we’ll go over statistics – how they are created, the different types of statistics that exist, how they’re maintained and how the Query Optimizer uses them. We will also touch on system tables and DMVs that will provide additional information on your statistics. We'll also go over the cardinality estimator changes in 2014. At the end of this session, you should have a better idea of how the query optimizer within SQL Server makes decisions on how to gather data.
100
Car
Over 90% of communication is non-verbal, and people form an opinion about strangers within the first 30 seconds of seeing them.  That doesn't give you much time to get your first impressions right with a customer when you first arrive at their office.
This session discusses what's going on sub-consciously when we meet people and describes the tips and techniques you can practice to help to create the right impressions every time.  We shall also discuss some of the competencies needed to succeed in customer engagements such as the ability to see through the technical problem and identify underlying issues.
Finally we'll have a look at some of the ideas you might like to think about in developing your own brand, and how combining this with the techniques already covered will accelerate your career development and success.
200
Car
Technology moves on at a pace, and not all of us can maintain that pace when we try to do everything. So just what will we be working on tomorrow? Let's discuss. Drawing on my experience and interpretation of the way the industry is going, I hope to spark some discussion on where we see our future work and what we think we will be doing. What tasks will cease to be, and what new things will we need to learn in order to be great at what we do? With Microsoft focusing on Cloud First, Mobile First, how will this impact us? What additional technologies, outside of what we are doing at the moment, should we consider? These and many other questions are things that we should look to decide on sooner rather than later; that way, when it comes to getting the job we want, we will be at the head of the line.
300
DBA
SQL Server 2016 is just around the corner, and with it come some great new features and enhancements to existing capabilities. In this session we will look at some of these and understand how they might change the way that we design and implement SQL Server solutions. We will focus on the SQL Server engine, covering topics including Operational Analytics, Temporal & Stretch Tables and Always Encrypted. We will also have a look at some of the old favorites, including enhancements to In-Memory OLTP (Hekaton) & AlwaysOn Availability Groups. By the end of this session you will have a jump-start on getting ready for the next version of SQL Server, whether you are an application developer, DBA or Business Intelligence developer.
400
Dev
DML operations mean lots of work for the database engine of Microsoft SQL Server. Understanding the details of a transaction may give you great benefits when planning workloads for INSERT, UPDATE and DELETE operations. This session will demonstrate the huge difference in data allocation between a heap and a clustered index. If you don't know the benefits of correct record size, or you are wondering how you could release allocated data pages in a professional way, this session may give you all the answers. We will cover:
  • data page allocation and how it differs between heaps and clustered indexes
  • the amount of transaction log generated when you update data in a heap and/or a clustered index
  • deleting data from a heap, and the huge difference from a clustered index
  • speeding up ETL processes by using the right strategy for INSERTs and DELETEs
  • Page Splits vs. Forwarded Records
  • do you really need a clustered index in a table? Pros and cons of clustered indexes for DML operations
300
DBA
If you ask an expert about using a clustered index you will ALWAYS hear: yes, you need a clustered index for your table. Last but not least, Microsoft Azure SQL Database requires clustered indexes. But clustered indexes are not always a good choice for your database solution, for several different reasons. This session will start a debate about the genius and madness of clustered indexes in your application, and will run several demos showing very clearly why a clustered index isn't the best choice in a number of workload scenarios. "There's no lunch for free"! Not using clustered indexes has ONE heavy drawback, and this drawback will be demonstrated too. The session goal is to make it easier for you to decide whether or not to use a clustered index!
400
DBA
With Availability Groups comes a wealth of new HA capability, however it brings with it a number of maintenance headaches. Let me show you some options that can help ease the pain. The old favorites are there, CHECKDB, Backups, Index and Statistics Maintenance. We now welcome Job and Login synchronization, along with stats monitoring to our maintenance routines. So just how do we do all of this easily, simply and with minimal fuss? I will show you a number of techniques that I have used to solve some of the maintenance headaches associated with managing an AlwaysOn Availability Group infrastructure and let you concentrate on the tasks you want to do.
300
Dev
SQL Server 2016 has introduced Always Encrypted, along with Dynamic Data Masking and Row-Level Security from Azure SQL Database, but how do you use them, I hear you say? Together we will look at some of the options available to us. These new capabilities have the potential to really help us secure our database systems and dramatically reduce the scope and risk associated with system compromises and data-breach scenarios. But what are the options for using them, on their own or in combination with one another? The possibilities are many. I will show you how you can start implementing these features from scratch, but also some of the options for upgrading an existing system to make use of the new features, so that you can give your systems and data an extra level of security. By the end of this session you will be in a position to start planning which features work for you and testing how you will make best use of them.
300
Dev
There’s so much more to running totals than meets the eye. You can apply windowed running aggregate calculations and their variants (windowed ranking calculations) to solve a wide variety of T-SQL querying tasks elegantly and efficiently. This session will show you how. Some of the solutions that rely on running total calculations are downright beautiful and inspirational.
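A classic instance of the pattern the session covers is a windowed running total; here is a hedged sketch via Python's sqlite3 (the T-SQL syntax for the SUM ... OVER window is the same; the account data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tx (acct TEXT, day INTEGER, amount INTEGER)")
conn.executemany("INSERT INTO tx VALUES (?, ?, ?)", [
    ("A", 1, 100), ("A", 2, -40), ("A", 3, 25),
    ("B", 1, 10),  ("B", 2, 10),
])

# Running balance per account: SUM over an ordered window frame,
# computed in a single pass instead of a self-join per row.
rows = conn.execute("""
    SELECT acct, day,
           SUM(amount) OVER (PARTITION BY acct ORDER BY day
                             ROWS UNBOUNDED PRECEDING) AS balance
    FROM tx ORDER BY acct, day
""").fetchall()
print(rows)  # [('A', 1, 100), ('A', 2, 60), ('A', 3, 85), ('B', 1, 10), ('B', 2, 20)]
```

The explicit ROWS frame is what makes the running total both correct and efficient, which is one of the points the session develops further.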
300
BI
Developing a solution for a business problem is usually very exciting for a developer. But how about testing the code or the changes? It is seen as boring, yet it is an unavoidable necessity. There is, however, a way to turn the testing phase into a fun part of development. In this session we will see how to build a regression-test framework for data-warehousing projects. This framework helps reconcile before-and-after changes for a data warehouse. The solution is based on the open-source FitNesse framework; it doesn't require any installation and it is very efficient. This session will be useful for developers, testers and anyone looking to automate data reconciliation for data-warehouse changes.
300
DBA
There are over 800 wait types in SQL Server, but in my experience there are only 10 that DBAs and developers should be very familiar with… If you can master these 10 wait types, what causes them and their solutions, you will be ahead of the performance-analysis game. In this session, Janis Griffin, Database Performance Evangelist, SolarWinds, will share her top 10 wait types based on over 25 years of experience and over 500 consulting engagements with customers. Griffin will also provide the key steps needed to identify, prioritize and solve any performance issues that these wait types cause, in order to help attendees hone their performance-troubleshooting skills.
300
DBA
Sometimes bad execution plans happen to good queries. You may have heard that it's better to rewrite complex T-SQL, but is using a temporary table better than using query hints? In this session you'll learn the pros and cons of using hints, plan guides, and the new Query Store feature in SQL Server 2016 to manipulate your execution plans. You'll take away a checklist of things to do every time you decide to bully an execution plan.
200
DBA
Performance tuning can be complex. It's often hard to know which knob to turn or button to press to get the biggest performance boost. In this presentation, Janis Griffin, Database Performance Evangelist, SolarWinds, will detail 12 steps to quickly identify performance issues and resolve them. Attendees at this session will learn how to:
  • Quickly fine-tune a SQL statement
  • Identify performance inhibitors to help avoid future performance issues
  • Recognize and understand how new SQL Server features can help improve query performance
300
DBA
Did you ever want to be a detective? Does searching for clues and deciphering complex messages sound like a dream come true? If you answered yes to any of the questions above, this session might just be the thing for you! As true detectives we will analyze the clues the SQLOS gives us in the form of Wait Statistics. Using these Wait Statistics we can start to unravel the mysteries surrounding some of the worst SQL Server performance crimes and learn how to solve them!

This session, filled with examples and live demos, will give you an insight into how SQL Server scheduling works, why requests sometimes have to wait and what they are actually waiting for. Most importantly, you will learn methods of analyzing this information to help you solve performance problems or bottlenecks faster than ever before.
300
DBA
With the release of the public preview versions of SQL Server 2016 we were finally able to play with, in my opinion, one of the most exciting new features in SQL Server 2016: the Query Store! The Query Store serves as a flight recorder for your query workload and provides valuable insights into the performance of your queries. It doesn't stop there, however: using the performance metrics the Query Store records, we can decide which Execution Plan SQL Server should use when executing a specific query. If those two features aren't enough, the Query Store provides all this information inside easy-to-use reports and Dynamic Management Views (DMVs), removing a great deal of the complexity of query performance analysis.

During this session we will take a thorough look at the Query Store: its architecture, the built-in reporting, the DMVs and the performance impact of enabling the Query Store. No matter if you are a DBA or developer, the Query Store has information that can help you analyze performance issues or write better queries!
300
Dev
We will discuss the power and benefits behind Azure Templates and how we can rapidly streamline the provisioning of environments in a standardised and version-controlled manner.

  • What are Azure Templates?
  • How do they work?
  • A demonstration of how Templates work
300
DBA
SharePoint is a platform that relies heavily on SQL Server; however, it is not always easy to plan for the installation, management and growth of the platform.  For a successful SharePoint implementation it is important that the owners and administrators of SQL Server understand the impact of SharePoint.

This session provides an introduction to the functionality of SharePoint and the different databases that SharePoint uses, with recommendations for High Availability, Disaster Recovery and configuration settings for SQL Server, including the constraints imposed in a single farm, a stretched farm between data centres and a separate DR farm.
100
Car
Office 365 plays a key role in the Microsoft Cloud offering.  It combines many different capabilities that have historically been separated into different products. The convergence of the products on one platform is opening up new possibilities for delivering new and exciting ways of collaborating.

This session will explain some of the ways that Office 365 is being used and demonstrate some of the capabilities that convince millions of companies to invest in Office 365 to replace existing products with a single unified environment making the most of the familiar business critical tools from Microsoft.

It will also explain how Office 365 can be extended to accommodate custom functionality and integrate with other systems and solutions, including PowerBI, Dynamics CRM and Azure.
200
DBA
This session is aimed at Database Administrators/Developers who have not previously implemented partitioning within an OLTP database, and is designed to give an overview of the concepts and implementation.

The session will cover the following:
  • An introduction to partitioning, core concepts and benefits
  • An overview of partitioning functions & schemes
  • Considerations for selecting a partitioning column
  • Creating a partitioned table
  • An explanation of aligned and non-aligned indexes
  • Manually switching a partition
  • Manually merging a partition
  • Manually splitting a partition
  • A demo of partition implementation & maintenance, covering automatic sliding windows

After the session, attendees will have an insight into partitioning and a platform from which to investigate implementing partitioning in their own environment.
400
Dev
Every SSIS dataflow transformation is a blocker, since each one takes a finite amount of resources and time to perform its function.  The performance goal is to minimise the resources and time taken by greedy transformations, since a pipeline's overall performance is governed by its worst-performing transformation.

This talk covers Script Component techniques, in C#, that can be used to:
- Process denormalised data in a single pass to extract both dimensions and measures
- Process late-arriving dimensions
- Perform alternative lookups whilst maintaining the order of rows being processed
- Reduce the memory resources required by SSIS's sort and aggregate dataflow transformations
- Speed up complex SQL queries that perform expensive cross joins
- Provide a technique for bulk loading partitioned tables within SQL Server.
400
Dev
MDX is a batch script language with neither code-reuse capability nor flow control.  This means that either generic queries have to be created and the client must post-process the data, or several targeted queries are created with the key MDX elements duplicated.  The latter causes a maintenance nightmare.

Using a C# factory to describe the key MDX elements and create the queries from a library is ideally suited to web servers or automatically creating a set of RDL files for SSRS.

This talk covers the semantics of MDX and its components, and how those components are modelled in C#/.NET 4.5 (using VS2015) to create an MDX query generator.
400
DBA
This session covers tuning what is superficially the simple exercise of getting the maximum throughput out of a concurrent insert workload, from key construction and logging to tuning spinlocks. The session will look in great depth at two of the key spinlocks which influence OLTP workload performance and scalability, namely XDESMGR and LOGCACHE_ACCESS, along with the ramifications of NUMA for spinlock behaviour and SQL OS scheduling. Insights gained from Windows Performance Toolkit will be used to provide a level 400-500 view into how the database engine behaves under extreme OLTP pressure. Finally, the session will wrap up with the in-memory OLTP engine and the performance of natively compiled stored procedures versus transactions that span the conventional disk-based database engine and the in-memory OLTP engine.
400
DBA
"In memory" is a hot topic in the database world at present, but how do modern server CPUs utilise memory? Does the story end with main memory? What about NUMA and the memory hierarchy on the actual CPU? Do memory access patterns matter? Does the CPU socket certain workloads are executed on matter, and how can all of this be leveraged in the database engine to our benefit? All these answers and more will be covered at level 400, including large memory pages, spinlocks, optimising hash joins to leverage the CPU cache, the OLTP database engine and the LMAX queuing pattern. During this journey, everything a SQL Server professional needs to know and should know about memory will be covered, along with deep insights into the database engine, CPU architectures and the use of Windows Performance Toolkit to quantify the performance-related behaviour of the database engine.
400
DBA
CISL (https://github.com/NikoNeugebauer/CISL) is a free and open-source Columnstore Indexes Scripts Library that allows any user to get advanced insights into Columnstore Indexes. With the help of CISL you can discover, in a matter of a couple of clicks, which tables you should consider converting to Columnstore Indexes and which difficulties prevent you from doing so.

Learn how to use the Columnstore Indexes maintenance solution in CISL to keep your Columnstore performance at maximum speed. The maintenance solution will do all the necessary things for you while operating the way you want it to.
300
DBA
Columnstore technology has received some of the biggest improvements across all SQL Server 2016 technologies. Discover Operational Analytics & In-Memory Analytics for OLTP workloads with updatable Nonclustered Columnstore Indexes, and Data Warehousing with the vastly enhanced Clustered Columnstore Index, which can now have all the typical functionality of a well-designed & highly performing database, such as Foreign Keys and Secondary Indexes.

Discover all the maintenance improvements that will allow you to operate and manipulate Columnstore structures with more precision and insight.
300
Dev
Use Azure and HPC/Big Data solutions to spin up some large clusters. We will tell the story of thousands of cores in a simple deployment approach, and contrast this with the limits of on-premises private clouds.
500
BI
Tired of Bar Charts? We'll build out a custom PowerBI Visual and show the power of PowerBI whilst going into a deep dive on how this is achieved. We will be exploring web technologies along with data technologies, and seeing how some very powerful constructs are used to produce PowerBI reports. 

We will be covering a variety of content; including: Typescript, Javascript, HTML5, Gulp, Visual Studio Code, the MVVM pattern, D3.js, and without giving the game away too much, Google Maps.
300
BI
Power BI provides a cloud-based collaborative platform to manage and share business insights anytime and anywhere. This session will explore how to access the Power BI site, manage integration with on-premises data and set the availability of data to individuals within the business. We will also explore integration with Active Directory.
200
BI
The Cortana Analytics Suite provides an array of cloud based technologies that enable the storage, analysis and presentation of data into information. This session will explain:

1. What technologies the Cortana Analytics Suite contains,
2. What a solution architecture will look like,
3. A demo of some of the key technologies that will enable you to create and develop a solution. 

At the end of this session, you will have a clear understanding of what Cortana Analytics is and how you can implement some of its technologies.
200
BI
Want an overview of the cloud? More organisations are currently evaluating their use of hardware within the business. In the past, this would involve contacting hardware suppliers for replacement kit to continue business operations. The cloud offering from Microsoft is improving all the time, providing an alternative consideration as to where an organisation can store its IT assets, including its data. Join Chris for a cloud-driven, demo-packed session to introduce you to the capabilities that Microsoft has to offer when it comes to the cloud, focussing on the data properties of Azure and the capabilities of Office 365 and Power BI. At the end of this session you will have a better understanding of what Microsoft can bring to the table for a cloud-ready business. After this session, you may want to attend the dedicated sessions on Azure topics.
200
BI
Do you get involved in the creation of Business Intelligence solutions within your organisation? This session will show you the new features of the SQL Server 2016 Business Intelligence components, ranging from performance improvements in SSAS, to an array of visualisations in Reporting Services, to new features of SSIS, MDS and DQS. Whether you are a new or an experienced SQL Server Business Intelligence developer, this session will provide you with knowledge of the new features of the SQL Server 2016 BI components. Join Chris for a demo-packed workshop to learn about the world of data quality using Master Data Services, Data Quality Services, Data Warehouse design patterns, and the ETL capabilities of Integration Services. During the workshop you will build a Data Warehouse to reinforce your learning.
300
BI
Your tabular model is done, approved and ready to be used. Through Excel, users quickly get excited about tabular models and use Excel as a self-service business intelligence tool. Then, all of a sudden, they start asking whether they can extract more, and other, information from the tabular model through Excel. Now it is up to you to familiarize the user with all the possibilities of working with the tabular model from Excel.
Given the small amount of documented knowledge about using tabular models from Excel, I will show you how to get the best out of your tabular models by using Excel as a self-service business intelligence tool. Filters, named sets, and calculations in the pivot table: I will explain it all!
200
BI
Agile BI promises to deliver value much quicker to its end users. But how do you keep track of versions and prioritize all the demands users have?
With Visual Studio Online (cloud version of Team Foundation Server) it is possible to start for free with 5 users, with Version Control, Work Item management and much more.
In my session you will get the directions to a quick start with Visual Studio Online. You will learn the possibilities of Version Control and in which way to implement Scrum work item management with all available tools.
300
Dev
Based on our work converting an existing application to memory-optimized tables and natively compiled stored procedures, this session will take you through that journey and show you the (large) gaps between what we have in our normal SQL belt and what is possible in natively compiled stored procedures. I will show how to overcome all the gaps and get all our normal stuff to work in these monster-fast procedures – even the things that the documentation says cannot be done.

You will therefore be taken through the concept of in-memory tables, what to be aware of when considering converting your database tables and code to in-memory tables, and, through a live migration demo, be given all the tips and tricks I picked up while doing so. After attending this session, you will be able to leverage the new concepts and work your way around the current limitations to gain an enormous speed increase and a lock-free environment.

From this starting point, we will look forward and take a dive into SQL Server 2016 to see which enhancements to in-memory tables are in store for us with this coming release.
200
BI
In this BIML introductory session I will take you with me on a wonderful journey into the world of metadata-driven BI development and show you WHY you should consider using BIML, WHAT BIML can do for you and HOW it is done.

We will look at both the business and the technical aspects: what is to gain, both for your project in terms of time and money, and for you, the developer, in terms of getting the fun back into BI development.
300
DBA

In this session we will delve into the need to deploy and manage SQL Server configurations using PowerShell Desired State Configuration, and the immediate benefits that this can and will bring to your deployment and administration of this vital technology. Believe me when I say that I am no SQL expert, but this session will leave those that attend in a position to go away, look to implement this in their organisation, and really reap the benefits that managing your infrastructure as code brings to a flexible and more maintainable environment.

This session will also briefly cover the other core necessities required to get this underway, including the importance of source control for the configurations, community-developed resources and more.
500
Dev
In an enterprise, merging master data, like customer data, from multiple sources is a common problem. Typically, you do not have a single key, i.e. the same key, identifying a customer across different sources. You have to match data based on similarity of strings, like names and addresses. In this session, we are going to check how different algorithms for comparing strings included in SQL Server 2014 work. We are going to use the Soundex Transact-SQL function, four different algorithms that come with Master Data Services (Levenshtein, Jaccard, Jaro-Winkler and Ratcliff-Obershelp), and the Fuzzy Lookup transformation from Integration Services. Finally, we are going to introduce how SQL Server Data Quality Services (DQS) helps us here. We are also going to tackle the performance problems of merging based on string matching.
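Two of the algorithms above are easy to sketch outside SQL Server; the following Python fragment is a minimal, hypothetical illustration of Levenshtein edit distance and Jaccard token similarity (the MDS and Fuzzy Lookup implementations used in the session are not shown here):

```python
def levenshtein(a, b):
    # classic dynamic-programming edit distance (insert/delete/substitute)
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # delete
                           cur[j - 1] + 1,              # insert
                           prev[j - 1] + (ca != cb)))   # substitute
        prev = cur
    return prev[-1]

def jaccard(a, b):
    # token-set similarity: |intersection| / |union|
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

print(levenshtein("Jon Smith", "John Smyth"))         # 2
print(round(jaccard("Jon A Smith", "Smith Jon"), 2))  # 0.67
```

A low edit distance or a high token similarity suggests two source rows describe the same customer; a real matching pipeline then combines several such scores.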
300
BI
In the SQL Server and Power BI suites, you can find nearly anything you need for analysing your data. SQL Server 2016 closes one of the last gaps: support for statistics beyond basic aggregate functions and support for other mathematical calculations. This is done with support for R code inside the SQL Server Database Engine. This session goes beyond showing the basics, i.e. how to use R in the Database Engine and in Reporting Services reports; it also shows and explains some advanced statistical and matrix calculations.
400
BI
Anomaly detection is one of the most advanced data mining and machine learning tasks. There are many statistical procedures and data mining algorithms that can be used for it, including Expectation-Maximization Clustering, and Principal Component Analysis. In this session, you will learn through presentation and demos how to detect the low quality data areas with some basic statistics and with advanced algorithms. You will see how you can use T-SQL queries, R code in SQL Server, SSAS Data Mining, and Azure ML for this complex task.
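As a taste of the "basic statistics" end of that spectrum, here is a small, hypothetical Python sketch of z-score outlier detection; the session itself demonstrates this with T-SQL, R, SSAS Data Mining and Azure ML, none of which appear here:

```python
import statistics

def zscore_outliers(values, threshold=3.0):
    # flag values more than `threshold` standard deviations from the mean
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)
    return [v for v in values if sd and abs(v - mean) / sd > threshold]

readings = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 55.0]   # one suspect reading
print(zscore_outliers(readings, threshold=2.0))        # [55.0]
```

The same idea scales to flagging whole low-quality data areas: compute a distance from the "normal" profile, then inspect whatever exceeds the threshold.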
200
DBA
Microsoft Azure SQL Database came into the picture when nobody was talking about cloud computing. Since then, Azure SQL Database has gone through many versions but was always rather limited in functionality: backing up your database was not possible, the size of your database was limited, heap tables were not allowed, and so on. Last year, though, Microsoft reached another milestone by introducing near-complete SQL Server engine compatibility and more premium performance in Azure SQL Database. So, let’s get ready for this new release! I will take you on a tour around the new Microsoft Azure SQL Database world. In the first part, you will get an overview of the fundamentals of Microsoft Azure SQL Database administration. In the second part we will focus more on what's new in the latest update, V12 – including Azure SQL Database Auditing, Dynamic Data Masking, Extended Events, Workload Insights and the Index Advisor.
300
DBA
A good DBA performs his/her morning checklist every day to verify that all the databases and SQL Servers are still in good condition. In larger environments the DBA checklist can become really time consuming and you don’t even have time for a coffee… In this session you will learn how you can perform your DBA morning checklist while sipping coffee. I will demonstrate how you can use Policy Based Management to evaluate your servers, and how I configured my setup. By the end of this session, you will be able to verify your own SQL environment in no time by using this solution and have plenty of time for your morning coffee!
200
Dev
Microsoft Azure Search is a new fully managed full-text search service in Microsoft Azure which provides powerful and sophisticated search capabilities to your applications. In this session we will introduce this great new service from the very beginning and create a full search experience in a standard web application and a mobile app.
300
DBA
In this session we will introduce the main areas of Microsoft Azure that matter for a SQL Server professional. We will start by identifying the main components of Azure related to SQL Server: VMs, SQL Database, SQL Data Warehouse, Azure Search, Machine Learning, Data Factory, HDInsight, Stream Analytics. Then we will check how SQL Server and Azure can work together to create a state-of-the-art high availability solution without the need to create/own a failover datacenter. In this second part we will look at a) Azure storage for backups and storing database files; b) Azure SQL Databases in combination with the Stretch Database feature.
300
BI
In this session I will be doing a demo based on my article "Populating a Fact Table using SSIS" (https://dwbi1.wordpress.com/2012/05/26/how-to-populate-a-fact-table-using-ssis-part1/), going through it step by step on the screen and giving the audience plenty of time to understand the mechanics and to ask questions.

When populating a fact table, people often come across issues like these:
 - Where do I populate my fact table from?
 - How do I get the dimension keys to put into my fact table?
 - Where can I get the data for the measures columns?
 - With what do I populate the snapshot date column?
 - What is the primary key of my fact table?
 - The source table doesn’t have a primary key, what should I do?
 - I got duplicate rows in the fact table. How do I handle it?
 - I can’t find the row in the dimension table, what should I do?
 - The rows with that snapshot date already exist. What should I do?
 
As always, the best way to explain is by using an example. So in this session I’m going to do the following, and hopefully by the end of the session the above issues and questions in the audience's mind will be answered:
 - Describe the background on the company and the data warehouse
 - Create the source tables and populate them
 - Create the dimension tables and populate them
 - Create the fact table (empty)
 - Build an SSIS package to populate the fact table, step by step 
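The dimension-key questions above are commonly answered with a surrogate key lookup. This hypothetical Python sketch (all names are invented) shows the usual pattern of routing rows with no dimension match to an "unknown member" key instead of dropping them:

```python
# business key -> surrogate key, as loaded from the customer dimension
customer_dim = {"CUST-001": 1, "CUST-002": 2}
UNKNOWN_MEMBER = 0   # surrogate key of the dimension's unknown member row

source_rows = [
    {"customer": "CUST-001", "amount": 120.0},
    {"customer": "CUST-999", "amount": 45.0},   # missing from the dimension
]

# build fact rows: translate each business key, defaulting to the unknown member
fact_rows = [
    {"customer_key": customer_dim.get(row["customer"], UNKNOWN_MEMBER),
     "amount": row["amount"]}
    for row in source_rows
]
print(fact_rows)
```

In SSIS the same idea is typically a Lookup transformation whose no-match output is redirected to the unknown member rather than failing the package.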
300
BI
The usual issue when we test a data warehouse is the sheer volume of data, which means the normal testing methods don't work. Over the years, various solutions to this problem have been developed, from manual processes to automated ones.
In this session I will be demo-ing 3 approaches for testing SQL Server-based data warehouses:
1. Using Manual Comparison
2. Using Excel
3. Using SSIS - SQL Server - SSAS
Along the way I will show the challenges, the mechanics, and the solutions.

Using Excel, we first need to sort the data. Then we use different formulas to compare string, date, logical and numeric columns. We also need to incorporate the tolerance levels. Finally, we can present the data quality for each column.

To verify the data in the data warehouse using the third approach, we first need to match the rows to the source system, then verify the attribute columns, then verify the measure columns. SSIS facilitates the flow of data from both the source system and the warehouse into the test area, which is in SQL Server. A series of stored procedures then does the matching in SQL Server and compares the attribute and measure columns. SSAS enables the testers to dissect the compared data, to find out which specific data areas are causing the issue.
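The tolerance levels mentioned for the Excel approach can be sketched in a few lines of Python; this is a hypothetical illustration of comparing matched measure values within a tolerance, not the SSIS/stored-procedure implementation shown in the session:

```python
def compare_measures(source, warehouse, tolerance=0.01):
    # return the keys whose source and warehouse values differ
    # by more than the tolerance (or are missing from the warehouse)
    mismatches = []
    for key, src_value in source.items():
        wh_value = warehouse.get(key)
        if wh_value is None or abs(src_value - wh_value) > tolerance:
            mismatches.append(key)
    return mismatches

source = {"2016-01": 1000.00, "2016-02": 1250.50, "2016-03": 980.00}
warehouse = {"2016-01": 1000.00, "2016-02": 1250.505, "2016-03": 990.00}
print(compare_measures(source, warehouse))   # ['2016-03']
```

A per-column data quality figure then falls out naturally: the share of keys that are not in the mismatch list.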
300
DBA
Do you know exactly what happens with your Microsoft SQL Server, and what happens inside it? Does your manager report delays in queries, so that you have to find the bottleneck fast?

The daily business of a DBA is focused on the stability of the SQL Server databases, and this session will help you to understand and monitor the activity on your Microsoft SQL Server using the tools provided by SQL Server itself:

- monitor the physical connection from the client to the Microsoft SQL Server
- see what requests are currently running on your Microsoft SQL Server
- what process is blocking the execution of the management report?
- what commands is the application firing against your SQL Server?
- what tasks have to wait, and why do they have to wait?
- how does the scheduler handle multiple requests/tasks, and how can you monitor it?

If you are interested in the above scenarios and want to see solutions, you should visit this session, which will improve your DBA skills.
300
BI
Indexing fact tables in SQL Server is different from Oracle or DB2 because of the clustered index. Some people say that it is better to create a clustered index on the fact key, then a non-clustered index on each dimension key. Some say that it is better to put the clustered index on the snapshot date column. Of course the considerations differ between periodic snapshot fact tables, accumulating snapshot fact tables and transaction fact tables.

In this session I will go through the principles of indexing the 3 types of fact tables, including physical ordering, slimness of the clustered index, multi-column clustered indexes, the fact that a PK doesn't have to be a clustered index, which dimension key column to index, when to include a column in an index, and of course partitioning, i.e. indexing partitioned fact tables. As always, it is better to explain by example than theory, so I will give an example for each principle so that we can understand how it is applied in practice, for example a performance comparison. I will also add my own "lessons learned", i.e. mistakes I've made in the past, so you can avoid making the same mistakes.

The title says "in SQL Server" because the principles I will be explaining in this session are applicable specifically to SQL Server data warehouses. They do not apply to Oracle, DB2 or Teradata data warehouses.
200
Dev
Have you ever looked at an execution plan that performs a join between 2 tables, and wondered what a "Left Anti Semi Join" is? Joining 2 tables in SQL Server isn't the easiest part! Join me in this session where we will deep dive into how join processing happens in SQL Server. In the first step we lay out the foundation of logical join processing. We will then dive further into physical join processing in the execution plan, where we will also meet the "Left Anti Semi Join". After attending this session you will be well prepared to understand the various join techniques used by SQL Server, and interpreting joins from an execution plan will be the easiest part for you.
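What a "Left Anti Semi Join" computes logically can be mimicked outside SQL Server; this hypothetical Python fragment shows the set semantics behind EXISTS (semi join) and NOT EXISTS (anti semi join) queries:

```python
customers = [(1, "Alice"), (2, "Bob"), (3, "Carol")]
orders = [(101, 1), (102, 1), (103, 3)]   # (order_id, customer_id)

ordering_ids = {cust_id for _, cust_id in orders}

# semi join: customers with at least one order (EXISTS)
semi = [name for cid, name in customers if cid in ordering_ids]

# anti semi join: customers with no orders (NOT EXISTS)
anti = [name for cid, name in customers if cid not in ordering_ids]

print(semi)   # ['Alice', 'Carol']
print(anti)   # ['Bob']
```

Note that neither result repeats Alice for her two orders: semi joins probe only for existence, which is why these operators appear for EXISTS-style predicates rather than regular joins.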
400
DBA
You know locking and blocking very well in SQL Server? You know how the isolation level influences locking? Perfect! Join me in this session for a further deep dive into how SQL Server implements physical locking with lightweight synchronization objects like latches and spinlocks. We will cover the differences between the two, and their use cases in SQL Server. You will learn best practices for analyzing and resolving latch and spinlock contention for your performance-critical workload. At the end we will talk about lock-free data structures: what they are, and how they are used by the new In-Memory OLTP technology that is part of SQL Server 2014/2016.
300
DBA
Hekaton is the Greek word for 100 - the goal of In-Memory OLTP in SQL Server 2014 is to improve query performance up to 100 times. In this session we will look under the cover of Hekaton and the Multi-Version Concurrency Control (MVCC) principles on which Hekaton is built. We will start the session by looking at the challenges that can be solved by Hekaton, especially locking, blocking, and latching within SQL Server. On that foundation we will move into the principles of MVCC, and how a storage engine and transaction manager can be built on that concept.
300
Dev
You know Bookmark Lookups in SQL Server? You like their flexibility to retrieve data? If yes, you have to know that you are dealing with one of the most dangerous concepts in SQL Server! Bookmark Lookups can lead to massive performance losses that blow up your CPU and I/O resources! Join me in this session to get a basic understanding of Bookmark Lookups and how they are used by SQL Server. After laying out the foundation, we will talk in more detail about the various performance problems they can introduce. After attending this session you will have a better understanding of Bookmark Lookups and you will finally be able to tell whether a specific Bookmark Lookup is a good or a bad one.
300
Dev
UNIQUEIDENTIFIERs as primary keys in SQL Server - a good or bad practice? They have a lot of pros for DEVs, but DBAs just cry when they see them enforced by default as unique clustered indexes. In this session we will cover the basics of UNIQUEIDENTIFIERs, why they are bad and sometimes even good, and how you can find out whether they affect the performance of your performance-critical database. If they are affecting your database negatively, you will also learn some best practices for resolving those performance limitations without changing your underlying application.
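Why random UNIQUEIDENTIFIER keys trouble a clustered index can be shown with a toy model. This hypothetical Python sketch (a very rough stand-in for page-split behaviour, not real storage-engine accounting) counts how often a new key lands in the middle of the sorted key order instead of appending at the end:

```python
import bisect
import uuid

def middle_inserts(keys):
    # insert keys one by one into a sorted list and count
    # how many do NOT simply append at the end
    ordered, middles = [], 0
    for k in keys:
        pos = bisect.bisect(ordered, k)
        if pos < len(ordered):
            middles += 1
        ordered.insert(pos, k)
    return middles

sequential = list(range(1000))                        # like an IDENTITY key
random_guids = [str(uuid.uuid4()) for _ in range(1000)]

print(middle_inserts(sequential))    # 0: always appends
print(middle_inserts(random_guids))  # almost every insert lands mid-list
```

Mid-list inserts are what force page splits and fragmentation in a clustered index, which is why sequential keys (or NEWSEQUENTIALID-style values) behave so much better.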
300
Dev
SQL Server needs its locking mechanism to provide the isolation aspect of transactions. As a side-effect, your workload can run into deadlock situations - a headache for you as a DBA is guaranteed! In this session we will look into the basics of locking & blocking in SQL Server. Based on that knowledge, you will learn about the various kinds of deadlocks that can occur in SQL Server, how to troubleshoot them, and how you can resolve them by changing your queries, your indexing strategy, and your database settings.
300
DBA
Plan Caching is the most powerful concept in SQL Server. But on the other hand it's also the most dangerous one, as it can lead to queries that are executed with a completely wrong execution plan. In this session we will take a more detailed look into the Plan Cache of SQL Server, the different ways SQL Server can cache execution plans in it, and how you can troubleshoot badly performing queries directly from the Plan Cache.
300
DBA
For most DBAs and DEVs, TempDb is a black box. But TempDb is the most critical component in a SQL Server installation, used by your applications and also internally by SQL Server. TempDb is also a performance bottleneck by design, because it is shared across the whole SQL Server instance. In this session we will take a closer look into TempDb, how it is used by SQL Server, how you can troubleshoot performance problems inside TempDb, and how you can resolve them.
200
Dev
Application developers now support unprecedented rates of change – functionality must rapidly evolve to meet customer needs and respond to competitive pressures, while user populations can grow and shrink dramatically and unpredictably. To address these realities, developers are increasingly selecting document-oriented databases for schema-agnostic, scalable and high performance data storage. Come hear from the team that built Azure DocumentDB what sets it apart from the other databases out there, and see just how cool it is to use.

300
Dev
Let's talk about how you can get the most out of Azure DocumentDB. In this session we will dive deep in to the mechanics of DocumentDB and explain the various levers available to tune performance. From advanced query features (including some amazing new ones) to indexing to using JavaScript integrated transactions - this session will equip you with the best practices and nuggets of information that will become invaluable tools in your toolbox for building blazingly fast large scale applications.
200
Dev
Application developers now support unprecedented rates of change – functionality must rapidly evolve to meet customer needs and respond to competitive pressures. To address these realities, developers are increasingly selecting document-oriented databases (e.g. MongoDB, CouchDB, Azure DocumentDB) for schema-free, scalable and high performance data storage. While schema-free databases make it easy to embrace changes to your data model, you should still spend some time thinking about your data.
In this talk, you will get an overview on what to think about when storing data in a document database. What is data modeling and why should you care? How is modeling data in a document database different to a relational database? How do you express relationships in a document database? 
100
Dev
In this session, we will first take a journey back to the dates when relational concept was created and then slowly progress to present day’s NoSQL. We will explore the triggers that initiated the evolution and the changes to application development process and data management ecosystem. Then we will follow with a discussion on why this matters to you, and the tradeoffs you need to know between different database paradigms.
100
DBA
My SQL Server box is sometimes too slow... why? How can we monitor the workload of a SQL Server "box"? A short presentation of how we can understand the workload of a Windows box.
100
DBA
We will present some best practices and our experience with Azure virtual machines running SQL Server.
300
BI
You have created great cubes and Reporting Services reports, but how do you know if they are being used? Learn how to set up the collection of usage data and how you can use this data in your decision making.

We will talk about how to collect the data, how to build something meaningful from the data and how you can report on top of the data. We will do this for OLAP cubes and for Reporting Services Reports and we will explore ways you can further develop this for your own organization.

At the end of the session all participants will leave with all the code, as well as the know-how to get started with the collection of usage statistics for their Microsoft BI solutions.
300
DBA
You won't believe what people will do to the transaction log when they get into trouble. The log is full! The application is experiencing strange errors! Do something! In this session, I will apply commonly suggested advice from internet search results and various forums, in search of a fast remedy for explosive log growth. How bad could it be?!

I will illustrate with simple graphics exactly what's in the transaction log, and why it is fundamental to SQL Server's ability to ensure that the associated database is always in a consistent state. I will explain the factors that can prevent reuse of log space and how to tackle the root causes of rapid log growth, rather than simply alleviating the symptoms using dangerous quick fixes. I'll also offer tips on how to avoid excessive logging becoming a bottleneck, and affecting the performance of user transactions. After this session, you won’t be the one to mess up the transaction log ever again.
300
DBA
SQL Server high availability is normally spoken about in terms of Enterprise Edition features, but not all of us use Enterprise Edition, so what do we have in Standard Edition? In this session we will look at the options for building highly available systems in Standard Edition, combining features such as clustering, mirroring and the new basic Availability Groups so that you can have much of the capability of Enterprise Edition but without the high cost. I will demonstrate some of the ways we can build solutions that are accessible to all, with design patterns that you can move to Enterprise at a later date if you upgrade your systems.
200
Dev
Today’s data world is changing. Relational databases are no longer considered the only option for a data project.

With Azure as its front line, Microsoft pushes many new technologies out to the wild, and it’s important to understand their capabilities, strengths and weaknesses.

Why?
Because when designing a solution, it’s important to choose the right tool for the right job, and SQL Server is not always the best choice.
It’s also important to understand the new technologies in order to be able to explain why SQL Server IS the right tool when it is.
 

Among others, we will talk about Azure SQL Database, SQL Server on Azure VMs, Stream Analytics, DocumentDB, Search, HDInsight, Machine Learning, Data Lake, U-SQL, Data Factory, Cortana Analytics, and more.

Join this session to learn how the DBA's role might be affected, get an understanding of the new technologies, when to use which one, and how to converge them into a robust data solution.
200
BI
This session is not only for people working with Master Data, but also everyone working with Business Intelligence. With Master Data Services 2016 it's now easy to handle all your dimension data, including Type 2 history. In this session you will get a brief introduction to the basic principles of Master Data Services, together with an overview of all the new features brought to you with Master Data Services 2016.
You will learn about features like:
  • New features and better performance in the Excel Add-In
  • Track Type 2 history
  • Many-To-Many hierarchies
  • New security and administrator capabilities
  • New approval flows
If you are using Master Data Services or are thinking about it, this is the session you cannot miss.
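To make the "Track Type 2 history" bullet concrete, here is a minimal, hypothetical Python sketch of the classic Type 2 pattern (close the current version, open a new one); it illustrates the idea only and is not how MDS 2016 stores history internally:

```python
from datetime import date

def apply_type2_change(history, business_key, new_name, change_date):
    # close the member's current version, then append the new one
    for row in history:
        if row["key"] == business_key and row["valid_to"] is None:
            row["valid_to"] = change_date
    history.append({"key": business_key, "name": new_name,
                    "valid_from": change_date, "valid_to": None})

history = [{"key": "C1", "name": "Contoso Ltd",
            "valid_from": date(2015, 1, 1), "valid_to": None}]
apply_type2_change(history, "C1", "Contoso plc", date(2016, 5, 4))

print(len(history))            # 2 versions of the member
print(history[0]["valid_to"])  # 2016-05-04
```

The benefit of having the MDM tool track this for you is that your dimension loads can simply read the versioned rows instead of implementing this logic in ETL.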
300
DBA
Clustering SQL Server still vexes many in one way or another. For some, it is even worse now that both AlwaysOn features - clustered instances (FCIs) and availability groups (AGs) - require an underlying Windows Server failover cluster. Storage, networking, Active Directory quorum, and more are topics that come up quite often. Learn from one of the world's experts on clustering SQL Server about some of the most important considerations - both good and bad - that you need to do to be successful whether you are creating an FCI, AG, or combining both together in one solution.
200
DBA
In this session different ways of migration will be approached. We will review different tools and ways to take your on-premises database and move it to Azure, showing how to detect and solve possible problems with unsupported features using tools like SQL Server Data Tools (SSDT), the SQL Database Migration Wizard, SQL Server Management Studio's supporting tools and the DAC Framework.
200
DBA
Azure SQL Database is not a brand new technology, but even so it is still very recent. For multiple reasons, most data professionals are still discovering this new world. If you are one of those, this session is for you! Basic concepts will be explained, as well as the structure of SQL Database. And yes... there are demos showing how to perform basic tasks in order to ramp up with this technology.
300
DBA
Released with SQL Server 2014 and improved in 2016, In-Memory OLTP (Hekaton) is ready to boost the performance of your database! In this session the technology will be briefly introduced, and a step-by-step approach will show how to start using it, explaining the basics.
300
DBA
SQL Server 2016 is already here and many new features have been announced. Availability Groups are no exception! Hear all the news about the number one HADR solution for SQL Server in this session.
200
Dev
In 1 hour we will take a legacy database and demonstrate live how to: 

  • Get the database into SSDT
  • Check the database code, schema and reference data into source control
  • Create a build on a build server
  • Check-in changes and see them deployed automatically
In this demo heavy session you will gain the knowledge you need to build your own deployment process using the free development environment from Microsoft "SQL Server Data Tools".

You will see how straightforward it can be to take a legacy database and build a process that means that releases can be streamlined and deployed faster in a dev, test and eventually production environment. 

This is an entry level session for people to learn how to create an automated deployment process for SQL Server databases.
200
Dev
SSMS is the love child of what were Query Analyzer and Enterprise Manager, and has ever since been used as the development environment of choice for people creating and modifying database schemas and code. Whilst SSMS is a really good management tool, it is not suited to modern development. Instead, Microsoft has created SSDT (SQL Server Data Tools), which has had a lot of attention from Microsoft in the last few years and is now ready for us to use full time. In this session we will cover:
  • How to get your databases into SSDT and critically how to get them to compile correctly.
  • A tour of the SSDT features that show how T-SQL development is actually easier and how SSDT makes you more productive
  • The build & deployment process so you can start concentrating on writing useful code instead of upgrade and release scripts.
  • Typical pain points such as references and build times and various ways to mitigate these.
If you write, debug or modify SQL Server database code and haven't yet started using SSDT full time then this session will help you get started and be productive quickly.
300
Dev
SQL Server is a high performance relational engine and provides a highly scalable database platform but due to its complexity (and bad programming practices) can be prone to serious concurrency problems, unexpected behaviors, lost updates and much more! In SQL Server 2005, two optimistic concurrency mechanisms were introduced and touted as the solution to all our problems. Now in SQL Server 2012 and 2014 even more have followed, but many challenges and problems still remain.

Let’s take a long look into the world of SQL Server concurrency and investigate Pessimistic and Optimistic isolation understanding how they work, when you should use them, and more importantly when they can go very wrong. Don't be staring down the wrong end of SQL Server's two Smoking Barrels and join me for this revealing and thought provoking presentation.
300
DBA
SQL Server Failover Clustering has traditionally been deployed on physical architecture for a very long time, and is considered by many to be the optimal architectural deployment even today. Experienced High Availability implementors and pioneers will remember the Microsoft Cluster Hardware Compatibility List with terror and early Virtualization adopters will recoil in disgust at their memories of poorly performing SQL Server VM deployments and will never want to go back there again.

Times are changing and with the introduction of Microsoft's Scale-Out File Server and improvements to Hyper-V Clustering, High Availability will never be the same again.

In this session we will investigate whether times really have changed for the better, and discuss how to implement and administer a SQL Server Cluster ON a Cluster FROM a Cluster, and the benefits of doing so!

200
Dev
The SQL Server optimizer is very smart and can generate good execution plans for fairly complex queries. However, it does have its limits, and sometimes we have to adjust our queries in order to help it make better decisions.  

In this session, we will talk about a few problematic query patterns that hurt the optimizer's ability to generate a good execution plan, resulting in bad query performance.

Among other examples, we will see how changing a table structure can allow the optimizer to generate a parallel plan, how breaking down big queries generates much faster queries, and why reuse and encapsulation are good programming habits but can be bad in terms of performance.

Keep things simple. The optimizer will like it!
300
DBA
Both SQL Server 2016 and Windows Server 2016 have many new availability features as well as enhancements to existing ones. This session will let you know what's new, what's changed, and how you can take advantage of these as you start to consider these new releases of SQL Server and Windows Server. Whether you already deploy AGs or FCIs and want to know what you will get when deploying a 2016 version, or just want to know how these new versions will increase your availability, this session is for you.
200
DBA
In this double session, we will see:
=> a huge relief for DBAs in taking backups, building disaster recovery solutions and accessing resources, no matter whether they are on-premises or in a private or public cloud
=> how you can stretch your legs, relax and have an amazing experience delivering your analytics against hot, warm & stretched cold data. Stretch Database is the new concept that allows you to stretch from on-prem to the cloud easily, and this session will enable you to understand enabling/disabling data stretch, accessing data using Stretch Database, setting up remote data archiving, the basic concepts of enabling a database/table, and backup & restore for stretch-enabled databases.
400
Dev
You can stretch your legs, relax and get a quick session on delivering your analytics against hot, warm and cold data. Stretch Database is a new concept that allows you to stretch from on-premises to the cloud easily, and this session will enable you to understand enabling/disabling data stretch, accessing data using Stretch Database, setting up remote data archiving, and the basic concepts of enabling a database/table and backup and restore for stretch-enabled databases. Stretch DB also covers the concepts of shallow and deep backups; however, deep backups are not currently supported with SQL Server 2016 CTP2.
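For a flavour of what enabling Stretch looks like in T-SQL, here is a minimal sketch using the syntax of the final 2016 release (pre-release CTP syntax differed, and the server, credential, database and table names below are all illustrative):

```sql
-- Allow this instance to stretch data to Azure
EXEC sp_configure 'remote data archive', 1;
RECONFIGURE;

-- Link the database to an Azure SQL server (credential created beforehand)
ALTER DATABASE SalesDB
    SET REMOTE_DATA_ARCHIVE = ON
        (SERVER = N'myserver.database.windows.net', CREDENTIAL = [StretchCred]);

-- Start migrating a cold table's rows to the remote archive
ALTER TABLE dbo.OrderHistory
    SET (REMOTE_DATA_ARCHIVE = ON (MIGRATION_STATE = OUTBOUND));
```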
300
Dev
Thanks to Microsoft integrating Revolution R within the new SQL Server 2016, we all now have the opportunity to use R packages and see the outcome within SQL Server 2016 itself.


In this session, you will definitely take away a breakthrough concept of advanced R analytics within SQL Server 2016, and get ready to unleash your creativity and go beyond your imagination in how you can build advanced analytics and impress your employers/customers.
200
DBA
While you can see how to run through Setup to deploy a clustered instance of SQL Server (FCI) or an availability group (AG) in other places, what ties them together is the underlying Windows failover cluster (WSFC). Anyone who deploys FCIs or AGs needs a solid foundation of the entire clustering stack to truly be able to understand and have success with implementations. This session will demystify what lies underneath SQL Server from a DBA point of view.
200
Dev
Yes, it is now possible to simplify BIG DATA with just T-SQL to access Hadoop clusters or Azure Blob Storage, with PolyBase and native JSON support. 


In this session, we will take a quick dive to understand the concepts, and take an awesome ride using simple T-SQL to query this big data and make your dream come true.
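As a taste of the native JSON support mentioned here, a minimal sketch (table and column names are made up; OPENJSON requires database compatibility level 130):

```sql
-- Shape relational rows as JSON on the way out
SELECT CustomerID, OrderDate
FROM dbo.Orders
FOR JSON AUTO;

-- ...and shred a JSON document back into rows with a schema
DECLARE @j nvarchar(max) =
    N'[{"CustomerID":1,"OrderDate":"2016-05-04"}]';
SELECT CustomerID, OrderDate
FROM OPENJSON(@j)
    WITH (CustomerID int, OrderDate date);
```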
200
DBA
SQL Server 2016 hyperscale technology gives us huge relief in backups and disaster recovery solutions, and lets us access resources no matter whether they are on-premises or in a private or public cloud. New tools within SQL Server 2016 and Microsoft Azure have made every DBA's and developer's life much easier when scaling to the cloud. 


This session will enable you to start working on scaling to the cloud and benefit from the SQL Server 2016's advanced new features.
200
DBA
In this session, I will let you have a quick look around the new DB Engine feature enhancements like Stretch Database, Built-in JSON Support, Columnstore Indexes, In-Memory OLTP, Live Query Statistics, Query Store, Temporal Tables, Backup to Microsoft Azure, Managed Backup, enabled Trace flag 4199 behaviors, TempDB Database and PolyBase technology.
200
DBA
This session will take you through most of the new SQL Server 2016 advanced features (with CTP releases where required) and will mainly cover the areas below.
=> Stretch database to the cloud
=> Scale to the cloud with hybrid backups and how to restore in seconds
=> See how data scientists are really going to use Advanced R Analytics in SQL Server 2016
=> Simplify big data with PolyBase and native JSON by using simple T-SQL
=> Always Encrypted with both on-prem & cloud
=> Rich visualizations for any platform
100
DBA
In this session, you will learn about three new security-related features that help protect your data: Dynamic Data Masking, Row-Level Security, and Always Encrypted. With plenty of live demos, you will see how the features work, when to choose each one, and potential issues that could prove to be very important to your implementation.
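As a small taste of one of these three features, here is a Dynamic Data Masking sketch (table, column and role names are illustrative):

```sql
-- Mask an email column for non-privileged readers
ALTER TABLE dbo.Customers
    ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');

-- Ordinary users now see a masked value such as 'aXXX@XXXX.com';
-- trusted roles can be allowed to see the real data
GRANT UNMASK TO AuditRole;
```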
100
Dev
Discover the ins and outs of some of the newest capabilities of our favorite data language. From JSON to COMPRESS/DECOMPRESS, from SESSION_CONTEXT() to DATEDIFF_BIG(), and new query hints like NO_PERFORMANCE_SPOOL and MIN/MAX_GRANT_PERCENT, you’ll walk away with a long list of reasons to consider upgrading to the latest version. 
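A few of the capabilities named above, in one illustrative sketch:

```sql
-- COMPRESS/DECOMPRESS: GZip a value and read it back
DECLARE @blob varbinary(max) = COMPRESS(N'some long text');
SELECT CAST(DECOMPRESS(@blob) AS nvarchar(max)) AS original_text;

-- DATEDIFF_BIG: a bigint result where DATEDIFF's int would overflow
SELECT DATEDIFF_BIG(millisecond, '2000-01-01', SYSUTCDATETIME()) AS ms_since_2000;

-- SESSION_CONTEXT: session-scoped key/value state
EXEC sp_set_session_context @key = N'TenantId', @value = 42;
SELECT SESSION_CONTEXT(N'TenantId') AS tenant_id;
```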
200
Dev
In this session, you will learn about various anti-patterns and why they can be bad for performance or maintainability. You will also learn about best practices that will help you avoid falling into some of these bad habits. Come learn how these habits develop, what kind of problems they can lead to, and how you can avoid them - leading to more efficient code, a more productive work environment, and, in a lot of cases, both.
200
BI
Consider this a primer for moving your existing warehouse and ETL structure to the cloud.

We'll run through some terminology and reasons why you should consider Azure, spend a little time looking at automation techniques so you can run your existing SSIS packages in a cloud-service style, and make a lightning quick dash through some of the newer platform-as-a-service components to give you an idea of what's out there.

The aim of this session is to give you the confidence and the base knowledge to make that first move onto the cloud for your BI architecture. SSIS in the cloud is actually more possible than you think!

Also there may be some terrible jokes. I make no apologies for this.
300
BI
It's time to face the fact that Data Warehouses, as we know them, are changing.

The increasing power of automatically scaling systems and the rapidly dropping cost of storage are challenging how we decide which data to keep and how we push it out to those who need it.

In this session we'll run through the architecture of the modern warehouse, from our structured/unstructured Azure Data Lake architecture to our platform as a service Azure Data Warehouse and the various tools bringing the two together.

Expect a general overview of these new Azure components, a little BI theory and practical demonstrations of setting up Data Lake, writing U-SQL, linking up Data Warehouse and exposing data to users, all from the ground up (demo gods permitting)
300
Dev
Coding Dojos – Trendy in the development community for honing skills, instilling good practice and sharing techniques. But that’s not something that applies to SQL Developers, because we’re not real developers, right? Wrong – and I’ll show you why.

In this session, I’ll take you through some accessible dojo challenges (katas) and how they can be achieved in SQL, as well as why you should be running your own dojos, either internally or within your networks.

A code dojo sets out a challenge and the participants either take turns driving with others feeding in, or work in pairs/groups to solve the challenge and then compare solutions. Done regularly, they help build confidence in problem solving and encourage developer collaboration. I’ve also yet to find a kata that could not be achieved in SQL, albeit some are not the prettiest!

To celebrate SQLBits being in my home town, there will be a distinct Scouse theme to the challenges…
300
BI
Making sure they are analyzing the latest data available is often critical for your customers. From Excel files in a OneDrive folder to Azure IoT solutions, via enterprise or personal gateways, welcome to a world of many possibilities. Which one is best suited to your needs and your existing environment? This session includes many demos and proposes to explain these different solutions, one by one. We’ll also discuss benefits, constraints, free vs. Pro subscriptions and much more!
200
BI
If you want to accelerate the testing of your BI solutions, the best strategy is to automate your tests with the help of a dedicated framework. During this session, we’ll take a look at the features of the open-source framework named “NBi” (www.nbi.io). This framework provides support for automated tests in the fields of databases, cubes, reports and ETLs, without the need for .NET skills. The demos will show us the best approaches to quickly and effectively assert the quality of BI developments. We'll go a step further, generating the tests with an interesting system of templates and test-case sources.
200
BI
SSIS is a powerful tool for extracting, transforming and loading data, but creating and maintaining a large number of SSIS packages can be both tedious and time-consuming. Even if you use templates and follow best practices you often have to repeat the same steps over and over and over again. Handling metadata and schema changes is a manual process, and there are no easy ways to implement new requirements in multiple packages at the same time.

It is time to bring the Don't Repeat Yourself (DRY) software engineering principle to SSIS projects. First learn how to use Biml (Business Intelligence Markup Language) and BimlScript to generate SSIS packages from database metadata and implement changes in all packages with just a few clicks. Then take the DRY principle one step further and learn how to update all packages in multiple projects by separating and reusing common code.

Speed up your SSIS development by using Biml and BimlScript, and see how you can complete in a day what once took more than a week!
200
BI
"Wait, what? Biml is not just for generating SSIS packages?"

Absolutely not! Let's take a look at how we can use Biml to save time and speed up other data warehouse development tasks: T-SQL development, database maintenance, data creation, test and deployment.

Don't Repeat Yourself - Start automating those boring, manual tasks today!
400
BI
You already know how to build a staging environment in an hour, so let's dive straight into some of the more advanced features of Biml. We will start by looking at how to centralize and reuse code, and how to use the built-in CallBimlScript and LINQ methods. Then we will create our own C# classes and methods. Finally, we will put it all together and create a metadata model and take the first step towards a fully-automated data warehouse framework.
300
Dev
A lot of companies have a philosophy of shipping early with as many features as possible.
Security is an afterthought, since it isn't fun to do and no one will attack them anyway.
But the dark side never sleeps, and security breaches have always happened. 
Many have left companies severely exposed or even bankrupt.

In this session we'll look at a few attack vectors that can be used against your company,
and what you as a developer can and should do to protect against them.
It will involve a good mix of security-conscious SQL Server and application development. 
Because you care about your work and nobody messes with you. 
300
Dev
This is a cloud free session, because sometimes you just can't go to the clouds and have to have an on-premise solution to data visualizations.
We're all familiar with reports or dashboards that show you a static snapshot of the data that has to be refreshed on an interval. Although those are very important visualizations, sometimes you just have to have a real time view of your data streams and data snapshots aren't enough.
What if you could monitor multiple servers with SQL Trace or Extended Events or had some other source of streaming data and be able to see it all happening live on a central monitoring website that you can view on any device? And all this in pure real time!
This is a scenario we'll take a detailed look at and build a system for such monitoring. We'll do this by using the Extended Events .NET provider to get the live data stream, SignalR to get the live stream from the server to the website, and the D3 JavaScript library for actual real-time visualizations on any device. After seeing all this in action, you'll definitely get a few ideas on where you could use these techniques.
300
BI
With retail businesses operating in multiple sales channels, the velocity of sales data and the quality of that data can vary greatly.

This is a key consideration when designing the operational characteristics of your backroom ETL subsystems that stage and transform sales data into usable FACTs.

Cloning key tables and utilising a parallel-processing ETL stream enables cookie-cutter software development and controlled, synchronised loading of FACTs into the data warehouse presentation layer. The use of dynamically created table clones can reduce database entity clutter, optimise the development and execution of FACT-creation SSIS packages and improve data quality & auditing.

These elements are crucial to scaling out the data warehouse solution as the business grows.

In this session I will review the design, describe the basic components and demonstrate the core functionality.
100
Car
Incredible focus is now upon the BI arena. The MS BI stack is a compelling choice, especially for businesses in transition from small to medium or large enterprises. Technology roadmaps present significant challenges, but what about the people element?


Do you want to simply deliver the next-gen reporting system, or can you play a part in improving the way your organisation measures business performance and makes decisions?


The session will cover


-pros & cons of basic SSRS deployments
-what those data analysts are actually doing
-what KPIs are, where to get them & the issues this presents
-how deployment of BI solutions can challenge the IT department status quo
-a model for resourcing and how this maps to information deployment
300
DBA
The most coveted features of SQL Server are made available in Enterprise Edition and are sometimes released into Standard Edition a few years later. This often leaves a vast group of users who "window shop" the latest and greatest features and return to the office wishing they never saw those features presented.

This session will show you how you can achieve the same, or at least a similar, outcome to some of those features without having to fork out for Enterprise Edition licenses or breaking any license agreements. You will leave the session with a set of solution concepts covering Partitioning, Data Compression and High Availability that you can build upon or extend and maybe save you and your company a nice pile of cash.
200
DBA
In an AlwaysOn world we focus on entire databases being highly available. However, replication offers another, arguably more powerful, way to make data available on multiple servers/locations that steps outside of "normal" High Availability scenarios. This session will explain what database replication is, what the different parts are that make up the replication architecture, and when/why you would use replication. You will leave the session with an understanding of how you can leverage this feature to achieve solutions that are not possible using other High Availability features. The content will be valid for all versions of SQL Server from 2005 onward.
200
DBA
SQL Server has come a long way in the last few years, with Microsoft investing heavily in High Availability features. This session will discuss these features to enable you to safely upgrade a SQL Server, while ensuring you have a return path if things should go wrong. You will leave the session knowing what features you can use to upgrade either the OS, Hardware or SQL Server version while keeping your maintenance window to a minimum. The session will apply to Standard Edition as well as Enterprise Edition, so doesn't only apply to "High Rollers"!
300
BI
This session is dedicated to the many new DAX features in Power BI Desktop, Analysis Services 2016, and Power Pivot for Excel 2016. You will learn new aggregation and statistical functions, new ways to filter and manipulate tables, new optimized techniques to query data with a table expression, and the syntax to define and use variables in DAX expressions. These new features make DAX easier to read and improve its performance, too.
300
BI
The Tabular model in Power Pivot for Excel, Power BI and SSAS Tabular seems to offer only plain-vanilla one-to-many relationships, based on a single column. In 2015 there was the introduction of
many-to-many relationships, yet the model seems somewhat poor when compared with SSAS Multidimensional. In reality, by leveraging the DAX language, you can handle virtually any kind of relationship, no matter how complex they are. In this session we will analyze and solve several scenarios with calculated relationships, virtual relationships, complex many-to-many. The goal of the session is to show how to solve complex scenarios with the aid of the DAX language to build unconventional data models.
400
BI
Tabular is a great engine that is capable of tremendous performance. That said, when your model gets bigger, you need to use the most sophisticated tools and techniques to obtain the best performance out of it. In this session we will show you how Tabular performs when you are querying a model with many billions of rows, conduct a complete analysis of the model searching for optimization ideas and implement them on the fly, so as to look at the effect of using best practices on large models. This will also give you a realistic idea of what Tabular can do for you when you need to work on large models.
300
DBA
Are you looking at upgrading SQL Server to a newer version? Let's face it, this is a dull process. Take a trace of production, replay it on a test environment, rinse and repeat until you're happy..... This is no fun. What about having a process where a one line PowerShell call could kick off a trace, restore the databases on the test instance (to the second before the test started), and replay it capturing all the errors? Now wouldn't that be nice. Repeat the same test on 2012, 2014, and 2016 and compare the results? Easy.

In this session we'll see how to make a control database to store trace metadata and how to use this in conjunction with PowerShell and Distributed Replay to make an automated harness for instance upgrade testing. Using a 2008 R2 instance, we'll capture a trace, replay it on a 2012 instance, a 2014 instance, and a 2016 instance, capturing all errors introduced. Whilst we're there, we may as well also replay the same trace on 2016 running in 2012 and 2014 compatibility mode. Well why not, cover all the options hey. It'll be so easy that you'll be able to queue up tests and let them run while you go and do something more interesting instead.

If you really need to get cracking and upgrade, come and see how a large amount of pain can be removed from the process. And get a snazzy report to show to your boss to boot. Us DBAs hate boring drudgery, bosses love snazzy reports, and everyone loves moving to the latest version. Result.
200
BI
Do you know all the different ways to refresh your data in the Power BI service? Attend this session to get the complete overview including a lot of demos and input, so you can choose the right ways for your solution.
  1. Upload
  2. Scheduled refresh from On-prem Sources
  3. Scheduled refresh from Cloud Sources
  4. Automatic refresh from OneDrive
  5. Direct Query
  6. Live Query
  7. Realtime
We will walk through the pros, cons and limitations of all the different methods.
300
DBA
The way SQL Server estimates cardinality for a query has been updated in SQL 2014. In this session we will discuss why cardinality matters, the differences between the SQL 2014 cardinality and previous versions, and how to evaluate if your queries will benefit after upgrading from previous versions of SQL Server.
300
DBA
When moving databases to a virtual environment the performance metrics DBAs typically use to troubleshoot performance issues such as O/S metrics, storage configurations, CPU allocation and more become unreliable. DBAs no longer have a clear, reliable view of the factors impacting database performance. Understanding the difference between the physical and virtual server environment adds a new dimension to the DBA tasks. This presentation describes the changes that DBAs need to make in their performance and monitoring practices.

Attendees at this session will learn:
  • Proper configuration considerations for virtual servers running SQL Server
  • How to identify performance bottlenecks in a virtual environment
  • How to properly troubleshoot issues related to virtualized workloads
300
DBA
Great database performance starts with great database design. During the database design process it is important to select your datatypes wisely. The wrong choices will often lead to wasted space, increased response times, and less stability. Additionally you run the risk of having your design not scale as well as it should. Leave this session armed with the knowledge you need to help your databases perform at their peak efficiency.

Attendees of this session will learn:
  • How to properly select the correct datatypes
  • How to identify poorly chosen datatypes
  • How to mitigate performance issues due to bad datatypes
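As a tiny illustration of the kind of sizing decision involved (hypothetical tables): datetime takes 8 bytes where datetime2(0) stores to the second in 6, and a fixed nchar(10) always burns 20 bytes where varchar(10) holds a short code in far less. Multiplied across hundreds of millions of rows, such choices decide gigabytes of storage, buffer pool and I/O:

```sql
-- Two versions of the same table: oversized vs. right-sized datatypes
CREATE TABLE dbo.Events_Wide  (EventAt datetime,     Code nchar(10));   -- 8 + 20 bytes per row
CREATE TABLE dbo.Events_Tight (EventAt datetime2(0), Code varchar(10)); -- 6 + up to 12 bytes
```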
300
BI
This session is all about putting big data to work. To do that we need to be clear on objectives and to transform a proof of concept into a reliable, resilient and repeatable production process.  To make that work we'll show you how to hook up Data Lake to a variety of sources with Data Factory, using Visual Studio so that we also get source control.  The trick with all of this is to use the scalability of the cloud to minimise the costs involved, by turning the services we need on and off.  Inevitably things will go wrong with our demo, and that is part of the plan, as we want to show you how to diagnose problems and fix them.   
100
BI
How do you assess the use of your data, given that laws are local and data is global? Not only that, but laws take time to be passed and take even longer to be tested in court and established as case law. Having established your use of data, how do you try to enforce it across your organisation in a self-service data world? These are the kinds of problems any data-driven business faces every day, and in this session I won't be telling you what to do, but rather focusing on how to do data governance correctly, based on best practice at Microsoft. I'll be using a lot of real-world examples (with some necessary name changes!), but I'll also need your help to make this a great session. So please bring your concerns and your comments to make this more of a discussion than a lecture. 
200
BI
While there are numerous tools to interrogate big data, they are traditionally low-level and Java-based, so it can take a long time to develop and set up processes that derive real value from your big data.    With Azure Data Lake you can still do this by laying an HDInsight (Hadoop on Azure) cluster over the Data Lake storage engine, as Data Lake is HDFS-compatible. However, if you are like me and grew up in a world of SQL, it can be too much of a stretch to use, and you spend too much time worrying about cluster size and configuration.  Azure Data Lake Analytics addresses this by letting you focus on the query and process in a familiar combination of C# (to define the schema) and SQL to conduct the analysis itself. In this session we'll assume you know SQL but not too much C#, and walk you through how this amazing service is both familiar and very different from the SQL you know and love.
300
BI
Delivering eye-catching insight has now become almost ludicrously easy. You just fire up Power BI and … wait a second. Yes, it has never been easier to add the “wow” factor to presentations based on data. However, there are still techniques, tricks and traps that you need to know if you are going to leave your audience impressed with your insights rather than drowning in your data. This session takes you through the use and limitations of all the visualizations that come out of the box with Power BI, as well as a quick tour of many of the custom visuals that are now available. You will see which type of delivery is best suited to which kind of data. You will also learn how best to structure and adapt the Power BI data model so that it can feed into the different visualizations for maximum effect.
300
Car
This session will enlighten a regular data geek like you to speak out in the crowd as a successful speaker and help other individuals. It will also highlight and remove bad practices, and add more best practices, so you can deliver a speech in a more meaningful way technically while conveying a useful message to your audience.

Take this as an opportunity to help not only the technical community but also non-technical people around the globe. This programme addresses a unique requirement: a better understanding of an individual's or organisation's day-to-day data, collected simply and analysed further to produce powerful visualisations that help drive better decisions. 
200
DBA
Audit. The very word strikes fear in the bravest of hearts. But as a DBA, the need to know who is doing what in your production databases is critical. In SQL Server 2008, Microsoft finally gave us a true auditing tool. But how does it work, what exactly can it track, and how can you handle its output?

In this session, we cover SQL Server Audit from the ground up. We go from the basics of which events can be audited to a look at how SQL Server Audit works "under the covers", and what that means for performance. While implementing server and database audits, we discuss audit granularity and filtering, as well as the pros and cons of Audit's output options.

Whether it's through the SQL Server Management Studio (SSMS) interface, via T-SQL, or using PowerShell and Server Management Objects (SMO), at the end of this session you'll be able to deploy SQL Server Audit across your enterprise and manage its output, fearlessly.
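To set expectations, the moving parts fit together roughly like this (a sketch; the audit, database and object names are illustrative):

```sql
-- A server audit defines where the audit records go...
USE master;
CREATE SERVER AUDIT [OrdersAudit]
    TO FILE (FILEPATH = N'D:\Audits\');
ALTER SERVER AUDIT [OrdersAudit] WITH (STATE = ON);

-- ...and a database audit specification defines what gets recorded
USE SalesDB;
CREATE DATABASE AUDIT SPECIFICATION [OrdersAuditSpec]
    FOR SERVER AUDIT [OrdersAudit]
    ADD (SELECT, UPDATE ON dbo.Orders BY public)
    WITH (STATE = ON);
```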
300
Dev
Parameters are a fundamental part of T-SQL programming, whether they are used in stored procedures, in dynamic statements or in ad-hoc queries. Although widely used, most people aren't aware of the crucial influence they have on query performance. In fact, wrong use of parameters is one of the common reasons for poor application performance.

In this session we will learn about plan caching and how the query optimizer handles parameters. We will talk about the pros and cons of parameter sniffing as well as about simple vs. forced parameterization. But most important – we will learn how to identify performance problems caused by poor parameter handling, and we will also learn many techniques for solving these problems and boosting your application performance.
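The contrast at the heart of the session can be sketched in three statements (the table and parameter names are made up):

```sql
-- Ad-hoc literal: a plan compiled for this exact statement text
SELECT OrderID FROM dbo.Orders WHERE CustomerID = 42;

-- Parameterized: one cached plan, "sniffed" from the first value passed in
EXEC sp_executesql
    N'SELECT OrderID FROM dbo.Orders WHERE CustomerID = @c',
    N'@c int', @c = 42;

-- One escape hatch when the sniffed plan hurts atypical values:
-- pay a fresh compile on every execution instead
DECLARE @c int = 42;
SELECT OrderID FROM dbo.Orders WHERE CustomerID = @c OPTION (RECOMPILE);
```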
400
Dev
You identified a query that is causing performance issues, and your mission is to optimize it and boost performance. You looked at the execution plan and created all the relevant indexes. You even updated statistics, but performance is still bad. Now what?

Put your mask on and get ready to dive into query tuning and optimization. In this session we will analyze common use cases of poorly performing queries, such as improper use of scalar functions, inaccurate statistics and bad impact of parameter sniffing. We will learn through extensive demos how to troubleshoot these use cases and how to boost performance using advanced and practical techniques. By the end of this session, you'll have a list of tips and techniques to apply on your environment.

This session is 100% demo, no slides!
300
DBA
A common use case in many databases is a very large table, which serves as some kind of activity log, with an ever increasing date/time column. This table is usually partitioned, and it suffers from heavy load of reads and writes. Such a table presents a challenge in terms of maintenance and performance. Activities such as loading data into the table, querying the table, rebuilding indexes or updating statistics become quite challenging.

SQL Server 2016 and SQL Server 2014 offer several new features that can make all these challenges go away. In this session we will analyze a use case involving such a large table. We will examine features such as Incremental Statistics, New Cardinality Estimation, Delayed Durability and Stretch Database, and we will apply them on our challenging table and see what happens...
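Two of the features named above can be sketched in a few lines (the database, table, statistics name and partition number are all illustrative):

```sql
-- Incremental statistics: refresh only the partition that changed
CREATE STATISTICS st_ActivityLog_LogDate
    ON dbo.ActivityLog (LogDate) WITH INCREMENTAL = ON;
UPDATE STATISTICS dbo.ActivityLog (st_ActivityLog_LogDate)
    WITH RESAMPLE ON PARTITIONS (42);

-- Delayed durability: allow transactions to trade a small loss window
-- for much cheaper log writes
ALTER DATABASE SalesDB SET DELAYED_DURABILITY = ALLOWED;
```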
300
DBA
Extended Events is a highly scalable and highly configurable monitoring platform, which every DBA must be familiar with. It has many advantages over alternative tools, such as Profiler or Dynamic Management Views. In some cases, it is the only tool that can provide the desired monitoring solution.

In this session we will demonstrate several common use cases, such as monitoring query waits, troubleshooting deadlocks and monitoring page splits. We will demonstrate how to set up an event session for
each use case and how to analyze the collected data in a meaningful way. By the end of this session, you'll have several practical monitoring and troubleshooting solutions to apply on your environment.
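For orientation, a deadlock-monitoring event session of the kind demonstrated looks roughly like this (the session name and file paths are illustrative):

```sql
-- Capture deadlock reports into a file target
CREATE EVENT SESSION [DeadlockWatch] ON SERVER
    ADD EVENT sqlserver.xml_deadlock_report
    ADD TARGET package0.event_file (SET filename = N'D:\XE\DeadlockWatch.xel');
ALTER EVENT SESSION [DeadlockWatch] ON SERVER STATE = START;

-- Read the collected events back with T-SQL
SELECT CAST(event_data AS xml) AS deadlock_xml
FROM sys.fn_xe_file_target_read_file(N'D:\XE\DeadlockWatch*.xel', NULL, NULL, NULL);
```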
300
Dev
In this session we will present and demonstrate many tips & tricks that can help developers get the best out of SQL Server. The tips you'll learn in this session will help you increase productivity and improve the performance of your application. We picked topics that are less known and yet have a great impact.

Here are some examples:
  • The Problem with Functions (and the Solution)
  • Managing Nested Transactions
  • The Problem with Local Variables (and the Solution)
  • Managing Hierarchies Efficiently
The session is based on SQL Server 2016, and includes some of the new features in the new version, but it is relevant also for older versions of SQL Server.
300
DBA
In this session we will present and demonstrate many tips & tricks that can help database administrators get the best out of SQL Server. The tips you'll learn in this session will help you improve the productivity, availability, security and performance of your system. We picked topics that are less known and yet have a great impact.

Here are a few examples:
  • File Sizing
  • Simple vs. Forced Parameterization
  • SQL Server Error Log Management
  • Transaction Log Internals
The session is based on SQL Server 2016, and includes some of the new features in the new version, but it is relevant also for older versions of SQL Server.
300
DBA
Knowing the limits and maximums of your SQL Server implementation is critical for success. The maximums drive capacity planning, scalability and performance conversations. This informative session will show you how to eliminate the subjectivity in the performance of your mission-critical SQL Servers and replace it with objective performance metrics. Benchmarks and baselines can help you quickly find any performance anomalies in the environment, plus can help you project performance and capacity into the future. Tools, techniques, and scripts will be shared and demonstrated.
200
DBA
Companies are creating more and more technology service offerings, but they are often throttled by IT. To address this, many IT departments are adopting a new concept called DevOps. DevOps is a concept which has quickly risen from niche to mainstream within the IT enterprise. At its core, DevOps represents a way to leverage people, processes and tools to remove barriers and improve service delivery. An understanding of scripting and automation platforms is becoming a critical skill for SQL Server professionals. This session will discuss how technologies such as virtualization, the cloud and software-defined data centers have blurred the lines between developers and sysadmins. We will also talk about the competencies and tools required to succeed in an increasingly data-driven workplace.
200
DBA
You were just anointed DBA by the head of development because you knew how to create a database and add some tables. Or maybe you were the network engineer who had installed SQL Server and added SQL logins. Now what do you do, and how do you keep your company's data protected and the SQL Servers up and running? This session will provide you with a road map for succeeding as a DBA. We will cover all of the basics that the typical DBA needs to know, focusing on Day 1 and Year 1. By the end of this session, you will know what is important and should be on your daily task list, and what you can ignore. With data growing exponentially, you have lucked out if you truly like being a DBA, because the sky is the limit.
200
BI
This session is taken from my article "Using a Data Warehouse for CRM", which I wrote with Amar Gaddam: https://dwbi1.wordpress.com/2010/01/28/using-data-warehouse-for-crm/

A data warehouse is not only for BI. It can also be used for Customer Relationship Management (CRM). In fact, a DW is the best platform for doing CRM. In this session I will show how to use a data warehouse built in SQL Server for core CRM activities such as: 1. Single Customer View, 2. Permission management, 3. Campaign segmentation, 4. Campaign results, 5. Customer analysis, 6. Personalisation, 7. Customer insight, 8. Customer loyalty schemes.

If you don't work in CRM and are not familiar with the above concepts, don't worry. I will explain them one by one during the session, although it would help a lot if you read my article above before coming.

For each point I will try to show the table design in the SQL Server DW so we can all see how they are implemented, not just the theory. Due to the time limit I may not be able to cover all of the above points, but I promise I will try to cover as much as possible.
200
BI
BPM = Business Performance Management. A Data Warehouse is not only used for BI. It is also used for CRM and BPM. In this session I will show how to use a Data Warehouse for BPM using Balanced Scorecard. Before the session I will have built a Data Warehouse on SQL Server, for BPM, and during the session I will show the design of this DW. 

The Data Warehouse will contain Sales data, Financial data, Customer Service data, and Order Processing data. Each of these data sets will form part of the Balanced Scorecard. In addition to Fact and Dimension tables, a Data Warehouse used for BPM contains one additional area which stores the KPI scores. I will show how this area is designed, and how the KPI values and scores are calculated.

As a takeaway, I hope the audience will learn how a DW is used outside BI, how the additional area is designed and built, and how it is all implemented on a SQL Server platform. I will also show the SSIS packages that populate the Data Warehouse from the source system, and the SSRS reports which show the KPIs, the Balanced Scorecard and the performance scoring calculation.
200
Dev
In this session we will be looking at the best and worst practices for indexing tables within your SQL Server 2000-2016 databases.  We will also be looking into the new indexing features that are available in SQL Server 2012/2016 (and SQL Server 2005/2008) and how you, the .NET developer, can make the best use of them to get your code running at its best.
300
Dev
So you are a developer or a systems admin and you've just been handed a SQL Server database and you've got no idea what to do with it.  I've got some of the answers here in this session for you.  During this session we will cover a variety of topics including backup and restore, recovery models, database maintenance, compression, data corruption, database compatibility levels and indexing. While this session won't teach you everything you need to know, it will give you some insights into the SQL Server database engine and give you the ability to better know what to look for.
300
DBA
In this session we'll look over some of the things which you should be looking at within your virtual environment to ensure that you are getting the performance out of it that you should be.  This will include how to look for CPU performance issues at the host level.  We will also be discussing the Memory Balloon drivers and what they actually do, and how you should be configuring them, and why.  We'll discuss some of the memory sharing technologies which are built into vSphere and Hyper-V and how they relate to SQL Server.  Then we will finish up with some storage configuration options to look at.
300
Dev
In this session we will review the new enhancements to SQL Server security available in SQL Server 2016 and Azure SQL DB.  These include Always Encrypted, Row-Level Security and Dynamic Data Masking, as well as whatever else Microsoft has released since I wrote this abstract. We'll look at how to set these features up, how to use them, and most importantly when to use them.
300
DBA
One of the biggest issues in database performance centers around storage.  It’s also one of the hardest places to troubleshoot performance issues because storage engineers and database administrators often do not speak the same language.  In this session, we’ll be looking at storage from both the database and storage perspectives.   We’ll be digging into LUNs, HBAs, the fabric, as well as RAID Groups.  In addition to theory, we’ll be looking at an actual EMC SAN so that we can translate what we see in the Storage Array with what we see on the actual server.
200
DBA
In this fun session we'll review a bunch of problem implementations that have been seen in the real world.  Most importantly we will look at why these implementations went horribly wrong so that we can learn from them and never repeat these mistakes again.
500
Dev
The SQL Server Query Optimizer makes its plan choices based on estimated rowcounts. If those estimates are wrong, the optimizer will very likely produce a poor plan. And there's nothing you can do about it. Or is there?

In this session, you will learn exactly where these estimates come from. You will gain intimate knowledge of how statistics are used to estimate row counts, and how filters and joins further influence those estimates.

Though the focus of this session is on understanding the cause of bad estimates, you will also learn some ways to fix the problems and get better estimates - and hence, better performing queries.
300
Dev
User-defined functions in SQL Server are very much like custom methods and properties in .Net languages. At first sight, they seem to be the perfect tool to introduce code encapsulation and reuse in T-SQL. So why is this feature mostly avoided by all T-SQL gurus?

The reason is performance. In this session, you will learn how user-defined functions feed the optimizer with misleading and insufficient information, how the optimizer fails to use even what little information it has, and how this can lead to shocking query performance.

However, you will also see that there is a way to avoid the problems. With just a little extra effort, you can reap the benefits of code encapsulation and reuse, and still get good performance.
300
DBA
Failover clustering is no longer the default option for making database servers highly available.  The benefits of Availability Groups coupled with the limitations of cloud platforms mean some systems can’t use failover cluster instances or need more than they can offer.  To complicate designing a solution even more, SQL Server 2016 introduces Basic Availability Groups in the standard edition that arguably could see failover clusters heading for extinction.   This session will compare Microsoft’s options for deploying highly available SQL Server data platforms in mid-2016.  It will use examples of solution designs to help data professionals understand each feature’s strengths and capabilities whether they’re deployed in on-premises data centres or using Microsoft Azure services.
400
Dev
Do you believe the myths that “Third Normal Form is good enough”, or that “Higher Normal Forms are hard to understand”?
Do you believe the people who claim that these statements are myths?
Or do you prefer to form your own opinion?

If you take database design seriously, you cannot afford to miss this session. You will get a clear and easy to understand overview of all the higher Normal Forms: what they are, how to check if they are met, and what consequences their violations can have. This will arm you with the knowledge to reject the myths about higher Normal Forms. But, more important: it will make you a better designer!
300
DBA
For DBAs who haven’t yet started using cloud services, knowing how to start seems to get harder every day.  While there are familiar product names and concepts, there are just as many new ones being announced all the time.  Fortunately, DBAs normally only need to know about a few of them to deploy and operate their first database server in the cloud.   This session introduces database administrators to Microsoft Azure.   It uses terminology and examples recognisable to DBAs to present the different formats of cloud services and their availability, scalability and capabilities.  The session aims to give DBAs the confidence to discover more about deploying and operating their database servers in Microsoft Azure and using its supporting storage, networking and authentication services.
300
Dev
The Azure SQL Database service now provides the same core functionality as the SQL Server 2016 database engine yet is arguably easier to deploy, manage and scale.  However, it was never designed to meet every database server requirement so solutions need to play to its strengths rather than suffer from its limitations.    This session reviews the Azure SQL Database’s strengths and limitations, and shows how organisations are making it part of their data strategy.  It considers other Platform as a Service format services within Microsoft Azure that are commonly used with it to create integrated solutions.  The session also gives examples of workarounds for some of the features not provided by the Azure SQL Database service.
300
Dev
We’ve all dealt with nightmare queries: huge, twisted monsters that somehow work, despite being ugly and unmanageable. The time has come to tame these beasts, and the solution is available now, in SQL Server 2012.

New T-SQL functions offer out-of-the-box solutions for many problems that previously required complex workarounds. Paging, running totals, moving aggregates, YTD and much more is at your fingertips in SQL Server 2012. The only thing you need to do is learn the syntax. And that is exactly what this session is all about: a thorough description and explanation of the syntax, and loads of demos to show how you can use all these new features.

Attend this session to boldly take SQL Server where it has never gone before!
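
As a taster of what this syntax looks like, here is a minimal T-SQL sketch of 2012's paging and running-total features. The dbo.Orders table is a placeholder of mine for illustration, not the speaker's demo material:

```sql
-- Paging with OFFSET/FETCH (introduced in SQL Server 2012):
-- skip the first 20 rows and return the next 10.
SELECT OrderID, OrderDate, Amount
FROM dbo.Orders
ORDER BY OrderDate
OFFSET 20 ROWS FETCH NEXT 10 ROWS ONLY;

-- A running total using the new window-frame syntax:
SELECT OrderDate, Amount,
       SUM(Amount) OVER (ORDER BY OrderDate
                         ROWS BETWEEN UNBOUNDED PRECEDING
                              AND CURRENT ROW) AS RunningTotal
FROM dbo.Orders;
```

On earlier versions, paging meant ROW_NUMBER in a subquery and running totals meant self-joins or cursors; these one-liners are exactly the workaround-killers the session describes.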
300
Car
Cloud adoption has happened, and almost every organisation now uses some form of public cloud services.  This means we no longer have to listen to how vendors want us to use their cloud services; instead we can see how those around us actually are using them.  For those new to working with the cloud, this makes it easier to know what we need to learn so we can design, deploy and operate in the cloud era.  However, adapting your skills isn’t just about understanding how a few new technologies work.  Today’s IT professional also needs to know how trends like DevOps, the API economy and containers are going to affect them.   This session aims to give clarity and share real-world experience with those getting ready for, or getting used to, working with public cloud services.  Whether you’re a developer, an administrator or an architect, it will help you turn a complex world of cloud jargon into an actionable learning plan. Its agenda will include:
  • The relationship between private, public and hybrid clouds – and why most are jumping straight to the public cloud
  • The emerging differences between IaaS, PaaS and SaaS – and why PaaS has caught a lot by surprise
  • The rise of DevOps – and why it’s risen
200
Car
The certification needs of Microsoft data professionals have changed.  Having deep knowledge of a single product area often isn’t enough anymore.  Instead, a broad knowledge of a range of subjects can be required.  Any Microsoft data professional certification path now needs to consider products hosted both on-premises and in the cloud, using open source as well as Microsoft technologies and languages.

For people about to start their technical certification path, this can make it difficult to know where to start.  How can Microsoft data professionals balance their short-term education needs with their long-term career development?  Part of this session suggests that everyone still needs to build their career around a single specialist subject and then understand more about the technologies related to it.

For experienced professionals, this session suggests possible answers to the “what next?” question.  Microsoft’s retirement of its advanced certification programmes allows data professionals to broaden their knowledge – but how broad?  At what point does it become irrelevant?  Just as importantly, when should they step back from technical certifications and deepen their non-technical skills with knowledge about solution architecture or strategy?
300
BI
Many data professionals in finance will be familiar with the massive volume and variety of data generated by banks to measure the risks arising from their trading and lending.  Every day, these activities generate millions of valuations of trades, positions and risk measures, and prices of foreign exchange, shares and bonds.  The bank's risk managers need to make fast decisions on this data every day, and they need powerful interactive dashboards to help them make sense of it; data analysts can use Power BI to create these.  This talk will show lots of examples of useful visualisations of financial risks.  For example, tree-maps show the composition of risk within the bank and across countries and asset classes. Bar charts coupled with slicers can drill down from the firm-wide big picture to trading details.  Bullet charts show usage against limits and daily changes. These examples will highlight some of the capabilities of Power BI, including custom visualisations, integration with R, various ways of loading and editing data, calculations, and tailoring the interaction between visual elements.
300
BI
Banks use risk management to reduce the chance of a large loss due to the price movement of financial assets that they hold (e.g. bonds, equities) or if a client defaults on a loan or payment. Good risk management depends on many factors (independent oversight, good models, proper audit, well-formulated policy) but at its core it is a business analytics problem. The challenge is to take a very large and varied set of data (trades, sensitivities, prices) from a myriad of different sources inside and outside the bank, calculate the risks and provide useful, timely reports and dashboards that enable risk managers and board committees to take informed action. This talk explains why risk management is essential and why it is a hard problem, and describes some approaches to data, analytics and reporting to meet the challenge. The talk is aimed at people who do not have knowledge of risk management.  I’ll explain the background and the basics of the data and calculations involved.
200
BI
Tools For Exploratory Data Analysis. The first thing we need to do with a new set of data is to explore it.  We want to get an idea of the usefulness, relevance, completeness and quality of the data for our purposes, and to explore patterns, trends and outliers to gain some insight.   In this session we’ll explore a public dataset: the Titanic passenger list. Most people know of the tragic events. The RMS Titanic sailed from Southampton on 10th April 1912, hit an iceberg in the Atlantic 4 days later and sank. Of the 2000 people on board, only 700 survived.   We'll build some rough and ready visualisations and crunch some basic statistics on this dataset using four different tools: Excel & Power Query, Power BI Desktop, R and Azure Machine Learning.  The purpose of the session is to introduce you to some tools that you may not be familiar with, so that you can decide which you prefer for exploring your data.
300
BI
For data-related applications, one thing worse than having bad data is having no data.  What happens if you have a 6-month project and your data will only properly be available at the end of month 5, yet your team of BAs, developers and QA are getting started now? One option is to build a fictional set of data that resembles as closely as possible what you expect your real data set to look like (warts and all).  This talk offers some advice on how to do this.  We'll look at an example of creating realistic data for a market risk application and show possible implementations using Excel, R and SQL.   There are some benefits to this approach, especially for analysis and QA, that may mean you want to consider doing this even if you are lucky enough to have good data from the start.
300
Dev
With the advent of the SQL Server Integration Services Catalog (SSISDB), a new place to store, execute, and monitor SSIS packages came into existence.

This session shows the different aspects of programmability in the context of SSISDB. Beginning with a short overview of the underlying database objects, a deeper look at SSISDB's stored procedures follows. A side-step from T-SQL to C# and the available SSIS SDK illustrates a different view of SSISDB access. In conclusion, the analysis and reporting aspects of SSISDB programmability are shown with some exemplified SQL Server Reporting Services (SSRS) reports. The different examples are based on industry-based project experiences.

After this session, you will have a deeper knowledge about SSISDB's content and programming interfaces, and you will know how to start SSIS packages using T-SQL and C#. The pros and cons of these programming techniques will also be discussed.
300
BI
A long time ago, Integration Services (SSIS) became part of SQL Server. Many enhancements were added to the product during its lifetime, but in the last few SQL Server releases things went quiet around SSIS.

But with SQL Server 2016, SSIS has been revamped and new features added.

In this session, Wolfgang will guide you through the new SSIS features: incremental package deployment, ErrorColumnName, custom logging levels, new connectivity possibilities and Control Flow templates. Get some insights from features already used in production data-integration projects!
400
Dev
The Internet of Things (IoT) is getting more and more attention - not only on the business side but also on the consumer side. Connected fridges, cars and smart watches - always and everywhere connected!

In this session Wolfgang will show you some possibilities of the Microsoft Band 2 SDK: how to connect to the device and read sensor data from it.

But what should be done with that data? Power BI seems an ideal candidate for analyzing and presenting that kind of data. The different approaches to real-time analytics (Stream Analytics, the Power BI API, ...) will be presented and their pros and cons weighed up.
The challenge: let's prepare a real-time dashboard of Band 2 data in Power BI in 60 minutes!
300
DBA
Maintaining a solid set of information about our servers and their performance is critical when issues arise, and often helps us see a problem before it occurs.  Building a baseline of performance metrics allows us to know when something is wrong and helps us to track it down and fix the problem.  This session will walk you through a series of PowerShell scripts you can schedule which will capture the most important data, and a set of reports that show you how to use that data to keep your server running smoothly.
300
DBA
- In-memory OLTP Enhancements
- Native JSON
- Always Encrypted
- Row Level Security
- Dynamic Data Masking
- Enhanced Always On
- Enterprise Grade Analysis Services
- Enhanced MDS
- Enhanced Reporting Services
- Built in Advanced Analytics with R 
200
DBA
With new regulatory requirements coming in, and given the publicity around data breaches, this session covers both how to encrypt your data and how to audit who has been accessing it.

It shows what is possible, how to implement it, and the additional requirements this places on your environment.

Transparent Database Encryption
- Encrypting with a Pass Phrase vs Transparent Database Encryption
- Symmetric Keys
- Asymmetric Keys
- Certificates
- Database Master Key
- Certificate creation
- Database Encryption Keys (DEK)
- Enabling Encryption
- Backup/Restore for TDE Encrypted Databases
- SQL Server 2016 Always Encrypted

SQL Server Auditing
- DDL Triggers
- Audit Specifications
- C2 Auditing
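
For orientation, the TDE items in the list above chain together roughly as follows. This is a minimal T-SQL sketch; the database, certificate and file names are placeholders of mine:

```sql
-- Server-side objects in master: the database master key and a certificate.
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';
CREATE CERTIFICATE TdeCert WITH SUBJECT = 'TDE certificate';
GO

-- The database encryption key (DEK) is protected by that certificate,
-- and enabling encryption starts the background encryption scan.
USE MyDatabase;
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE TdeCert;
ALTER DATABASE MyDatabase SET ENCRYPTION ON;
GO

-- Back up the certificate: without it, backups of a TDE-encrypted
-- database cannot be restored on another server.
USE master;
BACKUP CERTIFICATE TdeCert
    TO FILE = 'C:\Backup\TdeCert.cer'
    WITH PRIVATE KEY (FILE = 'C:\Backup\TdeCert.pvk',
                      ENCRYPTION BY PASSWORD = '<strong password>');
```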
400
DBA
SQL Server 2016 Always Encrypted
Given the publicity around data breaches, encryption has come to be seen as more important. SQL Server 2016 now has Always Encrypted, which is client-side encryption, so even server administrators cannot access your data!

Understand:
- How Always Encrypted works, including internals
- How to deploy Always Encrypted
- How to use PowerShell to automate the setup of certificates and the encrypted values needed to set up Column Encryption Keys
- How to use PowerShell/Microsoft Management Console to view certificates in the Windows certificate store
- Using Always Encrypted in C#, with a demo
- How to deploy the required certificates to a client machine, not the server machine!
- How to rotate encryption keys
- How to create a custom certificate store, with a demo
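
To give a flavour of the objects involved, here is a minimal DDL sketch of the setup described above. The key names, certificate path and table are illustrative placeholders of mine, and the ENCRYPTED_VALUE is normally generated by tooling (SSMS or PowerShell) and is truncated here:

```sql
-- Column master key: metadata pointing at a certificate that lives on
-- the CLIENT, in the Windows certificate store.
CREATE COLUMN MASTER KEY MyCMK
WITH (
    KEY_STORE_PROVIDER_NAME = N'MSSQL_CERTIFICATE_STORE',
    KEY_PATH = N'CurrentUser/My/<certificate thumbprint>'
);

-- Column encryption key: its value is encrypted by the CMK.
CREATE COLUMN ENCRYPTION KEY MyCEK
WITH VALUES (
    COLUMN_MASTER_KEY = MyCMK,
    ALGORITHM = 'RSA_OAEP',
    ENCRYPTED_VALUE = 0x016E...   -- tooling-generated, truncated
);

-- A column encrypted client-side; the server never sees the plaintext.
-- Deterministic encryption requires a BIN2 collation on the column.
CREATE TABLE dbo.Patients (
    PatientID int IDENTITY PRIMARY KEY,
    SSN char(11) COLLATE Latin1_General_BIN2
        ENCRYPTED WITH (
            COLUMN_ENCRYPTION_KEY = MyCEK,
            ENCRYPTION_TYPE = DETERMINISTIC,
            ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'
        )
);
```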
300
Dev
Failing to design an application with concurrency in mind, and failing to test an application with the maximum number of expected simultaneous users, is one of the main causes of poor application performance.   Locking and blocking is SQL Server’s default method of managing concurrency in a multi-user environment.  In this session we’ll look at the three main aspects of locking: the type of lock, the duration of the lock and the unit of locking. We’ll also look at when locks cause blocking and examine various ways to minimize blocking.   In addition to looking at the aspects of locking, in this session you will learn:
  • What metadata is available to show you: the locks that have been acquired; the processes that are blocked and who is blocking them; and the tables that have had the most problems due to locking and blocking.
  • What other tools are available to track down other locking and blocking issues.
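
As a hint of the kind of metadata the session covers, current blocking and lock information can be read straight from the dynamic management views. A minimal sketch (the column selection is mine):

```sql
-- Who is blocked right now, and by whom:
SELECT r.session_id, r.blocking_session_id,
       r.wait_type, r.wait_time, r.command
FROM sys.dm_exec_requests AS r
WHERE r.blocking_session_id <> 0;

-- Locks currently held or requested in this database, by object:
SELECT tl.request_session_id, tl.resource_type,
       tl.request_mode, tl.request_status,
       OBJECT_NAME(p.object_id) AS object_name
FROM sys.dm_tran_locks AS tl
LEFT JOIN sys.partitions AS p
       ON p.hobt_id = tl.resource_associated_entity_id
WHERE tl.resource_database_id = DB_ID();
```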
300
Dev
What exactly does it mean to have optimistic concurrency? What is the alternative? Is SQL Server 2012's SNAPSHOT Isolation optimistic?  How can SQL Server 2014's In-Memory OLTP provide truly optimistic concurrency? In this session, we'll look at what guarantees the various isolation levels provide, the difference between pessimistic and optimistic concurrency, and the new data structures in SQL Server 2014 that allow the enormous benefits of having totally in-memory storage with no waiting!
300
BI
With the changing landscape of Power BI features, it is essential to get hold of the configuration and deployment practices within your data platform that will keep you on a par with compliance and security requirements.  In this session we will go from the basics to advanced tricks across this landscape: How do you deploy Power BI? How do you implement configuration parameters and package BI features as part of an Office 365 roll-out in your organisation? What are the newest features and enhancements in the Power BI landscape? How do you manage on-premises vs on-cloud connectivity? And how can you help and support the Power BI community as well? Cloud computing has made it possible to get data to the end-user within a few clicks, so we will review how to manage and connect on-premises data to cloud capabilities that take full advantage of data catalogue features while keeping data secure as per Information Governance standards. Performance is another aspect that every admin has to keep up with, so we will look into a few settings that maximize performance and optimize access to data as required. You will gain understanding of, and insight into, a number of tools that are available for your Business Intelligence needs, with a showcase of demos on where to begin and how to proceed in the BI world.
400
BI
Machine Learning is Microsoft Azure's drag-and-drop service for building, testing and deploying any kind of predictive model on your data set. The finalized solution is published and used in daily business within the larger stack of your Microsoft Azure services. But however easy and interactive the creation of models may be, the algorithms and decisions do not tend to be that simple! Especially when one has to make business decisions based on the results.

The focus of this session will be a mathematical and graphical explanation of the algorithms available for predictive analytics in the Azure Machine Learning service. Algorithms - grouped by learning type - will be examined and cross-referenced across all that are available and ready to use. We will work from the basics - data inference, data splitting, data stratification, sweeping, SMOTE - through to the logic and theory of the algorithms: regression, decision trees/forests/jungles, clustering and Naive Bayes.

This session will clarify the confusion over algorithms: which data is suitable for which algorithm, and what kind of empirical problem can be tackled with each.
300
BI
With SQL Server 2016, the R language for statistical programming is now supported natively from T-SQL. With this extension we can integrate the powerful R language with transactional data directly in SSMS. Data stewards and data analysts can now run anything from simple univariate to multivariate statistics in SSMS. The implementation of R in SQL Server 2016 is one of the major parts of the BI landscape.

In this session we will go through:
1) The installation needed (R and RRO by Revolution Analytics, now Microsoft)
2) Exploring the usage of the RRO engine (multi-threaded usage and parallel multi-core usage of the CPU, ...)
3) Using T-SQL for data analysis, importing and exporting data to SQL tables
4) Demos using R in Reporting Services (SSRS) and Power BI
5) Using the R engine to enhance your daily work as a DBA or BI analyst
6) Exploring the prediction engine on datasets in your daily business work
7) Use cases showing how and where this powerful duo can make your daily business easier

The session is useful for BI analysts as well as for DEVs and DBAs, as we can easily apply it to server monitoring and use predictions for it (predicting when a disk will be full, and other extended events).
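
For context, in-database R runs through the sp_execute_external_script procedure. A minimal sketch, assuming a hypothetical dbo.Sales table and that R Services (in-database) is installed:

```sql
-- One-time configuration: allow external (R) scripts to run.
EXEC sp_configure 'external scripts enabled', 1;
RECONFIGURE;
GO

-- Feed a T-SQL result set into R and return an R data frame
-- as an ordinary result set.
EXEC sp_execute_external_script
    @language     = N'R',
    @script       = N'OutputDataSet <- data.frame(
                          avg = mean(InputDataSet$Amount),
                          sd  = sd(InputDataSet$Amount))',
    @input_data_1 = N'SELECT Amount FROM dbo.Sales'
WITH RESULT SETS ((avg float, sd float));
```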
300
Dev
The Internet of Things (IoT) is gaining popularity with the availability of small and cheap computers and the ability to store large amounts of data for future analysis. In this session we will show an example using a Raspberry Pi 2 device: how to program it in C#, and how to receive data from the Raspberry Pi and store it in the Azure IoT cloud. With Event Hubs and Stream Analytics in Azure we will see how the data is collected and later presented and visualized.
400
Dev
This session will focus on the data types that are available in SQL Server: from selecting the correct data type when creating objects (variables, expressions, tables, etc.) to building queries using special data types, with an overview of data type best practices.

Generally, a data type is an attribute that specifies the type of data an object (variable, expression, table, etc.) can hold. This brings many different rules for understanding how different data types work internally, which operations are preferred, precedence, conversions and many other functions.

The focus of the session will be:
1) selecting correct data types when creating data warehouse objects
2) building queries using proper data types
3) predicate behavior using correct data types
4) how data types influence execution plans  
5) exploring best practices and daily mistakes.
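
As one example of point 3, mismatched types can silently break a predicate. A minimal sketch, assuming a hypothetical dbo.Customers table with an indexed varchar LastName column:

```sql
-- N'Smith' is nvarchar; by data type precedence the varchar column is
-- implicitly converted (CONVERT_IMPLICIT). With a SQL collation this
-- forces an index scan; Windows collations can partially rescue it
-- with a range seek plus a residual predicate.
SELECT CustomerID FROM dbo.Customers WHERE LastName = N'Smith';

-- A literal of the column's own type keeps the predicate sargable,
-- allowing a straightforward index seek.
SELECT CustomerID FROM dbo.Customers WHERE LastName = 'Smith';
```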
300
BI
While more and more workloads have moved to virtual environments, the data warehouse is often one of the last physical servers in the data center. However, that doesn’t need to be the case. In this session you will learn about the challenges of implementing your BI environments in a virtual environment.


This includes your data warehouse, and its ancillary systems like Analysis Services, Integration Services and Reporting Services. Finally, we’ll talk a little bit about how Azure VMs can change this guidance.
300
DBA
Being a DBA is a tough job: there are on-call situations to deal with, and managing a large number of servers with fewer resources is a constant challenge in corporate environments. In this session you will learn techniques to reduce the amount of manual effort in your job, and to keep you three steps ahead of your users.
Learn techniques such as:

• Fully Automating SQL Server Installations
• Dynamically Adding Databases to an Availability Group
• Syncing jobs and logins between Availability Group members
• Patching SQL Servers automatically
• Other techniques for process automation 

This session will benefit both new and Senior DBAs, as well as anyone who wants to automate themselves into a promotion.
200
BI
Finally, the wait is almost over. After a few years of silence, Microsoft has made exciting announcements about SQL Server on-premises enhancements.

In this session, I will cover the top new features in SQL Server 2016 from a BI professional's perspective. You will get an up-to-date overview of the key improvements in MDS, SSIS, SSRS and SSAS.
200
BI
Data Mining and Machine Learning are not new. The fact that they are now within the reach of each and every organization is. Every organization in every branch can benefit from Machine Learning. In this session I will introduce the concepts behind Machine Learning and some of the algorithms and their use cases. I will then show you how to build and use a Machine Learning solution using Azure ML.
300
BI
My first experience with MDX was: good, it looks like SQL: SELECT .. FROM .. WHERE. At second glance it turned out to be completely different from SQL. But after a while SQL and MDX started to feel similar again. In this session I bring you to the point of seeing the similarities instead of the differences. We will use your SQL experience to give you a head start with MDX. The session is also good for those with a bit of experience who want to know a bit more about the background. Of course all theory is backed by demos.
300
BI
In this demo-only session we will start out with a blank Excel workbook. Using Power Pivot we will build a powerful model to analyze our data. Next we will use Power Query to add data to the model, but not before we cleanse the data to make it useful. With a sound model in place we will visualize the data using Power View. I will briefly demo the same functionality using the Power BI Desktop. We end by uploading our solution to PowerBI.com and creating a dashboard.
300
BI
Getting the right data in the right format is critical for all data analytics, and Power Query helps you to achieve this. Using the GUI to do this is one thing; to use the tool to its full potential you want to use M, the language behind Power Query. In this demo-rich session we will start simply by changing some GUI-generated functions. We will quickly dive into M to get a good understanding of how M was designed, so we are able to use it to its full extent.
300
BI
If you have worked with SQL Server Integration Services (SSIS) 2012 or above for a few years, then you are already familiar with the SSIS Catalog reports that help you monitor projects deployed to the server. By using these reports, we can quickly find information related to executions, logging and other interesting things with a few mouse clicks.

But have you ever wondered how to design an SSIS project so you can take maximum benefit of the SSIS Catalog and SSISDB? How do you troubleshoot SSIS project executions from the Catalog? How do you navigate your way around the built-in catalog reports? This session will focus on these scenarios by diving into SSISDB catalog views and discussing how we can extend the metadata-rich SSISDB by creating a Power BI model. In the Power BI model, I will show how to extract data related to SSIS error codes and lookup values, how to develop a dashboard, and how to use Power BI Q&A functionality to answer any question related to the SSISDB Catalog.
300
BI
Our plan is to give you enough knowledge of, and interest in, statistics to make more sense of how machine learning works. We'll be using various props and volunteers to do this, and although there will be a deck, it is mainly for reference after the session rather than an excuse for not rehearsing properly.

This is an experimental session: this material has not been presented at any other event before.
100
Car
The world is constantly changing and every Developer and DBA needs to keep up with the latest technology.

Employers value ongoing certification over university degrees.

Microsoft Certifications are recognised worldwide and considered preferential.

In this INTERACTIVE session we'll talk about Microsoft Official Courses, Exams and Certifications (MCSA SQL Server, MCSE Data Platform, MCSE Business Intelligence, etc.) and try to answer all your questions.

Session suited for Developers, DBAs, Students, IT professionals and IT Trainers.

Disclaimer: Microsoft Certification is not accepted in Jabba The Hutt's Cartel.
200
DBA
You don't need to search an entire planet to find two droids! You just need to find the right Kubaz informant.

Indexes are the Kubaz informants of database engines.

If you have never heard of indexes or are afraid to use them, this session is for you!

We'll cover indexing basics, introduce all SQL Server 2016 indexing options and look at some typical optimisation scenarios.

Session suited for Developers, DBAs and Students (padawan level).

Disclaimer: This session might be a trap.
200
Dev
Even the most experienced Doctor once needed to learn how to operate the TARDIS.

Learn how to turn business logic into database objects, learn about SQL Server's security mechanisms (including SQL Server 2016), implement multi-tiered data access with business rules and roles, make your app/db immune to infrastructure changes (like scaling from a single server to a clustered or cloud environment) and the Bad Wolf's moods, and avoid SQL injection and other security/coding bugs using stored procedures and DBMS-side logic.

Session suited for Developers and Students.

Disclaimer: No actual secrets will be revealed. Please do not tell Daleks where I live again.
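The stored-procedure and DBMS-side-logic advice in the abstract above is, at its core, about keeping user input out of the query text. Here is a minimal, hedged sketch of why parameterised calls block SQL injection; it uses Python with SQLite for portability, and the table name and values are invented for illustration:

```python
import sqlite3

# Hypothetical table and data, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

malicious = "alice' OR '1'='1"

# Vulnerable: string concatenation lets the input rewrite the query itself.
unsafe = conn.execute(
    "SELECT COUNT(*) FROM users WHERE name = '" + malicious + "'"
).fetchone()[0]

# Safe: a bound parameter is treated strictly as data, never as SQL.
safe = conn.execute(
    "SELECT COUNT(*) FROM users WHERE name = ?", (malicious,)
).fetchone()[0]
```

With the concatenated query the injected `OR '1'='1'` matches every row, while the parameterised query matches none; the same principle is what stored procedures with typed parameters give you on the server side.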
200
DBA
What if I told you that you could query anything in your life using SQL?

Your e-mail, your contacts, your computer hardware and software, your social networks, your girlfriend/boyfriend's head, the Internet itself?

Find out how to join the SQL side of life!

Session suited for Developers, DBAs, Students and non-IT users.

Disclaimer: No secrets about black Monoliths will be disclosed.
300
DBA
Have you ever wondered why SQL Server eats so much memory? Why a zero-cost operator in the query plan can be a performance killer? Why your query, which was as fast as hell a minute ago, now runs for 20 minutes? Why do things not go the way we expect? Magic? I think not. In this session, we will look behind the scenes of SQL Server and discover the most confusing parts of its magic.
300
Dev
Yes, this is a very common problem: you expect something, but reality is a bit different. You expect a query to run in one second, but it runs (oh my God!) for an hour. You expect your query to perform an index seek, but it performs an index scan instead. You expect your query not to use locks, but it uses them. This session will focus on understanding the internals of such situations and bringing our expectations closer to reality.
200
BI
DBAs, developers and analysts are often asked to get involved in designing and implementing a data strategy where there is no dedicated BI resource. Data modelling doesn't sound too difficult, but it is something you can struggle with in the beginning.

In this session we will talk through the process of gathering user requirements to provide a good starting point for your data model. We will use the Sun Modelling technique to aid us in this.
200
DBA
This session will give a high-level overview of the options available to DBAs for keeping SQL Server available and recovering after a data loss. We will use our real-world experience to advise on the benefits and pitfalls of each approach and what you need to think about before choosing one.
200
DBA
Find out some simple steps you can take as a DBA to help protect your data. This session doesn't assume any in-depth security knowledge, but walks through what a DBA can do and the questions they should be asking. We will also briefly talk about some of the tools available to help with this in SQL Server, and what's new in 2016.
300
DBA
Mr. Bertucci will present the architecture and implementation, and provide a live demo, of how he plugged the MDS product family for Data Quality and Master Data Management into a Hadoop-based Big Data platform at one of the largest Silicon Valley chip manufacturers in the world. This will include a step-by-step explanation of the complex architecture consisting of the Hadoop/Cloudera big data platform, Data Quality as a Service (DQaaS) web service capabilities, use of Profisee's Maestro tools, and the MDS Master Data Management capabilities from Microsoft. It will also include a live demo of how big data is cleansed (mastered) and the lessons learned along the way. Mr. Bertucci is one of the world's leading database and master data management authorities, the author of the SQL Server Unleashed series of books and a frequent speaker at industry database and data quality conferences around the world.
200
BI
We can load a dimension table in SSIS using a) SCD Transformation, b) Merge Join + Conditional Split, c) Change Data Capture, d) Merge command, and e) Upsert using Execute SQL. In this session I will be showing/demo-ing these 5 approaches on the screen one by one, then compare them in terms of efficiency/performance, code clarity/maintenance, and ease of build. It is based on my article: https://dwbi1.wordpress.com/2015/09/09/loading-a-dimension-table-using-ssis/

SCD Transformation and Merge Join + Conditional Split both use row-by-row operations, and hence are not efficient compared to Upsert. CDC is a mechanism for extracting data changes: to load a dimension table we need to read the CDC output table, and update or insert into the dimension table based on the _$Operation column. The Merge command is buggy, has concurrency issues, requires an index to support performance, and does the Insert twice.

In every data warehouse project we need to load many dimension tables, so this is fundamental knowledge for those of us who use SQL Server and SSIS for a warehouse.
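The Upsert approach compared above can be sketched as two set-based statements. This is an illustrative, portable sketch only (SQLite via Python, with invented table and column names), not the session's actual SSIS package, but the UPDATE-then-INSERT shape is the same one an Execute SQL task would typically run:

```python
import sqlite3

# Illustrative sketch only: table and column names are invented.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_customer (customer_key TEXT PRIMARY KEY, name TEXT);
INSERT INTO dim_customer VALUES ('C1', 'Alice');

CREATE TABLE stg_customer (customer_key TEXT PRIMARY KEY, name TEXT);
INSERT INTO stg_customer VALUES ('C1', 'Alice Smith'), ('C2', 'Bob');

-- Step 1: one set-based UPDATE for existing keys whose attributes changed.
UPDATE dim_customer
SET name = (SELECT s.name FROM stg_customer s
            WHERE s.customer_key = dim_customer.customer_key)
WHERE customer_key IN (SELECT customer_key FROM stg_customer);

-- Step 2: one set-based INSERT for keys not yet in the dimension.
INSERT INTO dim_customer (customer_key, name)
SELECT s.customer_key, s.name
FROM stg_customer s
WHERE s.customer_key NOT IN (SELECT customer_key FROM dim_customer);
""")

rows = conn.execute(
    "SELECT customer_key, name FROM dim_customer ORDER BY customer_key"
).fetchall()
```

Two set-based statements touch each row at most once, which is why this pattern outperforms the row-by-row SCD and Merge Join approaches on large dimensions.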
100
Dev
In SQL Server we usually look forward to working with the latest features, such as SQL 2016, Power BI, ML, Azure, etc. So much so that we often forget the basics such as creating/modifying a constraint, PK, FK, index, trigger, partitioned table, synonym; joins, correlated subquery, update from, output an SP into a table, cast, NULLIF, variables, temp tables, table variables, cross apply, while, case; string/date functions, row number, transaction, except, rank, find duplicate rows, etc.

This session is intended as a refresher, i.e. we know all of the above but we forget. We will go back to basics. I will cover 3 sections: a) creating database objects, b) database development, c) database administration. This session is based on my article: https://dwbi1.wordpress.com/2014/12/27/sql-server-scripts/. It will consist of many short SQL scripts (T-SQL) and only T-SQL, i.e. I won't be using the GUI. I won't be able to cover every single script in that article (there are over 250 in total!), but I will pick the important ones and avoid the ones which are similar to the others.
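As a taste of one refresher item, "find duplicate rows", here is a minimal runnable sketch. It uses SQLite via Python so it runs anywhere; the `orders` table is hypothetical, and the same GROUP BY / HAVING pattern works unchanged in T-SQL:

```python
import sqlite3

# Hypothetical table; the GROUP BY / HAVING pattern is the same in T-SQL.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INTEGER, customer TEXT, amount INTEGER);
INSERT INTO orders VALUES (1, 'Alice', 10), (2, 'Alice', 10), (3, 'Bob', 20);
""")

# Group on the columns that define a duplicate; any group with more than
# one row contains duplicates.
dupes = conn.execute("""
    SELECT customer, amount, COUNT(*) AS copies
    FROM orders
    GROUP BY customer, amount
    HAVING COUNT(*) > 1
""").fetchall()
```

The same idea can also be expressed with ROW_NUMBER() over a PARTITION BY of the duplicate-defining columns, which additionally identifies which copies to delete.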
300
Dev
It's no secret that a deadlock is not a good thing. It is an exceptional situation in which two concurrent queries request the same resources, but in a different order. A classic deadlock can occur when two concurrent transactions modify data in two tables in a different order. Unfortunately, in real life deadlocks can be more complex and less obvious. One rule I always keep in mind is: "You cannot design a database in which deadlocks are impossible." So we have to deal with them. The algorithm is simple: catch, analyze, fix. But in practice the process can be challenging and can require different types of analysis. In this session we will recap the basics and solve as many deadlocks as we can.
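The classic two-resources-in-opposite-order deadlock, and the standard mitigation of a global lock order, can be sketched outside the database too. This is a hedged Python illustration of the ordering principle only, not of SQL Server's lock manager:

```python
import threading

# Two resources that concurrent "transactions" both need.
lock_a, lock_b = threading.Lock(), threading.Lock()

def transfer(first, second, work):
    # A global lock order (here: sorted by object id) guarantees that two
    # callers can never each hold one lock while waiting for the other,
    # which is the cycle a deadlock needs.
    lo, hi = sorted((first, second), key=id)
    with lo:
        with hi:
            return work()

# Both calls acquire the locks in the same order, regardless of the order
# in which the caller names them.
r1 = transfer(lock_a, lock_b, lambda: "a->b done")
r2 = transfer(lock_b, lock_a, lambda: "b->a done")
```

In database terms, the equivalent discipline is to have all transactions touch shared tables in one agreed order.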
300
Dev
SQL Server 2014 is full of new features and improvements. Some of them are "killer" features, like In-Memory OLTP, Clustered Columnstore Indexes and Buffer Pool Extensions, which are discussed a lot and about which we can always find plenty of information. At the same time, SQL Server 2014 has several fantastic features and improvements which are more hidden from our sight. In this session we will talk about these: query fingerprints, the new Cardinality Estimator, tempdb improvements, and more.
300
BI
The different methods of querying I intend to cover are: Pivot Table, DAX Query and Cube Member. We will weigh the advantages and disadvantages of each in terms of performance, skills required, maintainability, readability and stability. Consider this presentation a 'shallow dive' for existing Business Intelligence developers. As an advanced Excel user you may need to refer to this content multiple times to get the most benefit from it.



Download slides from: http://www.slideshare.net/KieranWood/comparing-and-contrasting-different-methods-of-querying-a-powerbi-model-using-a-tabular-format



Performance

• Why is performance a big deal? If the user loses patience with the speed of response of the solution, the solution will not be used. In practice, unresolved performance issues result in loss of stability and loss of data.

Skills Required

• What skills are required to create a worksheet using this method of querying data?

Maintainability

• How easy is it to extend an existing worksheet using this method of querying data?

Readability

• How readable is the design/code which uses this method of querying data?

Stability

• How likely is this method of querying data to return unreliable results, or even cause Excel to crash?

<main content will be covered here>

Conclusions

For large data sets / complex models:

• Pivot Tables: only use for ad hoc analysis on a subset of the data. Quick to implement; requires only intermediate knowledge.

• DAX Queries: suitable for very large data sets with lots of columns. Requires basic DAX knowledge.

• Cube Members: good for aggregate reports, such as dashboards, which have a small number of cells populated. The initial report can be generated from a Pivot Table; requires a lot of maintenance should business requirements change.
200
DBA
With partitioning, we can break a table or index into smaller, more manageable chunks. We can then perform maintenance on just part of a table or index. We can even move data in and out of tables with quick and easy metadata-only operations. We'll go over basic partitioning concepts and techniques like partitioned views and full-blown table partitioning. We'll look at how partitioning affects things under the hood. Finally, you'll see some cool demos/tricks around index maintenance and data movement. At the end of this session you'll have a firm understanding of how partitioning works and know how and when to implement it.

You'll learn…

• The components of table partitioning and how they fit together

• How to make your index maintenance partition aware

• How Partition elimination can help your queries

• How to split different parts of tables over different storage tiers

• How to manage partitions. We'll demo this by implementing the sliding window technique. 
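The "metadata-only" nature of partition switching in the sliding window technique can be illustrated conceptually. This is only a toy Python sketch (the month names and rows are invented): sliding the window moves a reference to a whole partition rather than copying rows, which is why SWITCH is so fast:

```python
from collections import OrderedDict

# Conceptual sketch: data lives in per-month "partitions"; switching a
# partition out moves a reference, not individual rows.
partitions = OrderedDict([
    ("2016-01", ["rowA", "rowB"]),
    ("2016-02", ["rowC"]),
])

def slide_window(partitions, new_month):
    # SPLIT: add an empty partition for the new period.
    partitions[new_month] = []
    # SWITCH: detach the oldest partition to an archive/staging area,
    # without touching the rows inside it.
    oldest = next(iter(partitions))
    archived = partitions.pop(oldest)
    return oldest, archived

oldest, archived = slide_window(partitions, "2016-03")
```

In SQL Server the real operations are ALTER PARTITION FUNCTION ... SPLIT/MERGE and ALTER TABLE ... SWITCH, but the bookkeeping idea is the same.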
400
DBA
Understanding how SQL Server stores your data can seem like a daunting task. In this session we'll learn how objects such as tables and indexes are stored in a data file. We’ll also look at how these concepts tie in to your work. We’ll see these concepts in action using demos and see how we can use this knowledge to better design solutions.

We’ll start off by looking at the structure of a row and then move onto the concept of a data page. From there we’ll cover a few special page types like the index allocation map and GAM and SGAM pages. Then we’ll look at index structures and talk about the differences between heaps and clustered indexes.
300
BI
Even the briefest Bing search for 'data visualisation' will turn up the work of Stephen Few, who, along with other legends such as Tufte, has brought research and reason to the flashy and fancy buzzword world of data visualisation. Stephen Few's latest book, 'Signal', is 'a clear case for data visualization with believable business examples that show how to find signals in noisy data' (Ben Shneiderman). This session will take some of the key statistical concepts of the book, along with its takeaway commentary, illuminated by using Power BI as a conduit for the ideas. Let's see where Power BI succeeds or fails to express the data visualisation ideas expressed in the book.

Note: This isn't endorsed by Stephen Few in any way. I am just a massive fan of his work and I wanted to bring it to an audience, whilst giving practical takeaways in Power BI. I expect a healthy debate from the audience!
300
BI
Even the briefest Bing search for 'data visualisation' will turn up the work of Stephen Few, who, along with other legends such as Tufte, has brought research and reason to the flashy and fancy buzzword world of data visualisation. Stephen Few's latest book, 'Signal', is 'a clear case for data visualization with believable business examples that show how to find signals in noisy data' (Ben Shneiderman). This session will take some of the key statistical concepts of the book, along with its takeaway commentary, illuminated by using SSRS & Datazen as a conduit for the ideas.

We will look at where SSRS and Datazen succeed, and where they fail, to express these ideas.

Note: This isn't endorsed by Stephen Few in any way. I am just a massive fan of his work and I wanted to bring it to an audience, whilst giving practical takeaways in SSRS & Datazen. I expect a healthy debate from the audience!
200
BI
Would you like to learn whether Data Vault really works? Come and share my experience of building a Data Vault for a large banking application: you will gain some useful insights into what it is, what it can do for you, and what it will not.
300
BI
Microsoft's offering has seen many improvements lately in terms of providing suitable tools for managing the entire lifecycle (ALM) of Business Intelligence (BI) applications. However, for many reasons, uptake of these tools has not been particularly great. One such reason is the steep learning curve for many BI professionals who came from analytical or admin backgrounds with little experience of .Net development with Team Foundation Server (TFS).

This presentation aims to provide a learning framework. It gives an overview of the end-to-end architecture of MS BI ALM and practical tips on how to make it happen with the TFS toolkit. The presentation will also cover unit testing using MS Test, continuous integration with MS Build, and a demo of TFS Deployment Manager for a typical BI application, which will include a database project and SSIS and SSAS projects.

The material does not assume prior knowledge of TFS administration, but some experience using TFS source control and general TFS terminology will be helpful.
300
DBA
Kerberos configuration and troubleshooting have always been notoriously difficult, which has led many DBAs and SQL developers to resort to SQL authentication. Official sources present a highly complex description of the Kerberos protocol that puts people right off. I'd like to offer an understanding in simple terms and present common design patterns to make it easier to get it working. I will also show a demo of how to troubleshoot common problems and put them right.
300
BI
Configuring Kerberos can be easy. Indeed, with favourable conditions and some preparation,  the whole thing can be over in minutes. However, if hours later it still does not work, troubleshooting can take many days even with help of experts.

I would like to present easy to follow principles of Kerberos constrained delegation and protocol transition with handy tips and templates to get this right the first time for your particular environment. The goal is to explain the meaning of the settings in terms of the role in the Kerberos constrained delegation authentication rather than simply presenting another example of a particular scenario.

This presentation covers some useful resources to help you tame your three-headed monster and make it behave should it decide to throw a strop. I will also mention some useful tips and resources on dealing with the Claims to Windows Token Service, which plays a very important part in delegating authentication for services requiring protocol transition (Claims -> Windows), such as Excel Services, PerformancePoint and Power View.
300
BI
Come and learn from my experience of generating ETL code: what is good about it, what is bad and what is downright ugly.

I shall share the various methods and technologies that can be used, show some examples, and share my experience of what works, what does not, and what makes you hate your guts in the long run. Technology examples will include the Kimball spreadsheet, BIML, Visual Studio text templates and some proprietary tools (PowerDesigner and WhereScape).
300
BI
With the release of Azure Data Lake Analytics (still in preview), Microsoft added a new language to the SQL-ish pool: U-SQL. U-SQL, which 'blends the declarative nature of SQL with the expressive power of C#', is the language for processing large 'raw' datasets stored in Azure Data Lake to extract the insights that provide business value. But as with all new languages and Azure services, the big question is: how can it be used, and how does it integrate into our current analytics pipeline?
300
BI
Microsoft created Power BI with (at least) one important thing in mind: make it extendable, so everyone can use it and extend it to make it the best solution of them all. In this session I will look at the different options to extend Power BI and to embed it in an enterprise solution. Subjects will include: pushing real-time dashboard data via the REST API, extending it via Stream Analytics and Azure IoT Hub, and creating your own custom visualization (live demo) to use with any kind of data.
300
BI
As part of the Cortana Analytics Suite, Hadoop (HDInsight) is ideal for processing large amounts of real-time data. With Storm, HBase and Spark, the Hadoop ecosystem has components to process and store large amounts of data. In this session I will look at the different possibilities of the three components, how they can be used together, and how they integrate and fit into the Cortana Analytics Suite.
300
DBA
Everyone knows that Azure SQL Database only supports a small subset of SQL Server functionality, only supports small databases, and has really bad performance. Except everyone is wrong. In fact, Azure SQL Database is ready to support many, if not most, databases within your enterprise. This session reintroduces Azure SQL Database and shows the high degree of functionality and improved performance that is now available. You'll leave this session with a more thorough understanding of the strengths and weaknesses of Azure SQL Database, so that you can make a more informed choice over when, or if, you should use it within your environment.
300
DBA
For the most part, query tuning in one version of SQL Server is pretty much like query tuning in the next. SQL Server 2016 introduces a number of new functions and methods that directly impact how you’re going to do query tuning in the future. The most important change is the introduction of the Query Store. This session will explore how the Query Store works and how it’s going to change how you tune and troubleshoot performance. With the information in this session, not only will you understand how the Query Store works, but you’ll know everything you need to apply it to your own SQL Server 2016 tuning efforts as well as your Azure SQL Databases.
200
DBA
Very few companies are going to move everything into Azure, but just about every company is beginning to explore the offerings available for data management. If you're ready to begin the process of creating the hybrid environment necessary to begin using Azure, this session is for you. We'll talk about the types of data management platforms offered, from Infrastructure as a Service through virtual machines, to NOSQL solutions through DocumentDB, to Platform as a Service offerings through Azure SQL Database. You'll learn the first steps necessary to start the process of integrating your existing environment with all the new technology available in the cloud.
200
Dev
T-SQL provides many different ways to accomplish the same task, and as you might expect, some ways are better than others. In this session, you will learn specific techniques that, when followed, make you a better T-SQL developer. The session is jam-packed with practical examples and is designed for administrators and developers who want to bring their T-SQL skills to the next level. You'll write clearer, easier-to-read and better-performing T-SQL. So useful, you can implement these tips immediately!
200
Dev
An introduction to Big Data aimed mainly at architectural or developer roles. This session will cover a lot of technologies at a basic to intermediate level and give a foundation in what is available and why you would use them.
200
BI
This session will introduce you to one of the most exciting Azure data services - SQL Data Warehouse! SQLDW is a distributed database engine that delivers high scale to your data warehouse projects. Understand how it works in this 60 minute session.

Topics covered include:
  • Key Concepts
  • Scaling Storage
  • Scaling Compute
  • Data Loading
300
BI
This session covers the more advanced aspects of development for Azure SQL Data Warehouse. Areas such as data movement, workload concurrency and resource management will all be covered during this intense 60 minute session.

Topics covered include:
  • Data Movement
  • Workload concurrency
  • Resource Management
  • Statistics
300
DBA
In this session, you will learn how to deal with the most common issues with availability groups in simple or more complex environments. With plenty of demos based on real customer cases, you will discover which tools you have at your disposal, such as XEvents, diagnostic files and DMVs, and when to use them.
200
BI
Machine Learning is the next natural step for business intelligence. For the first time, large amounts of data are now available, and with decades of algorithm optimization having already taken place, the full potential of machine learning can now be unleashed!

In this talk you will be given an overview of the three most prominent areas of Machine Learning today: classification, regression and clustering. We will then delve into more detail on clustering, to give a bit more context and understanding of the ideas behind it. Finally, a short demo with Azure ML and R will show you how it all fits together and how you can apply Machine Learning right now and integrate it into your current BI projects.

At the end of the session you will understand the potential Machine Learning has in the world of BI and, more importantly, how you can harness these benefits.
300
DBA
Ever wondered what the capacity of your SQL Server actually is? This session will give you a "dummies' guide" to doing your own (non-authoritative) TPC-C testing, using freely available software. As with all things there are gotchas, and this session, in keeping with the mission of SQLBits, will aim to give you real-world experiences of attempting TPC-C-style testing in a non-research environment.
300
DBA
Has your boss ever asked you "How's our SQL Server doing?" Here's a 60-minute guide on how to give a meaningful answer. Dave McMahon has spent the last 18 months reviewing many different types of systems, from SQL Server 7.0 through to SQL Server 2012, and has attempted to answer this very question succinctly in a way that non-techies understand. This session outlines how he does it. To do a full SQL Server review nowadays takes more than just SQL knowledge: you need a basic understanding of the Windows platform, be it physical or virtual, and a knowledge of hardware and storage area networks. There is a lot to cover, but after this session you'll have the basic armoury to be able to look at your data platform estate and determine whether it is functioning in the most efficient way for you.
200
Dev
The Internet of Things (IoT) starts with your things: the things that matter most to your business. IoT is at an inflection point where the right technologies are coming together, and we are able to connect devices to the cloud and leverage streams of data that were previously out of reach. It's a great time to take a look at game-changing technologies you can use today to make your IoT ideas stand out from the rest using Microsoft Azure.

In this session we will look at an end-to-end example and demo of an IoT architecture built using Microsoft Azure, with real-time data using services like IoT Hubs, Stream Analytics and Power BI.

Welcome to the Internet of Your Intelligent Things!
200
Dev
Protecting our data from unauthorized access becomes more and more important all the time; however, it has been difficult to ensure sensitive data is encrypted in SQL Server. The new Always Encrypted feature in SQL Server 2016 makes this much simpler for developers and DBAs, with a framework for protecting data from the client, across networks, and inside the database. This new feature allows for limiting access to the data, even from the DBAs and sysadmins that may control the database instance itself. Learn how to implement and use Always Encrypted in your applications.
200
Car
Everyone wants a dream job that they enjoy going to each week. However, finding that job and getting yourself hired can be hard for most people. Steve Jones will give you practical tips and suggestions in this session that show you how to better market yourself, how to get the attention of employers, and how to improve the chances that the job you want will get offered to you. Learn practical tips on:
• Networking
• Blogging
• Volunteering
• Speaking
• Authoring
• Leadership
200
DBA
"Could you just tell me…?" This may cost 10 seconds of time or it may take several hours, but it will definitely stop the current task and impact its completion.

A SQL DBA will be required to provide a myriad of information in many different ways to many different types of people: answering questions from technical teams, technology teams, and other parts of the business, as well as directors and external parties.

In this session you will learn the why, what and how of automating the gathering, storing and displaying of information, enabling self-service and reducing the interruptive calls on your time, whilst ensuring that the data is correct and trustworthy. I will also show you how you can use this to enable consistency across your estate. This session will be of benefit to "accidental DBAs" as well as DBAs looking after large estates.

The majority of the session will concentrate on the way I use PowerShell to gather the information and store it. I will also show you how to enable self-service with natural language query using Power BI.

You will leave the session with all of the tools you need to return to work, convince your boss this is a worthwhile use of your time, and implement this solution to provide a modern way of delivering accurate information about your estate.
300
Dev
Everyone tests their code, but most people run a query, execute a procedure and run another query. This ad hoc, non-repeatable testing isn't reliable, and it encourages regression bugs. In this session you will learn how to begin introducing testing into your development process using the proven tSQLt framework. You'll run tests with the click of a button, using an ever-growing test suite that improves code quality. You will see how to handle test data, exceptions, and edge cases.
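tSQLt itself is written in T-SQL, but the arrange/act/assert pattern it encourages can be sketched in any language. Here is a hedged Python/SQLite analogy, where the `sales` table and `total_sales` routine are invented stand-ins for a procedure under test:

```python
import sqlite3

def total_sales(conn):
    # The code under test: imagine this is your stored procedure's logic.
    return conn.execute(
        "SELECT COALESCE(SUM(amount), 0) FROM sales"
    ).fetchone()[0]

# Arrange: build an isolated database with known rows. tSQLt's FakeTable
# serves the same purpose: a clean, predictable table just for the test.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?)", [(10,), (15,)])

# Act + Assert: repeatable, push-button, no manual inspection needed.
assert total_sales(conn) == 25

# Edge case: an empty table should yield 0, not NULL.
conn.execute("DELETE FROM sales")
assert total_sales(conn) == 0
```

The point is repeatability: known inputs, an automated assertion, and edge cases captured once so they are re-checked on every run.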
200
BI
This talk guides you through the process of creating a tabular model. The session will be packed with very practical tips and tricks, and the steps you should take to create a proper model. The steps are based on projects that I have done "in real life", backed with a little bit of theory. After this hour you will understand how to optimize for memory usage and speed, enhance the user experience, use some DAX expressions, and use the right tools for the job.
200
BI
SQL Server 2016 offers a cool new feature: temporal tables. This new functionality allows you to have the system "automagically" record all changes that happen to the data. Querying the data is also much easier than it was before with the workarounds we had to use. I will also make a comparison with SQL Server's other versioning mechanisms, Change Tracking and CDC. After this hour you will understand the concept, learn how to create temporal tables and how to query them, and what the pros and cons are.
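Conceptually, a temporal query just picks the row version whose validity period covers the requested time. Here is a hedged sketch in Python/SQLite: the `product_history` table is a hand-rolled, hypothetical stand-in, whereas in SQL Server 2016 the history is maintained automatically and queried with FOR SYSTEM_TIME AS OF:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A hand-rolled history table standing in for what SQL Server 2016
# maintains automatically: each version has a [valid_from, valid_to) period.
conn.executescript("""
CREATE TABLE product_history (
  id INTEGER, price INTEGER, valid_from TEXT, valid_to TEXT);
INSERT INTO product_history VALUES
  (1, 100, '2016-01-01', '2016-03-01'),
  (1, 120, '2016-03-01', '9999-12-31');
""")

def price_as_of(when):
    # Equivalent in spirit to:
    #   SELECT price FROM product FOR SYSTEM_TIME AS OF @when WHERE id = 1
    row = conn.execute(
        "SELECT price FROM product_history "
        "WHERE id = 1 AND valid_from <= ? AND ? < valid_to",
        (when, when)).fetchone()
    return row[0]
```

Asking for the price as of February returns the old version, while a date after the change returns the current one; the half-open period is exactly how the system-generated period columns behave.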
300
Dev
SQL Server's plan cache is one of the largest regions of memory and is used to store SQL and T-SQL code for quick execution. It is largely self-maintaining and self-tuning. However, the kind of code you write and the way you invoke that code can have an enormous impact on how the plan cache is maintained, tuned, and optimized. If you don't do things right, you could end up shooting yourself in the foot and making performance much worse. Attend this demo-loaded session to learn about 4 harmful anti-patterns that developers frequently use without knowing their drawbacks.

This session will answer questions like:

- What's currently in the plan cache?
- How often is the code in my plan cache being reused?
- Where are the big opportunities to save space in the plan cache?
- What coding techniques are most likely to make my object recompile unnecessarily?

There is a short list of mistakes that, if you know of them in advance, will make your life much easier. Learn to avoid the four harmful anti-patterns which can slow down the plan cache, and the key SQL troubleshooting techniques, so that you can see what is in your SQL Server's plan cache and how it is behaving.
    200
    DBA
    Let’s face it.  You can effectively do many IT jobs related to SQL Server without knowing the internals of how SQL Server works.  Many great developers, DBAs, and designers get their day-to-day work completed on time and with reasonable quality while never really knowing what’s happening behind the scenes.  But if you want to take your skills to the next level, it’s critical to know SQL Server’s internal processes and architecture.  This session will answer questions like:

    -       What are the various areas of memory inside of SQL Server?
    -       How are queries handled behind the scenes?
    -       What does SQL Server do with procedural code, like functions, procedures, and triggers?
    -       What happens during checkpoints? Lazywrites?
    -       How are IOs handled with regard to transaction logs and databases?
    -       What happens when transaction logs and databases grow or shrink?

    This fast paced session will take you through many aspects of the internal operations of SQL Server and, for those topics we don’t cover, will point you to resources where you can get more information.  So strap on your silly, as we cover all these topics and more at speed with tongue planted firmly in cheek! 
    300
    DBA
    Learning how to detect, diagnose and resolve performance problems in SQL Server is tough.  Often, years are spent learning how to use the tools and techniques that help you detect when a problem is occurring, diagnose the root-cause of the problem, and then resolve the problem. 

    In this session, attendees will see all new demos of native tools and techniques which make difficult troubleshooting scenarios faster and easier, including:

    •           XEvents, Profiler/Traces, and PerfMon
    •           Using Dynamic Management Views (DMVs)
    •           Identifying bottlenecks using Wait Stats

    Every DBA needs to know how to keep their SQL Server in tip-top condition. If you don't already have troubleshooting skills, this session covers everything you need to know to get started.
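
    As a hint of the wait-stats technique, a starter query along these lines (the excluded wait types are a small illustrative subset of the usual benign waits):

    ```sql
    -- Top waits since the last restart or stats clear.
    SELECT TOP (10)
           wait_type,
           wait_time_ms,
           waiting_tasks_count
    FROM sys.dm_os_wait_stats
    WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP',
                            'XE_TIMER_EVENT', 'BROKER_TO_FLUSH')
    ORDER BY wait_time_ms DESC;
    ```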
    100
    Car
    Every SQLBits attendee learns new techniques and best practices to use at work. But if you’re only interested in gaining a handful of small tactical advantages, then you’ll miss out on the most exciting and empowering trend in the IT industry – our data-driven future. Share insights and inspirations in this talk with Kevin Kline, a founder and president emeritus of PASS, to discover the broader cultural transformations that are pushing data professionals into prominence, and the strategies you can use to become the most respected, influential, and credible member of your organization’s technical staff. 
    200
    Dev
    This session is aimed at Developers interested in using SQL Server Data Tools (SSDT) to manage their database development. It will cover the benefits of declarative over procedural development, the techniques involved in creating, modifying, and deploying a SQL Server database with SSDT, the use of command line tools for Continuous Integration, and the facilities provided by Visual Studio for database unit testing.
    200
    Dev
    If you want to mix traditional relational data with semi-structured data stored on Azure or Hadoop, then the out-of-the-box Polybase functionality in SQL Server 2016 is one of the easiest ways to get started with this.

    In this session we first introduce the Polybase architecture, then show how to setup and query Polybase. This demo-rich session helps both developers and DBAs in understanding the potential and practical use of Polybase.
    200
    Dev
    Database Unit Testing is gaining in popularity, but what are the characteristics of a "good" unit test? This session will explore this question by examining a range of scenarios encountered in database development.

    The majority of examples will use the popular tsqlt unit testing framework for SQL Server, but the techniques described should be applicable to most testing frameworks and indeed most database platforms.
    200
    DBA
    TempDB is a system database that resides on every SQL Server. Furthermore, this system database can be critical to the performance of every instance.

    In this introductory session you will gain a better understanding of TempDB, learn how SQL Server uses it, as well as learn some basic architectural changes that you can make to improve performance for your entire instance.
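
    One of the classic architectural changes discussed is adding equally sized TempDB data files; a hedged sketch (file names, sizes and the path are illustrative only):

    ```sql
    -- Resize the existing TempDB data file and add a second, equally sized one
    -- so that allocations round-robin evenly across the files.
    ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, SIZE = 4GB, FILEGROWTH = 512MB);

    ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev2,
              FILENAME = 'T:\TempDB\tempdev2.ndf',  -- illustrative path
              SIZE = 4GB, FILEGROWTH = 512MB);
    ```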
    300
    Dev
    Learning SQL is easy, mastering it is hard. In this session you’ll learn simple but effective tricks to design your database objects better and write more optimized code.

    As an attendee you will gain a deeper understanding of common database development and administration mistakes, and how you can avoid them.

    Ever thought that you were adhering to best practices but still seeing performance problems? You might well be.

    In this session I will be covering why the optimizer isn’t using all available processors, when the database engine fails to report all the resources a query has used and why the optimizer doesn’t always use the best plan. You will leave this session with a list of things that you can check for in your environment to improve performance for your users.
    300
    DBA
    Does your application suffer from performance problems even though you followed best practices on schema design? Have you looked at your transaction log?
    There’s no doubt about it, the transaction log is treated like a poor cousin. The poor thing does not receive much love. The transaction log, however, is an essential and misunderstood part of your database. A team of developers will create an absolutely awesome, elegant design the likes of which has never been seen before, but then leave the transaction log on default settings. It’s as if it doesn’t matter: an afterthought, a relic of the platform architecture.
    In this session you will learn to appreciate how the transaction log works and how you can improve the performance of your applications by making the right architectural choices.
    200
    DBA
    Indexing presents daunting challenges for even the most seasoned professionals, as it offers countless options to choose from. With a little help you’ll see how to simplify indexing in your environment and improve the overall performance of your SQL Server applications. In this session you will learn all about the different architectures of indexes, and from there how to make the right choices when designing your indexes so that both the database engine and your DBA will love you for it.

    The session will also cover how to find missing and unused indexes, the cause of fragmentation issues and how to resolve them, as well as how to maintain your indexes after they have been deployed. After attending this session you will have a much better understanding of how to create the right indexes for your entire environment, not just for that one troublesome query.
    300
    DBA
    Based on my core DBA experience (20 years and counting), I believe it is important for companies to act now, as Big Data becomes the standard way of handling their data. This is the time for DBAs to step out and expand their skill set to incorporate Big Data administration practices within their DBA fort. As a DBA you may be well versed in managing data storage, networking, data access governance, security and ETL; a little more understanding of Big Data can add major value to your skills and make you a key player in Big Data administration. Come to this session to develop your data skills within the BI, DW and Big Data areas. Based on my experience and industry best practices, I will showcase a few new skill areas that are emerging within the Big Data arena. Let's show the real meaning of DBA to the world!
    300
    DBA
    You know what the transaction log does, but do you know how it does it? This session will look to further your knowledge of how the log works, both in theory and practice: how the nature of the transaction log can affect you, and some myths and misconceptions around the transaction log and logging in general. If you're looking to gain a deeper understanding than the basics then this is the session for you. If you're working with really deep internal t-log issues, know your VLFs from log blocks and are used to digging around with fn_dblog, then this might not satisfy your hunger. If you are the latter, feel free to seek me out anyway. I'd love to hear about what you’re doing and might have a future session for you.
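
    For the curious, the undocumented fn_dblog function mentioned above can be sampled like this (undocumented and unsupported, so use it on test systems only):

    ```sql
    -- Peek at recent log records for the current database.
    SELECT TOP (20)
           [Current LSN], Operation, Context, [Transaction ID]
    FROM sys.fn_dblog(NULL, NULL);
    ```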
    300
    DBA
    SARGable predicates are those predicates in a query for which SQL Server (or another DBMS) can take advantage of an index to speed up execution. This presentation describes some common reasons why the execution plan includes scan data access operators instead of seeks:
    1. implicit/explicit conversions between data types
    2. wrong settings at session level or during sql module creation/alteration
    3. filtered indexes and parameters
    4. hints (ex. FORCESCAN, INDEX)
    5. parameter sniffing
    6. conversions between collations (CS vs CI, SQL vs Win collations)
    7. outdated statistics
    8. indexed views and simple queries
    For each case, the cause and the solutions will be described.
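
    A minimal illustration of the scan-versus-seek idea (table and column names are hypothetical):

    ```sql
    -- Non-SARGable: the function on the column hides it from the index,
    -- so the optimizer typically falls back to a scan.
    SELECT OrderID FROM dbo.Orders
    WHERE YEAR(OrderDate) = 2016;

    -- SARGable rewrite of the same filter: a seek on an index
    -- over OrderDate becomes possible.
    SELECT OrderID FROM dbo.Orders
    WHERE OrderDate >= '20160101' AND OrderDate < '20170101';
    ```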
    200
    Dev
    The Internet of Things (IoT) starts with your things—the things that matter most to your business. IoT is at an inflection point where the right technologies are coming together and we are able to connect devices to the cloud and leverage streams of data that were previously out of reach. It's a great time to take a look at game-changing technologies you can use today to make your Internet of Things (IoT) ideas stand out from the rest using Microsoft Azure. In this session we will look at an end-to-end example and demo of an IoT architecture built using Microsoft Azure with real-time data using services like IoT Hubs, Stream Analytics and Power BI. Welcome to the Internet of Your Intelligent Things!
    400
    BI
    Data Warehouses are heavily in use nowadays in most businesses around the world. Data Warehousing brings higher performance and faster, more insightful data analysis out of operational databases. However, there are some challenges in designing and implementing Data Warehouses, which need robust and reliable ETL implementation. In this session you will learn an ETL architecture implemented with SSIS and MDS that solves a couple of the most challenging Data Warehousing scenarios: Slowly Changing Dimensions and Inferred Dimension Members. There will be many demos through this session to help you understand the design and implementation of the architecture.
    300
    BI
    SSIS is a well known ETL tool on premises. Azure Data Factory is a managed cloud service which provides the ability to extract data from different sources, transform it with data-driven pipelines, and process the data. In this session you will see many demos comparing ADF (Azure Data Factory) with SSIS in different aspects. You will also learn, with many demos, about features that are available in ADF but not in SSIS.
    300
    BI
    SQL Server 2016 is coming with many new features in SSIS, MDS, SSAS, and SSRS. This will be a major release of SQL Server with great features in most BI and DW components. In this session you will learn many of the new features in every BI- and DW-related service of SQL Server, and you will see many demos illustrating these features. SSRS comes with a new look and feel, SSAS with a great new set of functions, SSIS with package templates and new logging levels, and MDS with a performance boost, new features in the Excel Add-in, and many other new features. There are also some data warehousing features in the database engine, such as temporal tables, which will be covered in this session. It is a tough job to fit all of these new features in one session with demos, so prepare to be amazed!
    300
    BI
    Incremental Load is always a big challenge in Data Warehouse and ETL implementation. In the enterprise world you face millions, billions and even more records in fact tables. It isn’t practical to load all those records every night, as doing so has many downsides, such as:
    • The ETL process will slow down significantly, and can’t be scheduled to run at short intervals.
    • Performance of the source and destination servers will be badly affected, and downtime of these systems will be longer.
    • More resources will be required to maintain the process, such as better processors and more RAM, and adding these won’t help much in the end, because the amount of data keeps increasing as time passes.
    and many other issues.
    So what is the solution? The Incremental Load approach. In this approach data is loaded partially, preferably only the part of the data that has been changed. A change set will be much smaller than the total amount of data. For example, in a 200-million-record fact table storing data for 10 years, only 10% of that data might relate to the current year and change frequently, so you won’t usually be required to re-load the other 180 million records.
    In this session I will show you how to implement Incremental load with SSIS through different methods with demos. You will learn pros and cons of each method and best practice to use them in appropriate scenario.
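
    One common way to implement this, among the methods the session compares, is a watermark pattern; a rough sketch (all object names are illustrative):

    ```sql
    -- Pull only the rows changed since the last successful load.
    DECLARE @LastLoad datetime2 =
        (SELECT WatermarkValue
         FROM etl.LoadWatermark
         WHERE TableName = 'FactSales');

    SELECT s.SalesID, s.Amount, s.ModifiedDate
    FROM src.Sales AS s
    WHERE s.ModifiedDate > @LastLoad;

    -- After a successful load, advance the high-water mark.
    UPDATE etl.LoadWatermark
    SET WatermarkValue = (SELECT MAX(ModifiedDate) FROM src.Sales)
    WHERE TableName = 'FactSales';
    ```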
    300
    BI
    Prepare to be amazed with what you can achieve using Power Query in this demo-heavy session. Power Query is Microsoft's data extraction, transformation, and mash-up tool. It can be accessed through Excel or from Power BI Desktop. There are numerous data transformation features that can be used to solve real-world data preparation challenges.

    We will use customer case studies to go through scenarios of preparing data for modelling, and visualization. You will learn how to use features of Power Query such as generators, custom functions, and lots of built-in functions to solve those scenarios. You will also learn some data preparation techniques through Excel itself.
    300
    BI
    While building Tabular Models may be straightforward, things rapidly become more complex when you steer off-piste into more advanced design patterns such as:
    - Many to Many relationships
    - Semi-Additive Measures
    - Last Ever Non Empty
    - Events in Progress
    - Versioned Fact Tables
    - Almost real time processing requirements
    - Huge volume
    - Non Star Schema Data Warehouses
    - Measure explosion and lack of calculated dimensions
    - Selecting Hardware


    We will walk through an actual case study of a tabular model and discuss the pain points we encountered on the way, along with solutions and workarounds.
    300
    BI
    Ever deployed reports that worked perfectly well with one user on the development server, only to find that they don’t scale in production?


    This session focuses on tools and methodology for load testing Reporting Services in highly concurrent environments, using free and commercial tools.


    Topics covered are:
    - How to capture and mine Reporting Services logs
    - Creating unit tests for Reporting Services
    - Creating load tests for Reporting Services
    - Running and analyzing load tests
    - Improving and tuning SSRS scalability
    300
    BI
    Azure Data Lake offers a data lake with no fixed limits on size, provisioned with high throughput and native integration with the Hadoop ecosystem. With no cap on capabilities, cost is essential to consider, along with data governance and compliance in terms of where your data is managed. This means your data platform is not limited to SQL Server alone: you have the ability to transform business requirements into BI & analytics with a few clicks. Come to this session to understand the HOW, WHAT and WHEN drivers for managing an on-premises data platform with a bridge to the data lake. As a DBA it is your responsibility to understand data lake capabilities, both in terms of architecture concepts and manageability. In this session we will cover:
    - What technologies can Azure Data Lake support?
    - How to understand the different concepts on Azure - HDFS, Azure SQL Database, blob storage and Data Lake?
    - How to integrate and manage HDInsight with Data Lake?
    - How can we leverage the core capabilities of Hadoop?
    200
    BI
    It’s Monday morning and reports are running slow. How do you start to identify bottlenecks and slow running reports, and ultimately recommend tuning for report developers?


    This session will step you through how to analyse report logs to locate performance issues, long running reports and ultimately fix reporting services issues.


    Areas Covered are:
    - Reporting services architecture
    - Tools and processes for analyzing logs
    - Replaying logs
    - Common reporting services performance issues
    200
    BI
    With Microsoft’s acquisition of DataZen and its subsequent integration into SQL Server 2016, mobile dashboards for enterprise MSBI are finally here. Come along and see how to get started with mobile reports.


    We will be showing an end to end demo of creating and publishing dashboards for mobile and tablet devices. Bring along your tablet or phone to play with the demo reports.
    300
    Dev
    Have you got the XFactor? Can Machine Learning tell us? The high-level aim of this project was to investigate whether text sentiment analysis of social media data can be used to predict a voting process, specifically using Twitter data to predict the outcome of XFactor voting statistics.

    For millions of people, social networking has become a significant part of everyday life. Whether it’s personal or professional use, it has been said that “Social networking accounts for 1 of every 6 minutes spent online” (Nelms, D. 2011). This produces a large volume of textual information containing users’ opinions and experiences. This form of data is extremely valuable to businesses and organisations.

    By using the power of Cloud Computing, this is no longer just a piece of academic research. In this session I will show how I built on this idea to create an end-to-end real-time solution using a variety of Microsoft Azure Data Services including Azure Machine Learning, Event Hubs and Stream Analytics.
    200
    DBA
    A real DBA doesn’t need a GUI - A Guided Tour of SQL Server Management Studio



    SQL Server Management Studio is at the heart of any SQL Server DBA or developer’s day. We take it for granted, but rarely do we take a look at how we can customise or improve it to make our day-to-day work easier and more productive.



    This presentation will show you how to use SSMS and will look at many of the hidden features and shortcuts that you had forgotten about or didn’t know were there.



    At the end of this session you will have learnt at least one new feature of SSMS that you can use to improve your productivity.
    300
    DBA
    There's lots more to SQL Server than meets the eye. If you like to peek under the covers, this session is for you. We'll cover documented features and find some interesting but little-known tidbits, then move on to undocumented and semi-documented objects and procedures. We'll also look into the guts of SQL Server to find clues about new features, and use SQL techniques to do it! You'll learn about system objects, internals, and other utilities to peel away all the layers and discover all the hidden treasure inside.
    100
    Dev
    Never given time or care, never forming good relationships, becoming bloated, corrupt and rife with indistinguishable copies, and all so horrifyingly pervasive in society. But enough about the Kardashians, what about YOUR DATA? If you want to straighten it out and prevent it from going too far in the first place, this session is for you. We will cover constraint basics (not null, check, primary key/unique, foreign keys), provide standard use cases, and address misconceptions about constraint use and performance. We will also look at triggers and application logic and why these are NOT substitutes for (but can effectively complement) good constraint usage. Attendees will enjoy learning how to keep THEIR data off the tabloid page!
    300
    DBA
    You are considering using Azure with your on-premises SQL Server, but would like to know what capabilities the next version of SQL Server has when it comes to hybrid cloud solutions.

    Join this session with Microsoft Senior Premier Field Engineer David Peter Hansen, where we will go through some of the new features and the capabilities in combining SQL Server on-premises and Azure. We will take a look at features like backup to Azure, Managed Backups, AlwaysOn Availability Group replicas in Azure, data files in Azure, stretch database, and much more.
    300
    DBA
    SQL Server 2016 (vNext) is out very soon and there are some really cool features coming.
    In this session we will use the force and run through some of the many new features including:

    * Always Encrypted
    * Multiple TempDB Database Files
    * Query Store
    * Stretch Database
    * Temporal Tables

    Attend this session or don’t attend this session, there is no try.
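
    Taking Query Store as one example from the list, enabling it is nearly a one-liner (the database name is illustrative, syntax as at the SQL Server 2016 previews):

    ```sql
    -- Turn on the Query Store and cap its storage.
    ALTER DATABASE MyDatabase SET QUERY_STORE = ON;
    ALTER DATABASE MyDatabase
    SET QUERY_STORE (OPERATION_MODE = READ_WRITE,
                     MAX_STORAGE_SIZE_MB = 500);
    ```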
    300
    BI
    Power BI and Power Query make it easy to load data from a variety of different data sources and shape it according to your needs. However, in the real world, things can very quickly get out of hand and your data load queries can become overly complicated, slow and difficult to maintain. In this session you'll learn some best practices for loading data with Power BI, Power Query and the M language that will make your life easier in the long run. Topics covered will include:
    * How to avoid connecting to the same data source multiple times
    * Chaining queries so that logic can be shared
    * Using parameterised queries and functions
    * Using parameter tables to let users control what data is loaded
    * Ensuring query folding is taking place so that you get the best possible data load performance

    300
    DBA
    Powershell Desired State Configuration (DSC) is a declarative configuration management system.

    Most DBAs are probably using unattended installs followed by a bunch of post-configuration scripts to make sure our SQL Server instances are production ready.

    We will take a look at how Powershell DSC works, and how this can be used by DBAs to install, configure and manage SQL Servers.

    We will try to answer questions like:

    How does Powershell DSC work?
    How would a DBA use Powershell DSC?
    Are the resources to manage SQL Server mature enough?

    After this session you will be better equipped to decide if Powershell DSC is useful for you, now or in the near future.
    300
    Dev
    The quickest way to migrate your on-premises OLTP database to Azure is to simply "Lift & Shift".
    You create a VM in Azure, size it to match your local system and move your database into it.
    This might not be the most cost-effective way, though, and you still have to do all the database maintenance yourself.

    In this session we will investigate how we could use more of the cloud features like SQL Database, Redis Cache, Search, etc. in order to truly scale our system. And we'll see if this increases or lowers the total cost of ownership.

    This exercise is about an OLTP system, but we will also look at how loading our DWH is affected by this new setup.
    300
    DBA
    The most mundane task of the DBA that we all thought we could do with our eyes closed has become complicated again.

    There is no single, simple backup/restore (or import/export) story for on-premises databases, databases in SQL Server on an Azure VM and Azure SQL Databases.
    In this session we will look at the various ways to archive and move databases around with the Azure (hybrid) cloud.

    Learn how to:

    Move data between SQL Server and Azure SQL Database
    Move data between your datacenter and Azure
    Create backups for long term storage to comply with your corporate data retention policy
    Create (automated) database copies for your developers

    We will discuss the various options and their respective pros and cons.
    This session will restore (pun intended) your confidence in your backup/restore skills.
    100
    Car
    "It is much easier to trick someone into giving a password for a system than to spend the effort to crack into the system."
    This is a common line of thought in today's world of increased cyber-security dangers.
    In this interactive session we'll take a look at how social engineering works, the psychology behind it and why it is still the most effective way to gain access to your company's secrets.
    The best attacks happen when people don't even realize they are being attacked, and in this session we're going to try to fix that and educate you on how to recognize when someone is trying to hack you.
    200
    Dev
    Do you want to know about the various types of index SQL Server allows you to create? How to prove your indexes are being used? How indexes affect your database? This session has the answers. You'll find out what an index is, how to determine which columns to index, and how indexes work.

    Once indexes have been introduced, we'll look at how SQL Server makes them work. The session then moves on to some practical examples, showing how analysing a SQL statement can help to identify useful indexes for our databases. With a first index created, Mike then takes a look at how you can prove your indexes are being used via execution plans.

    The session finishes up by looking at the various types of index SQL Server supports, such as clustered/non-clustered indexes, filtered indexes, and unique indexes. There may even be time to determine useful ways of maintaining indexes too!
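
    As a small taste of what's covered, two of the index types in question (object names are made up for illustration):

    ```sql
    -- A non-clustered index supporting a common search,
    -- with an included column to help cover the query.
    CREATE NONCLUSTERED INDEX IX_Customer_LastName
    ON dbo.Customer (LastName, FirstName)
    INCLUDE (Email);

    -- A filtered index restricted to the rows queries actually touch.
    CREATE NONCLUSTERED INDEX IX_Orders_Open
    ON dbo.Orders (OrderDate)
    WHERE Status = 'Open';
    ```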
    300
    DBA
    Attackers have gained access to your company network and are roaming around for valuable information. Unfortunately nobody has noticed it yet. For companies such scenarios have resulted in everything from malicious use of their data to outright bankruptcy.
    But as a DBA you know you can rest easy because your most sensitive data is protected. How can I do that, you ask? That's what we'll take a look at in this session by entering the world of security keys, encoding, decoding, hashing, encryption, searching and more. Because your data is important and nobody messes with you.
    200
    Car
    Mike McQuillan's first book, Introducing SQL Server, was released in October 2015. Have you ever wanted to write a book about SQL Server? Attend this session to find out how Mike did it - and how you can do it too!

    Mike will explain how to plan the writing of your book. What should you write about? How many chapters should the book contain? What areas of SQL Server should you cover? And how the heck do you have your book published?

    For the answers to the questions above and more, attend this session!
    400
    DBA
    This session is a deep dive into query plans and is presented by a former Microsoft PFE (Premier Field Engineer). Learn how a Microsoft engineer looks at plans and go beyond the typical! There will be plenty of demos and a lot to learn. Join me as I cover the "noteworthy" query plan patterns that go beyond the normal areas that customers tend to investigate. This session covers SQL Server 2005 onwards and includes:

    * SQL Server 2016 Query Store
    * SQL Server 2016 Live Query Statistics
    * DMVs
    * Query Plans
    * The engine
    * ... and much more!
    300
    DBA
    A vital component inside the query optimizer is the cardinality estimator; these algorithms calculate the estimated number of rows that will be output from each operator. In SQL Server 2014, there have been many changes aimed at giving a more accurate number of rows, and therefore better plans. This session will look at these changes, comparing and contrasting with SQL Server 2012/2008 to see how they help.
    500
    DBA
    The query optimizer is at the heart of SQL Server. Without it SQL Server would be a vastly inferior product, queries would have to be manually tuned at each and every turn, and generally speaking, the optimizer protects us from the complexities and mechanics involved. Much of the optimizer's internal workings are hidden from the user, but can be revealed by using a selection of undocumented trace flags to gain further knowledge and insight into how your queries and data are processed to create a plan. 

    This session will be a deep dive into the optimizer's internals and is not for the faint of heart.
    400
    DBA
    At the heart of SQL Server is the cost based optimizer. Stop and think about that a minute, the optimizer attempts to give the “best plan” based on the cost of the work undertaken.

    How does it know the cost of the work before it’s done the work? This isn’t a conundrum: it doesn’t. It estimates! How does it estimate? That is statistics.

    This will be a deep dive into how the optimizer makes its decisions to give you a plan, the things that can go wrong and how you can have influence over these choices.
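
    To peek at the statistics behind those estimates yourself, a starting point (object names are illustrative):

    ```sql
    -- Inspect the histogram the optimizer uses for its estimates.
    DBCC SHOW_STATISTICS ('dbo.Orders', IX_Orders_OrderDate);

    -- Refresh statistics when estimates drift from reality.
    UPDATE STATISTICS dbo.Orders WITH FULLSCAN;
    ```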
    100
    BI
    SSRS is a complex and oftentimes awkward beast, and being handed the reins and told to get to work can be a bit daunting. To make matters worse there's often a general lack of knowledge in any given team, and training is hard to get; we end up spending most of our time frantically Googling for an answer every time we hit a roadblock. In this overview session we’ll go from "What is SSRS?" to managing completed reports in Report Manager. We'll also look at a number of the issues you may run into when working with SSRS, and how to resolve them.



    This session aims to get you familiar with SSRS, and aware of common issues. It is aimed at people with little to no experience with SSRS.
    200
    BI
    You’ve just been given access to an Analysis Services cube and told to build reports on it, but SSAS uses MDX for querying, not SQL. Time to learn how to query data in a cube using MDX! In this session we’ll cover the basics, and a few useful extras, to help get you comfortable with querying using MDX.

    This session aims to make you familiar with using MDX to get data from an Analysis Services cube. It is aimed at people with little to no experience with MDX.
    200
    BI
    The R is out there! It's invading databases and reporting engines near you. This session shows you the motive, means, and opportunities behind this really cool development for SQL Server 2016.

    We won't be focusing on how to write R code. We'll be focusing on the platform as a whole, with a view to showing you when you can utilise R effectively.
    300
    BI
    You’ve mastered the basics of SSRS: creating charts and tables, adding datasets and data sources, publishing reports, but you know there’s more to it. How can you make your reports more interactive? How can you control the look and feel more freely? How can you stop it from looking awful in Excel and PDF? This session aims to answer these questions and more.

    This session aims to build upon existing knowledge of SSRS, to enable users to get more from the platform. It is aimed at people with little to moderate experience with SSRS. 
    100
    BI
    Many departments don’t have a solid workflow for report development. Maybe you work in one of these departments, or maybe you’re just new to report development. This session will take you through a tried and tested workflow, end to end, so you can implement your own.

    This session provides an example of an effective report development workflow, and is aimed at anyone who currently does not have one, or feels theirs could be improved.
    300
    BI
    Captain’s log: Their data integration speed is astonishing, their reports are correct, they have an uber-ticket clearance rate. This newly discovered race of BI Analysts are doing something called DataOps…

    I wouldn't say that no man has gone boldly before us on this one, but it's definitely not a densely populated bit of the BI space. This session gives you an overview of how you can build robust  development pipelines for your databases, ETL packages, cubes, reports, and analytics.
    200
    Car
    Argh, the maths aliens are coming! Don’t shout EXTERMINATE, they come in peace - learn how you can peacefully coexist with them. A session for those who want to understand more about this data science and machine learning stuff and how it affects today's IT & BI teams, without the scary statistics bits.
    200
    BI
    R is a powerful language to add to the BI, analytics and data science technologies you may already be using. This session circumvents the painful experience of on-boarding a new technology and will give you the foundation needed to use R effectively. Topics covered will include effective R coding, development best practices, using R as a reporting tool, and how to build and administer a solid platform for analysis.
    200
    Dev
    Whether it's M, R, Python or C#, the world seems to be conspiring against me, my SQL, and my GUI. What should I be learning and how do I go about it? How do I do things like source control, scripted builds, devops, unit testing, and all those other things those developers are talking about?
    300
    Dev
    Badly performing queries are a problem. Erratically performing queries are a worse problem, as it’s often hard to identify the problematic queries in the first place, and harder still to identify the cause of the problems.

    In this session we’ll look at some of the things that can cause erratic performance: parameter sniffing, catch-all queries and others. We’ll look at ways to identify the problems, using both SQL 2016’s QueryStore and Extended Events, and we’ll look at alternative query patterns that don’t have the identified performance problems.
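    To make the problem concrete, here is a hedged sketch of the kind of catch-all pattern the session discusses (the procedure, table and column names are invented for illustration):

```sql
-- A classic catch-all search: one cached plan must serve every parameter
-- combination, which is a common source of erratic performance.
CREATE PROCEDURE dbo.SearchOrders
    @CustomerID int  = NULL,
    @OrderDate  date = NULL
AS
SELECT OrderID, CustomerID, OrderDate
FROM dbo.Orders
WHERE (CustomerID = @CustomerID OR @CustomerID IS NULL)
  AND (OrderDate  = @OrderDate  OR @OrderDate  IS NULL)
OPTION (RECOMPILE); -- compile a fresh plan per call, tailored to the supplied parameters
```

    The OPTION (RECOMPILE) hint is one common mitigation: each call gets a plan built for the parameters actually supplied, at the cost of a compilation on every execution.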
    100
    DBA
    That’s never something you want to hear. Unfortunately it tends to be heard far too often. Usually after something has gone severely wrong.

    In this introductory session, we’ll look at backups, backups and more backups (because there’s no such thing as too many backups).

    We’ll look at full backups, what they are and how often they should be run.

    We’ll look at differential backups, how they fit into backup strategies and the pitfalls you may encounter when using them.

    We’ll look at transaction log backups, at why they are an essential part of the backup strategy for important databases and at what happens when transaction log backups go wrong.

    And then, because backups aren't done for the fun of it, we'll look at restore strategies and what options you have for restoring the DB with various combinations of backups.
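    The restore sequence is easiest to see in code. A minimal sketch (the database name and file paths are invented; WITH NORECOVERY keeps the database in a restoring state until the final piece is applied):

```sql
-- Full, differential and transaction log backups.
BACKUP DATABASE Sales TO DISK = N'C:\Backups\Sales_full.bak';
BACKUP DATABASE Sales TO DISK = N'C:\Backups\Sales_diff.bak' WITH DIFFERENTIAL;
BACKUP LOG      Sales TO DISK = N'C:\Backups\Sales_log.trn';

-- Restore: full, then the latest differential, then the log chain, then recover.
RESTORE DATABASE Sales FROM DISK = N'C:\Backups\Sales_full.bak' WITH NORECOVERY;
RESTORE DATABASE Sales FROM DISK = N'C:\Backups\Sales_diff.bak' WITH NORECOVERY;
RESTORE LOG      Sales FROM DISK = N'C:\Backups\Sales_log.trn'  WITH RECOVERY;
```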
    400
    DBA
    One of the hardest things to do in SQL is to identify the cause of a sudden degradation in performance. The DMVs don’t persist information over a restart of the instance so, unless there was already some query benchmarking (and there almost never is), answering the question of how the queries behaved last week needs a time machine.

    Up until now, that is. The addition of the QueryStore to SQL Server 2016 makes identifying and resolving performance regressions a breeze.

    In this session we’ll take a look at what the QueryStore is and how it works, before diving into a scenario where overall performance suddenly degraded, and we’ll see why QueryStore is the best new feature in SQL Server 2016, bar none.
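    As a hedged taste of what the session covers, this sketch turns the Query Store on and pulls the slowest queries from its catalog views (the database name is invented):

```sql
-- Enable the Query Store on a database (SQL Server 2016 onwards).
ALTER DATABASE Sales SET QUERY_STORE = ON;

-- Find the queries with the highest average duration from the persisted history.
SELECT TOP (10)
       qt.query_sql_text,
       rs.avg_duration,
       rs.last_execution_time
FROM sys.query_store_query_text       AS qt
JOIN sys.query_store_query            AS q  ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan             AS p  ON p.query_id      = q.query_id
JOIN sys.query_store_runtime_stats    AS rs ON rs.plan_id      = p.plan_id
ORDER BY rs.avg_duration DESC;
```

    Because this data is persisted in the database rather than in memory-only DMVs, it survives an instance restart, which is exactly what makes the regression scenario above tractable.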
    300
    DBA
    Distributed Replay is one of those features that it seems almost no one’s heard about, and yet it’s incredibly useful. It's the evolution of Profiler's replay trace feature, but far less troublesome than its predecessor.

    In this session, we'll look at how to set Distributed Replay up, some of the common pitfalls that trip people up, and how Distributed Replay can be used to do a before-and-after performance comparison. We'll be using a SQL 2012 to SQL 2014 upgrade as an example, but the methodology would apply to any before-and-after performance comparison, such as when deploying major changes to a database.
    300
    DBA
    Performance troubleshooting is a complex subject, with many factors under consideration. Finding poorly performing SQL statements means using proven methodologies and evaluating the performance data available in the Dynamic Management Views and Functions.

    In this session, we’ll go over a foundation of how and which DMVs to use to identify those problematic statements for versions of SQL Server from 2005 – 2014. We’ll be demonstrating using practical examples, including code that can be taken away and used on attendees’ own SQL Servers. We’ll also discuss how to identify common causes of performance issues, and learn how to quickly review and understand the wealth of performance data available.
    300
    Dev
    We all know that correct indexing is king when it comes to achieving high levels of performance in SQL Server. When indexing combines with the enterprise features partitioning and compression, you can find substantial performance gains.

    In this session, you'll use scripts to query dynamic management views (DMVs) to identify the right objects on which to implement this strategy, measure performance gains, and identify the impact on memory and other resources. Devise a sliding-window, data-loading strategy by using partition switching. Track fragmentation at the partition level and minimize index maintenance windows. Discover partitioning improvements in SQL Server 2014. Take home an advanced script for tracking usage and details on fragmentation, memory caching, compression levels, and partitioned objects.
    300
    DBA
    Extended Events is the replacement for the Profiler tool. It is the premier tool for capturing diagnostic information for SQL Server, with advanced capabilities unlike anything we've had available before, but using new features can take time to learn.

    This session is focused on making you effective with Extended Events. Find out how to be up and running with EE in just minutes, not hours or days. Join us to take a look at what EE can do and discover what's happening inside your SQL Servers in ways not possible before.
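    For a flavour of how lightweight EE can be, here is a minimal sketch of an event session (the session name, duration threshold and file name are invented):

```sql
-- Capture completed statements that ran longer than one second.
CREATE EVENT SESSION [LongRunningQueries] ON SERVER
ADD EVENT sqlserver.sql_statement_completed
    (WHERE duration > 1000000)            -- duration is measured in microseconds
ADD TARGET package0.event_file
    (SET filename = N'LongRunningQueries.xel');

ALTER EVENT SESSION [LongRunningQueries] ON SERVER STATE = START;
```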
    200
    DBA
    Migrating SQL Server can quickly become a daunting project if you have little or no prior experience. Learn the approach taken by a leading Microsoft consultancy: discover which important tasks to perform in a typical modernisation engagement, what challenges arise and how to tackle them, and how to use free tools like Power BI and the MAP Toolkit to get the project started.
    300
    Dev
    We will follow the life cycle of a query in SQL Server and see what happens at each step until the results are returned to the client. We will also see what wait stats are, where they occur in the life cycle of a query, and more.
    200
    Dev
    We will see whether SQL Database Elastic Scale is really elastic or has some rigid aspects, starting from the fundamentals, passing through the available tools and ending with the management services. We will see some examples from the developer perspective and also from the management side.
    300
    BI

    How do you get better insight into the current situation of an organization by analysing its data? How can we predict the next step? Machine Learning is a subfield of computer science which is so pervasive today that you probably use it dozens of times a day without knowing it. Azure Machine Learning is a valuable tool that can be employed by data scientists with different skill levels. Azure ML also supports custom R code, which can be dropped directly into your workspace. In this session, I will demo the interaction of the R language with Azure Machine Learning: uploading an R package into Azure ML, preparing data in R, and publishing R code into Azure ML.
    400
    BI
    Wittgenstein wrote that "The limits of my language mean the limits of my world." R and SQL will be essential for business intelligence professionals and budding data scientists alike.

    This session is aimed at teaching R scripting for SQL professionals. Let's look at where R and SQL can complement each other. We shall look at:

    - when R and SQL use the same terms to mean the same thing
    - when R and SQL use different terms to mean the same thing
    - key concepts in each language which reveal fundamental similarities and differences between the two.

    Wittgenstein also wrote that 'I don't know why we are here, but I'm pretty sure that it is not in order to enjoy ourselves.' Hopefully that will not be true of this session, and you can be assured that Wittgenstein's Poker will most definitely be left behind! The R and SQL scripts will be made available before the session, so you can join in the fun as you go along.
     
    Come and remove the limits of your programming world by learning R, using skills you already have in SQL.
    We will look at R from the SQL approach, and you'll see that R is definitely not as puzzling as the Tractatus Logico-Philosophicus.
    200
    Car
    Business Intelligence is still a hot topic years after the term first appeared in the job market. The term has changed many times: Data Warehouse, Business Intelligence, Data Analysis, Cloud BI…. BI-related jobs are still highly paid, demand for BI is high, and the number of BI professionals is not, which creates a niche of high-demand, well-paid jobs. On the other hand, BI is not out of the blue for DBAs, database developers, software developers, and many other IT professionals: most software developers build reports and analysis elements in their everyday work. However, that experience in report writing and analysis isn't much help if they want to step into BI jobs. To get into the BI market you need not only experience in report writing, but also the conceptual and architectural underpinnings of BI and DW systems, an understanding of the components of BI, and knowledge of its tools.

    In this session I will explain the steps to learn BI, so you can prepare yourself for this high-demand market. This learning path is a walk-through I have refined over many years, and I believe it is the best path to learn BI; it will lead you to whatever end you choose, whether you want to be a BI professional, architect, or consultant. I should mention that this walk-through is for the Microsoft BI career path, though some steps, such as the BI fundamentals, are generic.
    Here are the steps:
    • Prerequisites
    • Fundamentals, Data Warehouse and ETL
    • Modelling with BISM
    • Data Governance
    • Data Visualization
    • Power BI
    • Data Mining
    • Azure
    200
    BI
    An intelligence system helps us find issues before they happen. Insight into what is happening now and what will happen in the future helps us react properly to change. In this session, I will show how the combination of Power BI Desktop and Azure ML can be helpful for predictive analysis; these tools complement each other. Power BI gathers the features of Power Query, Power View, and Power Pivot all in one, so building dashboards and effective visualization items with this tool is much easier nowadays. Azure Machine Learning is a valuable tool for predictive analysis. You will learn how to analyse data with Power BI first: loading data from different sources, transforming the collected data, and modelling with Power BI Desktop. In the next step, I will show how the analysed data can be used for prediction. Moreover, in this session I will demonstrate how to use Azure ML to create prediction models, create a web service, use the created web service in a .NET project, and call the web service from Excel files.
    200
    BI
    In most marketing departments, the tactical question is who is going to buy our products. It is more cost-effective to identify and spend money on high-potential customers than on those who are not likely to purchase. Hence prediction does matter: finding the potential customers and analysing their behaviour can be achieved via predictive analysis. In this session I will show some demos of three different analysis tools proposed by Microsoft: SSAS, Excel and Azure ML. A comparison of SSAS and Azure ML will be presented as well.
    300
    DBA
    You want to migrate to Azure SQL Database?
    You want to move your data from on-premises SQL Server or SQL Server in Azure VMs to Azure SQL Database?
    You want to do this without stopping your applications?
    You want to do this easily and with no learning curve?
    You want to do this with a proven technology that you know well already? Come and see how enhancements in Transactional Replication solve this problem for you. We will show you how Transactional Replication works the same for on-premises, virtualized and cloud scenarios.
    300
    DBA
    Learn about the AlwaysOn Availability Groups feature, which enables very high reliability and availability configurations and services. Understand the configuration considerations, as well as what to watch out for. If you did not attend PASS Summit 2015, this session is a must-attend to discover and see in action what is coming in this space in the next version of SQL Server.
    200
    DBA
    In this session we will illustrate how some major companies use SQL Server technologies for their scenarios. Be it for read scale out, collaboration, high availability, disaster recovery, upgrade and migration… we'll tell you about customers the SQL Server Product Group worked with and explain how they deliver on their scenarios with SQL Server technologies.
    400
    DBA
    If you have an AlwaysOn Availability Group setup, then come learn about the new enhancements shipped with SQL Server 2012 Service Pack 3 and above which allows you to:
    1. Troubleshoot failovers in your Availability Group easily
    2. Determine the reason for connectivity loss and timeouts
    3. Understand which part of your topology is the reason for latency
    300
    DBA
    This session will showcase several improvements in the Database Engine for SQL Server (2012 through 2016) that address some of the most common customer pain points: tempdb, the new cardinality estimator, memory management, partitioning and ALTER COLUMN, as well as diagnostics for troubleshooting query plans, memory grants, and backup/restore. Come and see this demo-filled session to understand these changes to the performance and scale of the database engine, and the new and improved diagnostics for faster troubleshooting and mitigation. Learn how you can use these features to entice your customers to upgrade and run SQL Server workloads with screaming performance.
    Objectives:
    1. Learn about the performance, scale and diagnostics enhancements in the SQL Server database engine.
    2. Evangelize these enhancements to get out-of-the-box performance.
    200
    BI
    Data science has been the buzz du jour for a while now. And of course Microsoft has been making waves with the Cortana Analytics Suite.
    But the full Microsoft advanced analytics stack is much larger than just the Cortana Analytics Suite.

    Do you know the possibilities, and all the scenarios in which it can benefit your company, or even you as a DBA or BI developer?
    Do you know that Microsoft’s advanced analytics offering comprises not only Azure but also on-premises products?

    In this session we’ll explore what the impact of data science is for a DBA or BI developer.
    We'll then look at the different parts in Azure and on-premises and how they can fit together to form an advanced analytics solution.
    In the last part, we’ll look at a simple scenario that most people can easily start implementing.
    200
    DBA
    Resource Governor can be used to manage SQL Server workloads and system resource consumption. We can specify limits on the amount of CPU, physical IO (since SQL Server 2014), and memory that incoming requests can use. It provides us with a way to deal with rogue and runaway queries which could affect SQL Server’s performance and impact all other users.

    By leveraging Resource Governor in SQL Server, you can achieve predictable performance, especially in consolidated or shared SQL Server instances. Rogue queries can be throttled to prevent them affecting everybody else, so you can scale up and consolidate various applications and environments onto a SQL Server instance without having to worry about how to manage and balance the various processes and how many resources they can use.

    Come and learn (with the help of some demos) how to get started with Resource Governor and how to use it to limit the resources which particular users or applications can use, or to ensure some of them get a guaranteed amount of resources.

    This should help you to come up with a framework and solution to achieve scalability, consolidate your servers and use resources efficiently.
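    As a hedged illustration of the moving parts involved (the pool, group, login name and limit values are all invented), the basic setup looks like this:

```sql
-- Create a capped resource pool and a workload group that uses it.
CREATE RESOURCE POOL ReportingPool
    WITH (MAX_CPU_PERCENT = 30, MAX_MEMORY_PERCENT = 30);
CREATE WORKLOAD GROUP ReportingGroup USING ReportingPool;
GO
-- Classifier function (created in master) routes incoming sessions to a group.
CREATE FUNCTION dbo.fnClassifier() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    IF SUSER_SNAME() = N'ReportingUser'   -- hypothetical reporting login
        RETURN N'ReportingGroup';
    RETURN N'default';
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```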
    200
    BI
    Prescriptive analytics are the “final frontier” for any BI environment. In short, they give you decision options based on predictions.
    In this session we’ll guide you through building a simple prescriptive solution using Power BI and R.

    This session explains the basics you need to know to get started building your own prescriptive solutions.
    No prior knowledge is necessary. This one-hour session is introductory, not in-depth.
    200
    BI
    Microsoft states that Power BI is “a cloud-based business analytics service that enables anyone to visualize and analyze data with greater speed, efficiency, and understanding”.
    Does this mean Power BI Desktop can also be used for more than just building pretty dashboards?

    In this session we’ll explore how Power BI can not only be used to democratize BI across the enterprise, but also to democratize Data Science across the enterprise.
    An introduction to the general Data Science process is included, so no previous knowledge is required.
    200
    BI
    There are dozens and dozens of great books, videos and sessions with incredible tips on data visualization. But why are we still seeing bad data visualizations being made?
    We’ll start off with widely accepted best practices and we’ll discuss less accepted practices.
    Next we’ll go over recent real world data visualizations, discuss what went wrong with them and what they should have looked like.


    After you sit down in this session, I expect you to laugh, shout in disbelief, and freely discuss your opinion with everyone else, as your participation is part of the fun.
    200
    Dev
    So many of us have learned data modeling and database design approaches from working with one database or data technology. We may have used only one design tool. That means our vocabularies around identifiers and keys tend to be product-specific. Do you know the difference between a unique index and a unique key? What about the difference between RI, FK and AK? Do you know if your surrogate keys have their companion alternate keys?

    In this session we’ll look at the generic and proprietary terms for these concepts, as well as where they fit in the data modeling and database design process. We’ll also look at implementation options across a few commercial DBMSs and datastores. These concepts span data activities, and it’s important that your team understand each other and where they, their tools and their approaches fit to produce a successful database design.
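    To ground the terminology, a small hypothetical sketch in T-SQL (table and column names are invented): the surrogate key is the primary key, the business identifier becomes the alternate key (AK) via a unique constraint, and the foreign key (FK) enforces referential integrity (RI):

```sql
CREATE TABLE dbo.Customer
(
    CustomerID   int IDENTITY PRIMARY KEY,      -- surrogate key
    CustomerCode varchar(20) NOT NULL UNIQUE    -- alternate key: the natural identifier
);

CREATE TABLE dbo.[Order]
(
    OrderID    int IDENTITY PRIMARY KEY,
    CustomerID int NOT NULL
        REFERENCES dbo.Customer (CustomerID)    -- foreign key enforcing RI
);
```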
    300
    DBA
    After this session you will:
    1. Understand the diagnostics enhancements available in SQL Server database engine in SQL Server 2012 Service Pack 3 and above
    2. Leverage the diagnostics to troubleshoot and mitigate issues quickly in mission-critical environments
    3. Simplify troubleshooting experience for common SQL Server scenarios
    300
    Dev
    Never, ever, let the code defeat you; there is always a way to code your way out. This one-hour session showcases a subset of the T-SQL tricks it took over 10,000 hours to learn.

    Oftentimes T-SQL code sits in symbiosis with a host application, and this can be agonizing. This highly creative selection of T-SQL tricks gets you where you need to go without disturbing the front-end developers.

    For some time, some years back, I worked in “tandem” with a software house. I say “in tandem”: I worked; they, erm, didn't. This meant that any change I needed to make had to be done in SQL alone. This is difficult but not impossible (if you're SQL Sneaky). Here, then, I present a handful of the SQL survival tricks I used.
    200
    BI
    In this session we will overview the key trends in the data platform: existing Business Intelligence, Data Warehouse and relational databases, and how the new technologies from the Big Data space are evolving alongside them. The biggest challenge is to marry BI/DW and Big Data, which presents an integration challenge; the big question is how existing technology can help enable new forms of analytics and applications that use both. It is not a question of purchasing the new technology, but rather of choosing the right technology for better integration across the data platform. We will overview the key steps, with the technology, to build a repository that can handle huge volumes of data, to analyse data streams such as the Internet of Things (IoT), and to keep a keen eye on new techniques of exploration and analysis.

    300
    BI
    The analysis of raw data requires us to find and understand complex patterns in that data. We all have a toolbox of techniques and methodologies that we use; the more tools we have, the better we are at the job of analysis. Some of these tools are well known, data mining for example. This talk covers some of the less well-known techniques that are still directly applicable to this kind of analytics. Last year at SQLBits I gave a two-hour session on four such topics:
    • Monte Carlo simulations (MCS)
    • Nyquist’s Theorem
    • Benford’s Law
    • Simpson’s paradox    
    I will not be assuming that you attended last year’s talk; although if you did and enjoyed it then it is highly likely that you will enjoy this one!  This session will focus on more of these invaluable techniques.  For example, we’ll talk about:
    • Dark Data
    • Probability calculations
    • RFI    
    In each case I try to give you an understanding, not of the maths behind these techniques, but of how they work, why they work and (most importantly) why it is to your advantage to know about them.  I have genuinely chosen only techniques that I have found invaluable in my commercial work. 
    300
    Dev
    Nowadays many companies don't have dedicated SQL developer positions, so most of the SQL code is written by application developers, who use only a subset of SQL Server features, and usually in a suboptimal manner.
    I have spent the last ten years working with application developers and have collected the common mistakes and misunderstandings between them and DBAs that increase development, test and deployment costs and reduce overall quality.
    In this session we will cover the most important things application developers need to know about SQL Server that cannot be easily or cheaply fixed by DBAs or consultants.
    200
    Dev
    Machine Learning can solve all your problems: it can tell you what to do better and how to improve your business processes, increase revenue, reduce waste, etc. Well, not really. Machine Learning is not magic. You don't just apply machine learning in your organisation and have intelligent, innovative solutions come out of nowhere. Machine Learning has its limitations and its beauty, but it all comes down to data and questions. You need good data and the right questions, and then you are good to go.

    In this session, we are going to look at a typical machine learning process and how to apply it to some real-world data. We are going to use Azure Machine Learning to transform data and ideas into models that are production-ready in minutes, all of this while keeping the real world in mind.
    200
    BI
    Power BI can reach on-premises data sources as well as cloud-based ones. With Power BI Desktop you simply connect to on-premises data stores; however, when the report is published to the Power BI website there must be a bus connection between Power BI (in the cloud) and the on-premises data stores (such as SQL Server, Oracle, SSAS Multidimensional and so on). This is where Power BI gateways come into play. The Personal and Enterprise gateways create the connection path from the data set in Power BI in the cloud to the data store on-premises (on your organization's server, or even your laptop!), and they create that connection through Azure Service Bus.
    There are two types of gateway for Power BI, Personal and Enterprise, built for different purposes. There is also an SSAS Connector for DirectQuery connections to SSAS Tabular. In this session you will learn about all of these gateways and the connector, through many live demos.
    In this session you will learn:
    • The difference between the Personal Gateway, the Enterprise Gateway, and the SSAS Connector
    • In which scenarios each of these components would be helpful
    • The limitations of each gateway
    • Myths and misconceptions about these services
    300
    BI
    Temporal tables are a new type of database table introduced in SQL Server 2016. These tables are system-versioned and keep a history of the changes (inserts, deletes, updates) made to data rows. Retrieving the change log from these tables is easy, and they can simply tell you what the data in the table was at a specific point in time. They work with datetime2 columns to keep the FROM DATE and TO DATE information of each change, which means they can be used for implementing changes in dimensions; yes, you know what it's called: Slowly Changing Dimensions!
    In this session you will learn how to use temporal tables for SCD implementation in your data warehouse. You will also learn about some challenges this method might bring, and you will see with demos how to implement the solution and how to handle challenges such as inferred dimension members.
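    As a hedged taste of the syntax (table and column names are invented), a system-versioned table and a point-in-time query look like this:

```sql
-- SQL Server maintains the history rows itself in the named history table.
CREATE TABLE dbo.DimCustomer
(
    CustomerID int          NOT NULL PRIMARY KEY,
    City       nvarchar(50) NOT NULL,
    ValidFrom  datetime2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo    datetime2 GENERATED ALWAYS AS ROW END   NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.DimCustomer_History));

-- Point-in-time query: what did this row look like at a given moment?
SELECT CustomerID, City
FROM dbo.DimCustomer
FOR SYSTEM_TIME AS OF '2016-01-01T00:00:00'
WHERE CustomerID = 1;
```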