200
DBA
Sessions at SQL Bits deal with interesting and exciting features, innovations and real world experiences, but none deal with an area unfortunately relevant to us all - licensing. SQL Server licensing is complex and a clear understanding is vital to managers, architects, consultants, DBAs and even developers. Join us and we will walk you through the critical areas of ensuring licensing compliance, CPU core licensing changes with 2012+, Software Assurance, Edition impact, HA/DR solutions and virtualisation. We will also provide real-world examples of customer licensing challenges and solutions.
300
DBA
With every SQL Server version we get more performance-related tools to tackle performance problems. What kind of tools can we use for the various performance issues we face? Do the commonly used SQL Server performance tools provide full support for our day-to-day performance work?

There are many questions that need answers to help every DBA and developer stop scratching their heads. This session serves up a full platter of tools, and you can decide which to choose for the different scenarios you face in your workplace.
300
DBA
Have you ever had the need to access documents in your database as if they were files in the file system? SQL Server 2012 introduces a brand new method for managing large data objects (BLOBs) in a database. FileTables provide access to data using Transact-SQL - just like any other table inside the database - while at the same time providing access to the same data through the operating system File I/O API, i.e. just like any other folder in the file system.

In this session you will learn how to upgrade your document management solutions by migrating your large data to FileTables. The session covers the two most typical migration scenarios: migrating from a distributed data store, where files are stored outside the database, and from a homogeneous database, where files are stored inside the database but need to be accessible from the file system as well.
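As a hedged sketch of the dual access paths described above (the table and directory names are illustrative, and the database is assumed to already have FILESTREAM and a FileTable directory enabled):

```sql
-- Assumes FILESTREAM is enabled on the instance and the database
-- has a FILESTREAM filegroup and FileTable directory configured.
CREATE TABLE dbo.Documents AS FileTable
WITH
(
    FileTable_Directory = 'Documents',           -- folder exposed in the file share
    FileTable_Collate_Filename = database_default
);

-- T-SQL access: query documents like any other table.
SELECT name, file_type, cached_file_size
FROM   dbo.Documents
WHERE  file_type = 'docx';

-- File I/O access: the UNC root path to the very same data.
SELECT FileTableRootPath(N'dbo.Documents');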
300
BI
Ever wondered what actually happens when a cube is “processed”, why it takes so long, how you can configure and optimise cube processing, or what strategies people commonly use for improving it?

This session provides a deep dive into cube processing for MOLAP and Tabular cubes, to help you understand how it works and what you can do to work with it.

Come to this session for a better understanding of how to configure, optimise and tune cube processing.

Included in the session are case studies from our performance lab and some sample tools for analysing processing logs.
300
DBA
One of the first questions I usually ask while working on physical design is about estimated data growth and archiving strategy. This aspect is often overlooked, and as a result data professionals burn countless hours massaging fat and ugly databases. Archiving data is a complex task affected by multiple business and technical decisions. Patterns and best practices may not be enough - a generic solution offers temporary peace of mind and suboptimal performance, which is not sufficient in a hyper-competitive, data-driven world. I'm going to show real-world examples - from quick wins to black swans and perfect storms. What to expect:
  • Governance, Risk Management, Compliance
  • SQL Server 2005 to 2014
  • Internals
  • Answers
300
BI
In this session, we will walk through best practices learned in the world of huge data volumes. Imagine 150M rows per day (!!!), and you must keep the entire history of the data. Partitioning, server configuration, caching data...
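A minimal sketch of the kind of partitioning such volumes usually call for (all names, dates and the single-filegroup mapping are illustrative; production designs typically spread partitions over several filegroups):

```sql
-- Hypothetical monthly partitioning for a high-volume events table.
CREATE PARTITION FUNCTION pfMonthly (date)
AS RANGE RIGHT FOR VALUES ('2014-01-01', '2014-02-01', '2014-03-01');

CREATE PARTITION SCHEME psMonthly
AS PARTITION pfMonthly ALL TO ([PRIMARY]);

CREATE TABLE dbo.Events
(
    EventDate date         NOT NULL,
    Payload   varchar(100) NULL
) ON psMonthly (EventDate);

-- New months are added with SPLIT; old months can later be
-- switched out to an archive table with ALTER TABLE ... SWITCH.
ALTER PARTITION FUNCTION pfMonthly() SPLIT RANGE ('2014-04-01');
```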
300
DBA
Kerberos configuration and troubleshooting has always been notoriously difficult, which has led many DBAs and SQL Developers to resort to SQL Authentication. Official sources present a highly complex description of the Kerberos protocol that puts people right off. I'd like to offer an understanding in simple terms and present common design patterns that make it easier to get working. I will also show a demo of how to troubleshoot common problems and put them right.
300
DBA
Pop quiz DBA: Your developers are running rampant in production. Logic, reason, and threats have all failed. You're on the edge. What do you do? WHAT DO YOU DO?

Hint: You attend Revenge: The SQL!

This session will show you how to "correct" all those bad practices. Everyone logging in as sa? Running huge cursors? Using SELECT * and ad-hoc SQL? Stop them dead, without actually killing them. Ever dropped a table, or database, or WHERE clause? You can prevent that! And if you’re tired of folks ignoring your naming conventions, make them behave with Unicode…and take your revenge!

Revenge: The SQL! is fun and educational and may even have some practical use, but you’ll want to attend simply to indulge your Dark Side. Revenge: The SQL! assumes no liability and is not available in all 50 states. Do not taunt Revenge: The SQL! or Happy Fun Ball.
300
BI
If you want to conduct advanced, business-oriented Business Intelligence development and Business Analysis, then it is useful to understand data visualisation and have R as a tool in your toolset. In this intensive one-hour session, we will look at the latest suite of Microsoft Business Intelligence tools - Microsoft Power BI in conjunction with R. The demos will be implemented end-to-end in both tools so that delegates can see when to use each technology - 'what to use' and 'when to use it' - as well as where the technologies complement each other. We will visualise the data according to data visualisation principles set out by Stephen Few, amongst others.
300
BI
Parallel Data Warehouse (PDW) isn't SQL Server - it's different. It's bigger than that. It's a massively parallel-processing, distributed-database platform that leverages SQL Server as part of its architecture. But it has so much more to offer...

In this 1-hour session, you will learn how to "embrace the change" and accelerate your migration to the data warehouse storage engine for the Microsoft Data Platform. This session will be packed with advice and recommendations from actual customer deployments and real-world experiences. Get on board the PDW train, and transform your data warehouse with the Microsoft Big Data Analytics Appliance!
200
BI
SQL Server 2014 brings a very important new feature: Clustered Columnstore Indexes. Using xVelocity compression and batch processing mode, this type of index, first introduced in SQL Server 2012, is greatly enhanced: it becomes clustered and updatable, making it the recommended default solution for data warehouse workloads. Join to discover Clustered Columnstore Indexes, understand the principles on which they are built, and learn what you should do to get the best out of them. This feature is targeted at OLAP installations and is already available in PDW (Parallel Data Warehouse) v2.
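A minimal sketch of creating one (the fact table and its columns are hypothetical):

```sql
-- Hypothetical fact table; names are illustrative.
CREATE TABLE dbo.FactSales
(
    DateKey    int   NOT NULL,
    ProductKey int   NOT NULL,
    Amount     money NOT NULL
);

-- SQL Server 2014: the clustered columnstore replaces the rowstore
-- entirely and, unlike the 2012 nonclustered columnstore, stays updatable.
CREATE CLUSTERED COLUMNSTORE INDEX ccsi_FactSales ON dbo.FactSales;
```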
400
BI
This deep technical session is about applying advanced transformations through the M language. M is the formula language of Power Query, and it is much more powerful than the Power Query GUI functions. In this session you will learn about top M functionality that does not exist in the GUI but is very helpful in ETL implementation. Features such as error handling, working with generators and custom functions will be discussed. You will also see lots of real-world live demos; as a sample, a date dimension with fiscal calendar attributes and live-fetched public holidays will be discussed in depth.
300
BI
How to do those little things that make your SSRS reports look a whole lot better. Covering improved charts and labelling, alternating row and column colours in tables, and rendering in different formats, this session will show you how to add a bit of extra polish to an otherwise normal SSRS report and, most importantly, explain *how* they work.
400
DBA
Most Enterprise-scale deployments feature SAN-based storage. As SQL DBAs we rarely get our hands on these cool toys, BUT more and more senior roles require at least some familiarity with SAN technologies - so how do you get this required experience? Answer: come to this session, of course (!) - a '90% practical' session featuring the return of Hue's popular 'no bullets' slides - and gain this exposure.

Hue will bring two different training 'SAN in a box' rigs - one based on Microsoft and one with an interface more akin to what you'd find in a typical datacentre. SAN storage will be provisioned in front of your eyes and you'll see technologies like iSCSI in action and explore uniform and mixed storage pools and LUNs aplenty!

The second part of the session will focus entirely on troubleshooting SAN issues - where we'll explore some of the SAN 'dark speak' such as IOPS, queue lengths, I/O size and use Microsoft's SQLIO to test the SAN before we even deploy a DB - OR troubleshoot performance problems.

A '400 Advanced' session in parts BUT frequently returning to basic concepts before 'deep diving' again - come along, I'll guarantee you'll pick-up some new tricks and much needed Enterprise technology exposure.
200
DBA
What is scale? You manage one database, or 10 databases - what do they have in common? Simply, they are of the same scale. The strategies and techniques that you use to manage a handful of databases are all straightforward and well known.

Now what happens if we have to manage 100 databases? Can you use the same strategies and techniques? How about 500? Is it still the same? Now let's really start scaling: let's jump to 1,000 databases. Do your strategies and techniques stay the same?

In this session we will explore the differences between managing one database and managing 1,000 databases. This is based upon real-world practical experience from someone who has been there and done that. We will discuss different strategies and techniques, along with the necessary automation.

Leave the session feeling grateful that you don't have a large number of databases to tend.
200
Dev
You've got a piece of code that you want to implement as a trigger, but you've read that triggers can grind your performance to a halt. Developers everywhere write triggers to implement business logic, to enforce certain types of constraints, or to avoid changing database schema or external code. Many triggers are inefficient and violate best practices - but it doesn't have to be this way. Come learn how to improve your triggers and keep them off the list of potential scapegoats for your SQL Server performance woes.
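As a sketch of the set-based style such a session is likely to advocate (table, column and trigger names are hypothetical):

```sql
-- A set-based audit trigger: it reads the whole "inserted" pseudo-table
-- in one statement, rather than assuming a single row or cursoring
-- through the changes. All object names are illustrative.
CREATE TRIGGER trg_Orders_Audit
ON dbo.Orders
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    IF NOT EXISTS (SELECT 1 FROM inserted) RETURN;   -- nothing to do

    INSERT INTO dbo.OrdersAudit (OrderID, OldStatus, NewStatus, ChangedAt)
    SELECT d.OrderID, d.Status, i.Status, SYSDATETIME()
    FROM   inserted i
    JOIN   deleted  d ON d.OrderID = i.OrderID
    WHERE  i.Status <> d.Status;                     -- only rows that changed
END;
```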
300
DBA
AlwaysOn Availability Groups (AGs) are today's most comprehensive SQL Server solution for High Availability, Disaster Recovery and workload scale-out. In the past year I've had the opportunity to use AlwaysOn AGs extensively in a production environment. In this session we'll go over the steps to correctly build, configure and use AlwaysOn AGs. More importantly, we'll go over the various tips and best practices that are crucial to fully gain the advantage this feature encapsulates.
300
DBA
So you want to store credit card details in SQL Server - what do you need to do?
Holding card details means that you need to achieve PCI certification - what does this mean from a SQL Server perspective?
This session covers the following for SQL Server:
* Protect the data
* Secure access to the data
* Detect and audit access to the data
We will do an overview of all the areas and drill down into the storage of data and how that can be achieved seamlessly from the application.
At the end of the session you will know what you need to do, and how to do it, to store card details in SQL Server and achieve PCI certification.
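One hedged sketch of column-level protection with a symmetric key, along the lines the bullets above suggest (key names, the table and the test card number are all illustrative):

```sql
-- Illustrative key hierarchy and column-level encryption for card numbers.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password here>';
CREATE CERTIFICATE CardCert WITH SUBJECT = 'Card data protection';
CREATE SYMMETRIC KEY CardKey
    WITH ALGORITHM = AES_256
    ENCRYPTION BY CERTIFICATE CardCert;

OPEN SYMMETRIC KEY CardKey DECRYPTION BY CERTIFICATE CardCert;

-- Store the card number encrypted (column is varbinary).
INSERT INTO dbo.Cards (CustomerID, CardNumberEnc)
VALUES (42, ENCRYPTBYKEY(KEY_GUID('CardKey'), '4111111111111111'));

-- Decrypt only for authorised callers.
SELECT CustomerID,
       CONVERT(varchar(20), DECRYPTBYKEY(CardNumberEnc)) AS CardNumber
FROM   dbo.Cards;

CLOSE SYMMETRIC KEY CardKey;
```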
200
Dev
How do you deploy your SQL scripts across environments? If you run them individually or have no process this is just the session for you. Primarily aimed at developers but also useful for DBAs, this session will explain ways in which the SQLCMD utility can be used to improve the SQL script deployment process across environments. The type of script doesn't matter – table, stored procedure, function, DDL – you can package them all up and execute them with a single command.

If you are interested in learning more about the SQLCMD utility or just want some ideas on how your deployment process can be improved, this could be the session you've been looking for.
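A hypothetical master script showing the SQLCMD scripting directives (`:setvar` and `:r`) that make single-command deployments possible; the file names, variable values and server name are all assumptions:

```sql
-- deploy.sql : a hypothetical master SQLCMD script, run as:
--   sqlcmd -S MyServer -i deploy.sql -v Environment="Test"
:setvar ScriptPath "C:\Deploy\Scripts"

PRINT 'Deploying to $(Environment)...';

-- Include the individual change scripts in order.
:r $(ScriptPath)\001_tables.sql
:r $(ScriptPath)\002_procedures.sql
:r $(ScriptPath)\003_data.sql
GO
```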
200
BI
Are you confused by the fact that there is not just one end-user BI tool from Microsoft, but many? Despite Microsoft's 2009 decision not to build further reporting tools but to consolidate the existing portfolio instead, a whole bunch of tools are available, leading to some confusion among users: SQL Server Reporting Services, SharePoint Server, PerformancePoint Services, PowerPivot for Excel and SharePoint, Power View, Report Builder, Excel and Excel Services, Visio, …

You will take a lap around all of these tools and learn in which case to use which. In this demo-heavy session, the very same report will be built with each of these tools so you can see the strengths and weaknesses of each of them with your own eyes. Back in the office, you will be armed with the knowledge to decide which of the tools fits your end users best.
300
DBA
The way SQL Server estimates cardinality for a query has been updated for SQL 2014. In this session we will discuss why cardinality matters, the differences between the SQL 2014 cardinality and previous versions, and how to evaluate if your queries will benefit after upgrading to SQL 2014.
400
BI
Data Warehouses are heavily used nowadays in most businesses around the world. Data Warehousing brings higher performance and faster, more insightful data analysis out of operational databases. However, there are challenges in designing and implementing Data Warehouses that call for robust and reliable ETL implementation. In this session you will learn an ETL architecture implemented with SSIS and MDS that solves two of the most challenging Data Warehousing scenarios: Slowly Changing Dimensions and Inferred Dimension Members.
A Slowly Changing Dimension is about the action to take when a change happens in one of the attributes of a dimension. An Inferred Dimension Member is a dimension member that first appears in the source of the fact table with only a business key that does not yet exist in the dimension. In this session MDS will be used as the single version of the truth serving as a source.
There will be many demos throughout this session to help you understand the design and implementation of the architecture.
400
Dev
You identified a query that is causing performance issues, and your mission is to optimize it and boost performance. You looked at the execution plan and created all the relevant indexes. You even updated statistics, but performance is still bad. Now what?
In this session we will analyze common cases of poorly performing queries, such as improper use of scalar functions, inaccurate statistics and the adverse impact of parameter sniffing. We will learn through extensive demos how to troubleshoot these cases and how to boost performance using advanced and practical techniques. By the end of this session, you'll have a list of tips and techniques to apply in your environment.
300
Dev
In this session, we’ll explore the good and bad about locking and blocking – essential mechanisms inside SQL Server that every database developer and administrator needs to understand thoroughly. Locking and blocking affects performance and data integrity, and we’ll see how we can influence that functionality under pessimistic concurrency control as well as how snapshot isolation changes the game.

This session will focus on reading data (while being blocked). We'll consider table and index design and query hints, why we should rarely if ever use NOLOCK, and what alternatives we have. Be prepared for a demo- and code-intensive session - no “GUI action.”

300
DBA
Clustering, log shipping, mirroring, AlwaysOn Availability Groups, replication - database administrators have so many confusing choices. Microsoft Certified Master Brent Ozar will cut through all of it and make it seem effortless.

You'll leave with simple, easy-to-use worksheets that you can use with the business, project managers, and customers to quickly work out the right RPO and RTO numbers, cost ranges, and timelines. These worksheets cover:
  • How to get the business to get real about data loss
  • How to set realistic expectations of what databases can (and can't) do
  • How to get the right hardware for your next project
400
Dev
Everybody knows that the use of stored procedures offers a number of benefits over issuing T-SQL code directly from an application. In this demo-heavy session I am going to explain:
  • Why plan caching and reuse is a good thing.
  • How to avoid unnecessary recompilations for plan stability-related reasons.
  • How to avoid unnecessary recompilations for plan optimality-related reasons.
  • When plan reuse is not a good thing, and how you can deal with the “Parameter Sniffing” problem.
  • Why you should avoid conditional logic inside stored procedures that do data access.
300
BI
Corporate Business Intelligence does not have to be long and boring lists. Nor need it be usurped by smoke and mirrors solutions which lure IT departments into complex, unsuitable and expensive presentation layers for enterprise data. Most IT departments already have in place a powerful solution which they might not be using to its full potential – SQL Server Reporting Services. This session explains how to push SSRS to a new level and deliver:
  • Compelling Dashboards
  • Arresting KPIs
  • Stunning gauges
  • Descriptive Scorecards
  • Maps
  • Enhanced interactivity
  • Tablet and SmartPhone output
Delivering reports which can rival the output of competitive products does not require anything other than an appreciation of a series of tips and tricks which extend your SSRS knowledge and current skills. This session will teach you how to:
  • Extend the use of parameters in new and unexpected ways.
  • Push SSRS functions to their limits.
  • Use Images to powerful effect.
  • Use tables in ways you never imagined possible to align and present data, sparklines and trends.
  • Control positioning and layout using little-known options and properties.
Then you will see how to revamp the SSRS interface to create:
  • Touch-enabled buttons and tabs for reports better adapted to tablets and smartphones.
  • “Power View” styling for a more modern look and feel.
  • Imitation popup menus in an SSRS report.
Finally you will see a few essential data techniques to:
  • Create reusable functions to calculate and return maximum values for gauges.
  • Link datasets.
  • Cache datasets.
  • Link an SSAS server to return data to complex T-SQL queries.
  • Page data sets.
300
Dev
Cloud computing is distributed computing which requires thoughtful planning and delivery – regardless of the platform choice. Because of the inherent complexity of running services at scale, it is important to understand the behavior and performance of the cloud platform in which a database is hosted to match business performance and scale expectations and requirements. This session will look at different scenarios, patterns, and best practices for building scalable databases in Windows Azure SQL Database. We’ll also look at scale-up vs. scale-out models, horizontal partitioning, and some tips for dealing with some of the complexities of cloud database scalability including identity generation, working with relationships, and fan-out queries.
300
DBA
When moving databases to a virtual environment the performance metrics DBAs typically use to troubleshoot performance issues such as O/S metrics, storage configurations, CPU allocation and more become unreliable. DBAs no longer have a clear, reliable view of the factors impacting database performance. Understanding the difference between the physical and virtual server environment adds a new dimension to the DBA tasks. 
300
DBA
Encryption is becoming required in more and more environments, but implementing encryption can dramatically affect performance. Learn how you can maintain high performance while still protecting your data with encryption. This session will examine communications, Transparent Data Encryption and a technique for using Symmetric Key encryption without a high performance penalty.
300
BI
Unit testing is a widely accepted best practice, yet it is difficult to do well with SSIS packages. This session will start with a brief introduction to unit testing principles, and then focus on examples of testing your SSIS packages using ssisUnit. It will cover in depth how to set up your development environment to support test-driven development and what techniques can be used to make this practical for SSIS. We'll show how to handle test data, how to set up the environment for multiple developers, how to make tests flexible, and how to ensure that the tests remain valuable over time.
300
DBA
Extended Events is a highly scalable and highly configurable monitoring platform, which every DBA must be familiar with. It has many advantages over alternative tools, such as SQL Trace or Dynamic Management Views. In some cases, it is the only tool that can provide the desired monitoring solution.
In this session we will demonstrate several common use cases, such as monitoring query waits, troubleshooting deadlocks and monitoring page splits. We will demonstrate how to set up an event session for each use case, and how to analyse the collected data in a meaningful way. By the end of this session, you'll have several practical monitoring and troubleshooting solutions to apply in your environment.
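As one concrete illustration of the deadlock-troubleshooting use case (the session and file names are assumptions):

```sql
-- An illustrative event session that captures deadlock reports to a file.
CREATE EVENT SESSION [DeadlockCapture] ON SERVER
ADD EVENT sqlserver.xml_deadlock_report
ADD TARGET package0.event_file (SET filename = N'DeadlockCapture.xel');

ALTER EVENT SESSION [DeadlockCapture] ON SERVER STATE = START;

-- Read the collected deadlock graphs back with T-SQL.
SELECT CAST(event_data AS xml) AS deadlock_xml
FROM   sys.fn_xe_file_target_read_file('DeadlockCapture*.xel', NULL, NULL, NULL);
```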
200
BI
PDW is constantly evolving and adapting to the new world of data. New features are always being added to increase innovation, enhance productivity and increase adoption.

Join me as I share with you the new features of PDW 2012 in Appliance Update 1 - the next update to PDW's functionality. In one short hour I will highlight all of PDW's new features, paying special attention to my favourite PDW feature: PolyBase.
400
DBA
Tempdb unleashed - why is tempdb so important? Why should we care about this database? Why does it behave differently from user databases? What are the implications of that?
200
DBA
Much like the cars of the 1970s sacrificed gas mileage for better performance, database technology has also made its share of sacrifices for efficiency. Fortunately, times have changed significantly since then. Just as adding a turbocharger to a car delivers more power while saving fuel, the addition of compression to a database accelerates read performance while saving disk space.

Come learn how, why, and when compression is the solution to your database performance problems. This session will discuss the basics of how compression and deduplication reduce your data volume. We’ll review the three different types of compression in SQL Server 2012, including the overhead and benefits of each and the situations for which each is appropriate, and examine the special type of compression used for columnstore indexes to help your data warehouse queries fly. As with turbo, data compression also has drawbacks, which we’ll cover as well.
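A small sketch of estimating and then enabling PAGE compression (the table name is hypothetical):

```sql
-- Estimate the benefit first, then rebuild with PAGE compression.
EXEC sp_estimate_data_compression_savings
     @schema_name      = 'dbo',
     @object_name      = 'FactSales',
     @index_id         = NULL,
     @partition_number = NULL,
     @data_compression = 'PAGE';

-- Rebuild the (hypothetical) table with page-level compression.
ALTER TABLE dbo.FactSales REBUILD WITH (DATA_COMPRESSION = PAGE);
```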
300
Dev
What has the biggest impact on SQL Server overall performance? Hardware, SQL Server configuration, or maybe query tuning? No, none of those frequently discussed options. They are important, but the single most important factor is the database design. Third normal form is great, but sometimes a database really should be denormalized to gain a performance boost. This is where indexes shine. During this session you will see how important a proper table design can be, when to use indexes on computed columns and why they are way better than triggers, and how to create useful indexed views and assess their costs and benefits.
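As an illustration of the computed-column indexing mentioned above (the table, columns and tax rate are hypothetical):

```sql
-- A persisted computed column with an index can replace a trigger
-- that maintains a derived value. All names are illustrative.
ALTER TABLE dbo.Orders
    ADD TotalWithTax AS (Amount * 1.20) PERSISTED;

CREATE NONCLUSTERED INDEX ix_Orders_TotalWithTax
    ON dbo.Orders (TotalWithTax);

-- The optimizer can now seek on the derived value directly.
SELECT OrderID FROM dbo.Orders WHERE TotalWithTax > 1000;
```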
300
Dev
Have you ever looked at an execution plan that performs a join between two tables and wondered what a "Left Anti Semi Join" is? Joining two tables in SQL Server isn't always straightforward! Join me in this session where we will deep dive into how join processing happens in SQL Server. First we lay out the foundations of logical join processing. We will then dive further into physical join processing in the execution plan, where we will also encounter the "Left Anti Semi Join". After attending this session you will be well prepared to understand the various join techniques used by SQL Server, and interpreting joins from an execution plan will be the easy part.
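A hedged example of a query shape that typically produces a Left Anti Semi Join operator in the plan (table names are illustrative):

```sql
-- "Customers with no orders": NOT EXISTS is commonly implemented
-- as a Left Anti Semi Join in the execution plan.
SELECT c.CustomerID, c.Name
FROM   dbo.Customers AS c
WHERE  NOT EXISTS (SELECT 1
                   FROM   dbo.Orders AS o
                   WHERE  o.CustomerID = c.CustomerID);
```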
200
BI
Are multidimensional models dead on arrival? Are tabular models the future of analysis? Come to this session to learn the similarities and differences between these two approaches to analytical modeling available in SQL Server 2012 Analysis Services. We’ll explore the pros and cons of each type of model and review how to select the appropriate model for your analytical requirements.
300
DBA
Microsoft SQL Server 2014 brings to market new capabilities to simplify cloud adoption and help you unlock new hybrid scenarios. This demo-filled session will highlight these features and show how you can use these features to lower your TCO and help manage your mission-critical applications by leveraging the cloud to provide new disaster recovery and backup solutions. In this session we will look at 5 great hybrid scenarios that are made possible by both great new features as well as enhanced features of SQL Server 2014. We’ll not only look how to use and implement these hybrid features, but also look at and discuss scenarios around why you would use these hybrid cloud features to expand your on-premises options without adding complexity. We’ll look at how you can easily move workloads from your data center to Windows Azure while still maintaining a complete view of your infrastructure with increased proficiency and reduced cost.
300
BI
Do you want to load your data warehouse quickly and conveniently? SQL Server gives you plenty of options for getting that task done in an appropriate way. You will have a look at tools and commands like bcp, BULK INSERT, OPENROWSET, MERGE and SSIS, and see how to tune them for performance and how to secure the quality of the process and of the data itself. In addition, you will see how to handle changes in the data source (slowly changing dimensions) to guarantee a data warehouse your users are happy to work with.
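One of those commands sketched: a MERGE performing a Type 1 (overwrite) slowly changing dimension load; all table and column names are hypothetical:

```sql
-- Hypothetical upsert of a Type 1 dimension from a staging table.
MERGE dbo.DimCustomer AS tgt
USING stg.Customer    AS src
      ON tgt.CustomerKey = src.CustomerKey
WHEN MATCHED AND tgt.City <> src.City THEN
    UPDATE SET tgt.City = src.City            -- Type 1: overwrite in place
WHEN NOT MATCHED BY TARGET THEN
    INSERT (CustomerKey, Name, City)
    VALUES (src.CustomerKey, src.Name, src.City);
```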
300
Dev
Parameter sniffing is a good thing: it is used by the Query Optimizer to produce an execution plan tailored to the current parameters of a query. However, due to the way the plan cache stores these plans in memory, it can sometimes also be a performance problem. This session will show you how parameter sniffing works and in which cases it can be a problem. How to diagnose and troubleshoot parameter sniffing problems, and their solutions, will be discussed as well. The session will also include details on how the Query Optimizer uses the histogram and density components of the statistics object, and some other advanced topics.
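Two of the common mitigations such a session typically covers, sketched against a hypothetical procedure:

```sql
-- Option 1: compile a fresh plan for each call's parameter values.
CREATE PROCEDURE dbo.GetOrdersByCountry @Country varchar(50)
AS
SELECT OrderID, Amount
FROM   dbo.Orders
WHERE  Country = @Country
OPTION (RECOMPILE);
GO

-- Option 2: ignore the sniffed value and optimize using average density.
ALTER PROCEDURE dbo.GetOrdersByCountry @Country varchar(50)
AS
SELECT OrderID, Amount
FROM   dbo.Orders
WHERE  Country = @Country
OPTION (OPTIMIZE FOR (@Country UNKNOWN));
```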
300
BI
Wondering how all that big data in Hadoop is going to be moved around? Or how you are going to move any of it into your SQL Server environment? Come to this session and learn about some of the familiar and not-so-familiar tools you can use for moving big data. We'll compare the options, discuss how they can work with your existing Microsoft technologies, and provide some guidance on when to use each of the tools.
200
BI
Sqoop, Pig and Hive are three powerful and useful tools in the Hadoop ecosystem. This demo-heavy session will look at each of these tools and explore how you use each one and why. Come to this session for a good introduction to these Big Data tools.
300
BI
There are five data visualisation tools in the Microsoft BI stack: SQL Server Reporting Services, Excel (PivotTables and PivotCharts), PerformancePoint Services, Power View, and Power Map. In this session you will learn about the features supported by each of these tools and understand the pros and cons of each. You will see which tool performs best in which environment. The tools will be compared across different categories, from the UI to development effort, from environment to features supported. There will be lots of demos and benchmarking in this session. By the end, you will know how to find the data visualisation tool that best fits your business (or your customer's business).
200
DBA
The new version of the Microsoft data platform, SQL Server 2014, offers many exciting new features, which allow almost any business requirement to be satisfied easily. SQL Server 2014 is completely cloud-ready, and it's fully integrated with Windows Azure. It offers outstanding performance and scalability through in-memory technologies, enhanced query performance and more. It also offers first-class high availability solutions under the title "AlwaysOn". And, finally, SQL Server 2014, together with Microsoft Excel and other Microsoft BI tools, offers a complete BI solution out of the box.
In this session we will present all the exciting new features in SQL Server 2014 and talk about the benefits we can gain from upgrading to the new version.
300
DBA
In this session, I'll walk you through the configuration changes we make by default to SQL Server builds: what we switch on, what we switch off, and how you can decide what to do in your organisation.

I'll also explain some specialist trace flags that we don't use by default but that you should be comfortable enough with to make decisions about.
200
Dev
When you pass in a query, how does SQL Server build the results? We'll role play: Brent Ozar will be an end user sending in queries, and you'll be the SQL Server engine. Using simple spreadsheets as your tables, you'll learn how SQL Server builds execution plans, uses indexes, performs joins, and considers statistics. This session is for DBAs and developers who are comfortable writing queries, but not so comfortable when it comes to explaining nonclustered indexes, lookups, and sargability.
300
DBA
The session starts off by demonstrating how statistics directly affect execution plans, and how those plans affect performance, in order to establish immediately the importance of statistics. We then go over how statistics are stored and generated, including a detailed breakdown of the information contained within DBCC SHOW_STATISTICS, so that attendees will be able to accurately interpret their own statistics. From there we walk through how statistics are created, both automatically and manually, in support of better execution plans. We'll also cover the various mechanisms used to maintain statistics, since having accurate statistics is one of the most important things you can do from a performance tuning perspective. Throughout, I'll demonstrate mechanisms for monitoring statistics and observing the behaviour of statistics creation and updates, all using up-to-date tools. We'll even cover some of the new information available through SQL Server 2014 monitoring, as well as changes to cardinality estimation within SQL Server 2014.
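The inspection and manual maintenance commands discussed above, in brief (object and statistics names are illustrative):

```sql
-- Inspect the header, density vector and histogram of a statistics object.
DBCC SHOW_STATISTICS ('dbo.Orders', 'ix_Orders_CustomerID');

-- Create and refresh statistics manually when auto-update lags behind.
CREATE STATISTICS st_Orders_Country ON dbo.Orders (Country);
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;
```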
300
Dev
Building software is hard, and we often find that fixing bugs is expensive if they are not caught early. Continuous Integration has proven to be a valuable technique for improving software quality by finding problems quickly and letting developers know immediately when they have introduced a bug. This session demonstrates how you can implement CI for SQL Server databases, execute automated tests against your code, and inform developers immediately that they have caused a problem.
300
BI
How secure is your BI environment? The Microsoft business intelligence stack contains multiple tools which each have different security configuration options and interdependencies.   This session starts with a review of the security architecture of each component in the BI stack and highlights vulnerabilities in the architecture that must be addressed to properly secure your BI environment. In this session, you'll also learn about the relationship across the security settings in the BI tools, backend databases, and the Windows operating system. Building on this foundation, you'll learn what steps are necessary to apply security best practices in each component of the Microsoft BI stack.
200
DBA
The protection of your data is very important to many organizations these days. More and more companies and governments are calling for some type of encryption to be used on all systems, but especially in databases. This session will highlight the options in SQL Server for enabling encryption to protect your data. We will examine ways to secure your communication link with IPSec and SSL. This session will show how Transparent Data Encryption (TDE) can be used to protect your data files, log files, and backup files, while explaining the potential pitfalls of relying on this feature. The talk will also demonstrate practically how symmetric keys, asymmetric keys, and hashing can be used to protect your data at the column level.
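Enabling TDE follows a standard sequence; as a hedged sketch of the idea (the database and certificate names here are illustrative, not taken from the session):

```sql
-- Minimal TDE enablement sketch; MyDatabase and TDECert are illustrative names.
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<use a strong password>';
CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE protector certificate';

USE MyDatabase;
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE TDECert;

ALTER DATABASE MyDatabase SET ENCRYPTION ON;
-- One of the pitfalls alluded to above: back up TDECert and its private key,
-- or encrypted backups cannot be restored on another server.
```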
400
DBA
Do you know locking and blocking very well in SQL Server? Do you know how the isolation level influences locking? Perfect! Join me in this session for a further deep dive into how SQL Server implements physical locking with lightweight synchronization objects like latches and spinlocks. We will cover the differences between the two, and their use cases in SQL Server. You will learn best practices for analyzing and resolving latch and spinlock contention for your performance-critical workload. At the end we will talk about lock-free data structures: what they are, and how they are used by the new In-Memory OLTP technology that is part of SQL Server 2014.
300
BI
This session describes how to prepare an Excel workbook for publication to a Power BI site to fully exploit supported Excel reporting capabilities, and those of Power View, Power Q&A and Power Map.
 
Topics will include defining an intuitive Power Pivot data model with friendly names, synonyms, comments and data formats; enabling images; defining "automatic" calculated fields (measures) to provide end-user flexibility to aggregate data; and how to appropriately configure the data model reporting properties. Numerous demonstrations will show the before-and-after effects in Power View and Power Q&A when applying configurations to the data model.
 
In addition, supported native Excel reporting capabilities that will function in Excel Online (via a web browser) will be described, together with an explanation of how to appropriately configure the Browser View Options.
 
This session is relevant for Power Pivot data model developers and information workers interacting with Power View reports.
300
BI
Ever deployed an Analysis Services cube that worked perfectly well with one user on the development server, only to find that it doesn’t meet the required volumes of user concurrency?

This session focuses on tools and methodology for load testing Analysis Services in highly concurrent environments. We have a case study with some shocking results as to what you may have to do to configure and scale Analysis Services.

As bonus content, we also apply the same case study to load testing the Tabular model, showing how to compare and optimise its concurrency and scalability.

Sample source code and configuration notes will be supplied to help you load test Analysis Services. We will discuss both the MOLAP and Tabular models.
300
BI
Data mining is a technique used to derive previously unknown information from large amounts of data. The process of this knowledge discovery can help uncover new patterns within data and help analysts better understand large data sets. Additionally, once these patterns have been defined, you can use them as part of predictive modeling, estimating the likelihood of some event occurring.

SQL Server Analysis Services includes a data mining engine that can be used at various levels within an organization, from analyst to developer. This session will look at data mining as a method of data investigation for the business analyst and developer. We examine the business case, requirements and outcomes of data mining so that it can enhance our understanding of data sets. We will look at both simple and complex implementations of data mining that allow new information to be extracted from data and used in novel ways.
300
BI
SQL Server Analysis Services Tabular Model allows rapid and simplified data modeling of Business Intelligence solutions. However, an important aspect that is often overlooked when building a tabular model, is meeting real-time requirements. In this session you will learn how to tackle low latency (or close to real time requirements) using the different types of Data Access and Query Modes of Tabular Model and how your choice impacts your reporting options in the Microsoft BI stack.

Using live demos, this session will outline the advantages and disadvantages of DirectQuery versus In-Memory mode. We will also highlight impersonation and partitioning differences between the two modes. Finally, we will cover the definition of a Hybrid Tabular Model solution.
400
BI
Join me for an hour of playing with different ETL patterns by using Clustered Columnstore Indexes.

Using different hardware might lead you to different conclusions, and the size of the workload is always paramount to your performance.

Loading data first and then creating a Clustered Columnstore Index, or creating the Clustered Columnstore Index first and then loading - join me to find the answers!
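As a rough sketch of the two load orders being compared (table and index names are hypothetical, and the bulk-load step is elided):

```sql
-- Option 1: load into a rowstore heap first, then build the clustered columnstore.
CREATE TABLE dbo.FactSales_A (SaleDate date, Amount money);
-- ... bulk load here ...
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales_A ON dbo.FactSales_A;

-- Option 2: create the clustered columnstore first, then load into it.
CREATE TABLE dbo.FactSales_B (SaleDate date, Amount money);
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales_B ON dbo.FactSales_B;
-- ... bulk load here ...
```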
300
Dev
The prevailing opinion is that the optimizations for SQL Server are usually only done on the server itself. But is there anything we can do on the client to gain more speed? Because there are always at least two sides of the performance coin, in this session aimed at .Net and SQL developers, we'll dive into the workings of the .Net SqlClient and give you insight into way more than just SqlCommand.ExecuteReader() and SqlCommand.ExecuteNonQuery().
200
DBA
The pace of business accelerates fairly continuously and application development moves right with it. But we’re still trying to deploy databases the same way we did 10 years ago. This session addresses the need for changes in organizational structure, process and technology necessary to arrive at a nimble, fast, automatable and continuous database deployment process. We’ll use actual customer case studies to illustrate both the common methods and the unique context that led to a continuous delivery process that is best described as a pipeline. You will learn how to customize common practices and tool sets to build a database deployment pipeline unique to your environment in order to speed your own database delivery while still protecting your organization’s most valuable asset, its data.
300
BI
Give your queries a multidimensional makeover. In this session we'll look at the structure and basics of MDX, the MultiDimensional eXpressions language for querying Analysis Services OLAP cubes.

We'll start at the beginning, so you need no previous MDX experience, but it does help to understand what a cube is.

We'll walk through the difference between a member, measure, tuple, set and dimension. We'll describe how to distinguish a [] from a {} or (). We'll look at examples of each, and show how you can easily access the immense power of cubes with relative ease.
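To illustrate, here is a minimal MDX query against a hypothetical cube, showing a measure and a set of members on the axes and a one-member tuple as the slicer:

```mdx
SELECT
    { [Measures].[Sales Amount] }      ON COLUMNS,  -- a set containing one measure
    { [Date].[Calendar Year].MEMBERS } ON ROWS      -- a set of dimension members
FROM [Sales Cube]
WHERE ( [Product].[Category].&[Bikes] )             -- a tuple slicing the result
```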

300
DBA
UNIQUEIDENTIFIERs as primary keys in SQL Server - a good or bad practice? They have a lot of pros for developers, but DBAs just cry when they see them enforced by default as unique clustered indexes. In this session we will cover the basics of UNIQUEIDENTIFIERs, why they are bad and sometimes even good, and how you can find out whether they affect the performance of your performance-critical database. If they are affecting your database negatively, you will also learn best practices for resolving those performance limitations without changing your underlying application.
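One mitigation in the spirit of this session - keeping the UNIQUEIDENTIFIER key for the application while moving the clustered index to a narrow, ever-increasing column - might look like this sketch (all names hypothetical):

```sql
-- The app still uses OrderId; page splits are reduced because the clustered
-- index is on a narrow, monotonically increasing column instead of the GUID.
CREATE TABLE dbo.Orders (
    OrderId uniqueidentifier NOT NULL
        CONSTRAINT DF_Orders_OrderId DEFAULT NEWSEQUENTIALID(),
    OrderNo int IDENTITY(1,1) NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY NONCLUSTERED (OrderId)
);
CREATE UNIQUE CLUSTERED INDEX CIX_Orders ON dbo.Orders (OrderNo);
```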
300
Dev
This session will show you how a better understanding of how the Query Optimizer works can help you to improve the performance of your queries. I will show you the top 10 Query Optimizer topics that can give you the most benefit by focusing both on concepts and practical solutions. Several areas of the query processor will be covered, everything from troubleshooting query performance problems and identifying what information the Query Optimizer needs to do a better job, to the extreme cases where, because of its limitations, the Query Optimizer may not give you a good plan and you may need to take a different approach.
200
BI
In this session we will discuss Data Governance, mainly around the fantastic Power BI platform (but also around on-premises concerns). How do you avoid dataset hell? What are the best practices for sharing queries? Who is the famous Data Steward, and what is their role in a department or in the whole company? How do you choose the right person? Keywords: Power Query, Data Management Gateway, Power BI Admin Center, Data Stewardship, SharePoint 2013, eDiscovery
300
DBA
Windows Azure Virtual Machines provide a robust infrastructure for SQL Server, delivering the full benefits of an infrastructure-as-a-service offering in Microsoft data centers. SQL Server in a Windows Azure Virtual Machine enables both a low overall TCO and an efficient platform for enterprise-level workloads. This session will examine the critical facets beyond the mere provisioning of the SQL VM, and look at the characteristics and considerations for tuning and optimizing, plus the key indicators for monitoring performance. We’ll look at special considerations for high availability and disaster recovery, and pay a little special attention to scalability.
300
BI
This session describes how SQL Server 2012 Master Data Services can be used to implement Master Data Management (MDM). It introduces the discipline of MDM and maps common processes to the feature set of Master Data Services.
 
Topics include defining models, entities, attributes and hierarchies, to store and manage master data. Additionally, administrative tasks including business rules to validate data, the import and export of master data, model versioning, and permission management will be described. The two Microsoft user interfaces (the Master Data Manager web application, and the Excel add-in) will be covered, in addition to T-SQL scripting opportunities to automate processes.
 
This session will be of interest to data stewards, ETL developers and ETL administrators who want to appreciate what SQL Server 2012 Master Data Services can achieve.
300
BI
There are several patterns in SSIS that are easy and super useful - once you know them! In this session, you will learn how to create a performance testing framework to run multiple versions of your package, how to run packages in parallel, and a hashing algorithm based on FNV1a to vastly improve your load times.

Detail

In this session, you will learn a very basic pattern for performance testing different techniques, and Mark will demonstrate the performance impact of such basics as sorted and unsorted recordsets, fast parse and others using this pattern, and pull the results into PowerPivot.

Then, an alternate mechanism for creating IDs other than using an identity will be shown, based on using the FNV1a hashing mechanism in a script component, and the various methods of handling collisions will be detailed.

Prerequisites

Basic SSIS and T-SQL skills are required to follow the session. Having used a script component previously will be advantageous, but not required.
300
Dev
Learning SQL is easy, mastering it is hard. In this session you’ll learn simple but effective tricks to design your database objects better and write more optimized code. As an attendee you will gain a deeper understanding of common database development and administration mistakes, and how you can avoid them.

Ever thought that you were adhering to best practices but still seeing performance problems? You might well be. In this session I will be covering why the optimizer isn’t using all available processors, when the database engine fails to report all the resources a query has used, and why the optimizer doesn't always use the best plan.

You will leave this session with a list of things that you can check for in your environment to improve performance for your users.

300
BI
Collecting your geographical information might be fun - or it might actually serve as an alibi, or even prove your innocence. Join this session for an hour of data exploration around the favorite bars and coffee shops you check into on FourSquare.
400
Dev
DML is used in most cases without thinking about the multiple operations it triggers in the database engine. This session will take a deep dive into the internal storage engine, down to record level.

After finishing the theory (and along the way), the different DML commands and the tremendous operational work they cause for the database engine will be investigated. See what workload is caused by a "forwarded record", what tremendous workload occurs in a page split, and what happens when an existing record is updated in fixed-length attributes.

Become aware of the huge amount of transaction log that may be generated when a simple page split occurs, and learn techniques to avoid costly operations.
200
DBA
The best way to understand why your query is running slow is to look at the execution plan. But, knowing how to get started in execution plans, what to look for, what's important, can all be terribly confusing. This session will provide you with a simple set of tasks to get you started reading execution plans. You'll learn where to start, what to look for first, and you'll be better prepared to tune your queries. We'll also look at some methods you can use to write queries against the plans themselves in order to more easily and quickly identify potential issues within your plans. The information presented will be immediately applicable on the queries you have running back in the office.
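Querying the plans themselves typically means running XQuery over the showplan XML in the plan cache; as a hedged sketch of the idea (the MissingIndex test is just one example of what you might search for):

```sql
-- Find the slowest cached plans that carry a missing-index suggestion.
WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT TOP (10)
    qp.query_plan,
    qs.execution_count,
    qs.total_elapsed_time
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE qp.query_plan.exist('//MissingIndex') = 1
ORDER BY qs.total_elapsed_time DESC;
```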
300
Dev
For developers, the database is like a mythical land where data (un)happily lives on green pastures. For DBAs, the applications are annoying things that make their finely tuned servers lose their breath. Because both are involved in development, and to achieve harmony, we must learn about the whole request/query life cycle. In this session we'll take two queries, a read and a write, and look at the whole trip from the client through the server and back to the client. On this journey we'll try to identify where things might go wrong, thus helping to bring friendship between developers and DBAs back.
300
DBA
Quorum is one of the most misunderstood aspects of planning, deploying, and maintaining clustered implementations of SQL Server that use the AlwaysOn features: failover cluster instances (FCIs) and availability groups (AGs). The reality is that quorum is crucial for maintaining uptime, and without it properly configured, you can experience downtime even though you are clustered. This session will cover, across multiple versions of Windows, how to approach, configure, and manage quorum for both your FCIs and AGs, including how to force quorum if necessary.
400
Dev
Query optimizers are highly complex pieces of software, which even after 40 years of research, still face several technical challenges in some fundamental areas. As a result, there may be cases when even after you've provided the query optimizer with all the information it needs, you still don’t get an efficient execution plan for your queries. This session will show you the current challenges and limitations of query optimizers in general and the SQL Server query optimizer in particular, along with solutions or workarounds to avoid each of these problems.
300
DBA
For many organizations having a second data centre or co-location is not a viable option, either from a financial or logistical perspective. In the past, this limited options for disaster recovery (DR). Windows Azure virtual machines and SQL Server allow you to design and build highly available hybrid solutions, which bridge your data center to the cloud. This half-day session will showcase all of SQL Server’s DR features in this hybrid model, as well as environments built only in Windows Azure. You will learn about off-site direct to Azure backups from your local SQL Server. You will gain an understanding of the networking model within Azure. You will see demos of log shipping, replication, mirroring, and Availability Groups in a hybrid model. You will walk away with a solid understanding of AlwaysOn functionality within Windows Azure VMs, the costs, benefits and limitations of building a DR solution using Azure, and how Azure based backup and recovery works.
300
DBA
I have developed a stored procedure based maintenance solution that has become widely used in the SQL Server community (http://ola.hallengren.com). In this session I will go through how the solution works, how it can be used in different scenarios for backup, integrity check, and index and statistics maintenance in an enterprise environment, and how it fits into the new world of AlwaysOn, Hekaton, Azure, and SQL Server 2014.
200
Dev
All of us should have heard of Normalisation; collectively we agree that a database in 3NF is normalised. But did you know that there are also fourth, fifth and sixth normal forms, and that it doesn’t stop there, with EKNF, BCNF and DKNF? I won’t promise to make you a master of them all, but I do look to broaden your mind and to show why Normalisation is so useful. Time to bring tranquillity to your database design, and harmony between production DBAs and database developers. In this session I will take you through the first to third normal forms and beyond. You may think that with faster hardware and disk architectures it isn't needed as much, but when you scale it is just as important now as it ever was. Bloat your variant types and rows at your peril; just because you can have significant column counts in every table doesn’t mean you should.
200
BI
Come to this session to learn about the next wave of Microsoft’s analytic capabilities. Big data opens up great possibilities, but do you have the right skills as a data scientist? Learn what Microsoft is doing in the analytics space and how this can help you do complex analysis of your data.
200
DBA
This session talks about the new features for table partitioning in SQL Server 2014, such as Single Partition Online Rebuild, and of course - Incremental Statistics.

This is a session which is built on concrete practical examples of how you can improve your database availability by using those new features.
300
DBA
Experienced DBAs know that SQL Server stores data in data files and transaction log files. What is less commonly known is that the transaction log file is broken up into smaller segments known as Virtual Log Files, or VLFs. Having too many VLFs will cause performance to suffer. And having too few will cause backup performance to suffer. How do you strike the right balance? In this more advanced session, veteran DBA Mike Hillwig will dig into the transaction log and show you what VLFs are, how they're created, how to identify them, and how to strike the right balance between too few and too many.
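To see your own VLF counts, the long-standing (undocumented but widely used) command below returns one row per VLF; the shrink-and-regrow remedy is a sketch with hypothetical file names:

```sql
-- One row per Virtual Log File in the current database's transaction log:
DBCC LOGINFO;

-- If the count is excessive, a common remedy (after a log backup) is to shrink
-- the log and regrow it in sensible increments:
DBCC SHRINKFILE (MyDb_log, 1);
ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_log, SIZE = 8GB);
```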
300
Dev
Many computer systems involve stepping through a Workflow, or State Machine.  This session will look at a number of relational database design patterns, all of which come together in a database model for enforcing Workflow within the database in a way that is fully configurable.  Along the way we will look briefly at several possible designs for recording history, giving a passing nod towards Trees and Hierarchies as well.  After throwing in some advanced relational integrity concepts, we will bring it all together with some ideas for enforcing a State Machine in the database without the use of triggers, CLR assemblies and the like.  Sounds like pure, old fashioned fun for the whole family!
300
DBA
Database corruption is one of the worst things you can encounter as a DBA. It can result in downtime, data loss, and unhappy users. What’s scary about corruption is that it can strike out of the blue and with no warning, and without having some way of telling if a database has become corrupt, it could be that way for months or even years before anything gets done about it.

In this session we’ll look at:

  • Easy maintenance operations you should be running right now to ensure the fastest possible identification and resolution of corruption
  • Best practices for handling a database that you suspect may be corrupted
  • Actions that can worsen the problem
  • Appropriate steps to take and options for recovery
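The first bullet usually boils down to a scheduled integrity check plus page verification; a minimal sketch (database name illustrative):

```sql
-- Scheduled integrity check - the core of early corruption detection:
DBCC CHECKDB (MyDatabase) WITH NO_INFOMSGS, ALL_ERRORMSGS;

-- Ensure damaged pages are detected as they are read:
ALTER DATABASE MyDatabase SET PAGE_VERIFY CHECKSUM;

-- Pages SQL Server has already flagged as suspect:
SELECT * FROM msdb.dbo.suspect_pages;
```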
300
DBA
There are queries with which we usually play a cat and mouse game while trying to catch them! When a query performs badly every time it executes, it is easy to find and tune. But it can be difficult to isolate queries that have inconsistent execution behavior. What is the problem with those queries - do we need to rewrite them, apply plan guides, or create yet another index? Well, you need to find them first. In this session you will learn how to find queries with inconsistent execution behavior and how to tune them. Getting a predictable query response time is one of the main goals of performance tuning and optimization, and this session will definitely help you achieve that goal.
300
BI
Imagine taking historical stock market data and using data science to more accurately predict future stock values. This is precisely the aim of the Microsoft Time Series data mining algorithm. Of course, your objective doesn't need to be personal profit motivated to attend this session!
 
SQL Server Analysis Services includes the Microsoft Time Series algorithm to provide an approach to intuitive and accurate time series forecasting. The algorithm can be used in scenarios where you have an historic series of data, and where you need to predict a future series of values that is based on more than just your gut instinct.
 
This session will describe how to prepare data, create and query time series data mining models, and interpret query results. Various demonstration data mining models will be created by using Visual Studio, and in self-service scenarios, by using the data mining add-ins available in Excel.
300
DBA
You probably have a few PowerShell scripts sitting around, perhaps written from scratch, but most likely you borrowed the idea from a blog somewhere and then used those as templates to achieve your goal. It's time to take those ad-hoc scripts and turn them into your very own module. And while we're at it, we might as well add proper error handling, parameterization and pipeline support. I will also demonstrate how to build help, -Force and -WhatIf support.
This is a demo rich session and all demos will be covering practical SQL Server related solutions.
200
DBA
Indexing presents daunting challenges for even the most seasoned professionals, as it offers countless options to choose from. With a little help you’ll see how to simplify indexing in your environment and improve the overall performance of your SQL Server applications. In this session you will learn all about the different architectures of indexes and, from that, how to make the right choices when designing your indexes so that both the database engine and your DBA will love you for it. The session will also cover how to find missing and unused indexes, the causes of fragmentation and how to resolve it, as well as how to maintain your indexes after they have been deployed.

After attending this session you will have a much better understanding of how to create the right indexes for your entire environment, not just for that one troublesome query.
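Finding missing and unused indexes is commonly done via the index DMVs; a hedged sketch of the idea (treat the missing-index suggestions as hints, not commands):

```sql
-- Missing-index suggestions recorded by the optimizer:
SELECT d.statement, d.equality_columns, d.inequality_columns, d.included_columns,
       s.user_seeks, s.avg_user_impact
FROM sys.dm_db_missing_index_details AS d
JOIN sys.dm_db_missing_index_groups AS g ON g.index_handle = d.index_handle
JOIN sys.dm_db_missing_index_group_stats AS s ON s.group_handle = g.index_group_handle
ORDER BY s.user_seeks * s.avg_user_impact DESC;

-- Indexes that are maintained but never read (candidates for removal):
SELECT OBJECT_NAME(i.object_id) AS table_name, i.name AS index_name
FROM sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS u
       ON u.object_id = i.object_id AND u.index_id = i.index_id
      AND u.database_id = DB_ID()
WHERE i.type > 0
  AND (u.index_id IS NULL OR u.user_seeks + u.user_scans + u.user_lookups = 0);
```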
300
DBA
As DBAs, we have to manage a huge number of servers, deal with high volumes of changes and releases, and implement vendor products which need elevated levels of permission to run. As a result, all the control and scrutiny that we wish to exercise becomes hard, almost impossible.

Very often, this results in undesirable settings, sub-optimal setups or bad implementations creeping into our environment – all against best practices or our mandated guidelines.

Once they are in, it becomes extremely hard to fix or get rid of many of these issues and the reasons are wide ranging.

With this in mind, this session will explore ways for an administrator to implement solutions to prevent these issues from being introduced in the first place. Or if we cannot stop them, we should be notified about them as soon as they happen.

To accomplish this, we will look at Policy Based Management with Central Management Servers, SQL Server Audits and DDL Triggers.
300
DBA
Extended Events are replacing the old SQL Trace & Profiler, and there are many good reasons for that. In this session I want to demonstrate to you some of the coolest features and possibilities of this Tracing Framework, which is actually not so new anymore, having been introduced already in SQL Server 2008 and much earlier in Windows. If you want to find out how to trace in a flexible and lightweight way, get a callstack without running a Debugger, and do advanced analysis directly inside the GUI together with some background information such as why actions are “actions” and not just simply “columns”, this session is just for you.
400
BI
This session will cover the advanced topics that you need to know about when developing highly complex security solutions for your SSAS database. Those topics include cross-level dimension security, multiple role combinations and, foremost, dynamic security setups. All of them are designed for different business requirements, but no solution fits them all. The different approaches will be examined for their impact on caching, connection time and maintenance, and in the end you will probably understand why it can sometimes make sense to have 2000+ dynamic roles in your SSAS cube! A major part of the session will be dedicated to dynamic security using SSAS assemblies; especially for complex requirements this is often the last hope. This session will guide you through the most common and uncommon pitfalls that you will encounter and show how to work around them.
200
Dev
Does your data sit around mocking your best attempts to support good data practices? Databases are also bound by the GIGO rule: Garbage In is Garbage Out. In this presentation, Karen shows you examples of the types of mistakes, misunderstandings and outright cheats that lead to poor data quality, mistrust in IT systems and overall smelliness in our IT solutions-- using real-life evidence of her own data in your systems.

You'll have a chance to share your own data quality FAILs, too.
300
Dev
I am sure you all know that troubleshooting problems related to locking and blocking (hey, sometimes there are deadlocks too) can be a real nightmare! In this session, you will be able to see and understand why and how locking actually works, what problems it causes and how can we use isolation levels and various other techniques to resolve them! 
400
BI
DAX is super-fast; you can use it to query billions of rows in less than one second. End of marketing - now let’s go back to the real world. If your query is not performing as advertised, it is time to dive into the details of optimization, which means catching the query plan with the profiler, understanding what is happening under the covers, and rephrasing your query or reshaping your model to obtain better performance. In this live demo session, Alberto will start with a simple query and perform on stage all the steps necessary to optimize it, showing you the tools and the techniques used to identify the bottleneck and to fix the performance issues. In the meantime, there will be chances to dive into some of the internals of the xVelocity query engine to gain a better understanding of the optimization techniques in DAX.
500
Dev
Having a SQL Server solution for a problem does not mean the job is done; the next immediate issue is performance. Temporal queries that involve intervals are typically very IO- and CPU-intensive. For example, the test for overlapping intervals was solved with inefficient queries for years; however, a handful of solutions with fast queries have been developed recently. This high-level technical session introduces five different methods for writing efficient queries that search for overlapping intervals. These solutions can be applied to other temporal problems as well; in fact, the test for overlapping intervals is one of the most complex temporal problems.
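The logical overlap test itself is just two comparisons; the hard part the session addresses is making this predicate fast at scale (table and column names here are hypothetical):

```sql
-- Find all stored intervals that overlap the probe interval (@s, @e).
DECLARE @s datetime2 = '2014-01-10', @e datetime2 = '2014-01-20';

SELECT IntervalId
FROM dbo.Intervals
WHERE BeginDate < @e     -- the stored interval starts before the probe ends...
  AND EndDate   > @s;    -- ...and ends after the probe starts
```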
200
DBA
In this session we will look at the features provided with Microsoft SQL Server 2012 and 2014 as part of the "AlwaysOn" feature set, including site-to-site configurations that allow a large-scale high availability solution without the need for any high-end SAN storage solution.



Additionally we will be looking at the ability to have redundant servers which can be used for reporting or for taking your backups, reducing the load on the production database. We will also look at a unique use case using SQL Server 2012's AlwaysOn feature to scale out reads to synchronous read-only copies.
200
DBA
We're all told to watch our weight and to exercise more, so to flip this on its head I will be showing you exercises that you can do to watch your waits. You could even eat a Danish pastry while watching if you wish (Danish pastries not provided).

In this session you will learn all about SQL Server’s wait statistics - the statistics the database engine stores about the resources it is waiting on. Armed with this information, you as a SQL Server professional can make better informed decisions about which areas of your environment to tune for greater effect.

After attending this session you will know where to find, interpret and use this information to tie down problem areas in your SQL Server estate - not just to fix one problem query, but to improve overall performance for all of your users.
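A common starting point for watching your waits is the aggregate wait statistics DMV; the exclusion list below is only a tiny illustrative sample of the benign wait types you would normally filter out:

```sql
-- Top waits since the last restart (or since the stats were last cleared):
SELECT TOP (10) wait_type, wait_time_ms, waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP', 'BROKER_TASK_STOP')
ORDER BY wait_time_ms DESC;
```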
300
DBA
It's 18:00 on Friday. You execute a process that usually takes a few minutes and start packing up to begin the weekend. You already have your bag on your back, but the process doesn't finish. You start cursing yourself for not waiting until Monday, take off your bag and start searching for the problem. What should you do now?
In this hands-on session, we will go over the ways SQL Server gives us for tracking progress of processes and queries, and identifying bottlenecks in real-time. Among other topics, we will talk about the percent_complete column, how the CXPacket wait type can help us, to Rollback or not to Rollback, and how the new sys.dm_exec_query_profiles DMV can help us.
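As a taste of two of the topics above (a hedged sketch; note that percent_complete is only populated for certain operations such as backup, restore, DBCC checks and rollbacks, not for ordinary queries):

```sql
-- Operations that report an overall completion percentage:
SELECT session_id, command, percent_complete, estimated_completion_time
FROM sys.dm_exec_requests
WHERE percent_complete > 0;

-- SQL Server 2014: live per-operator row counts for sessions that opted in
-- (e.g. via SET STATISTICS PROFILE ON):
SELECT session_id, physical_operator_name, row_count, estimate_row_count
FROM sys.dm_exec_query_profiles;
```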
400
DBA
In this session we will start with a domain and build a working two-node SQL Server 2012 failover cluster using Windows 8 Hyper-V and a trial download of Windows Server 2012 R2. The build will make use of a number of PowerShell scripts, and an explanation of the components of the SQL Server cluster will be covered.
300
DBA
It’s often difficult to know how your SQL Servers will perform under different loads. By performing load testing, we can gain these key insights, make modifications to existing configurations, and understand the impact on performance levels. Come learn about the native tools at our disposal for performing these important load tests and how we can identify when performance levels begin to drop. Using demos of these native tools – including the Distributed Replay Utility (DRU), Database Engine Tuning Advisor (DTA), Perfmon, Extended Events, and Profiler – we’ll see how to plan and perform a load test project, gain an understanding of SQL Server’s performance under varying load scenarios, and discover which tell-tale indicators can help alert us to performance degradation.
300
DBA
Join Tim as he delves back into the Periodic Table of Dynamic Management Objects (thesqlagentman.com/go/periodic2012/) to show you just how much data-centric chemistry you can conduct with your SQL Server metadata. The Periodic Table of Dynamic Management Objects is a reference tool for these functions and views that have become so critical for today's SQL Server DBA to performance tune and gain insights into their various SQL instances. In navigating the table we will examine key DMVs and DMFs from 2014 as well. 
300
BI
Power BI offers many tools to explore a data model created in Power Pivot, such as Excel, Power View and Q & A. You can improve the user experience and the accessibility of information by creating the data model in a proper way. In this session, you will learn the best practices in data modeling for Power Pivot in order to improve the browsing experience in Excel and Power View. You will see how important it is to name tables in the correct way, to define the correct set of relationships, and to use good synonyms so that Q & A can provide the right answer to users’ questions. If you do not follow the best practices, you might end up with an approach that is good for Excel and Power View but that does not perform as well with Q & A. This session will clarify these issues and guide you to the correct set of choices.
200
Dev
Data architects and designers need to understand the logical, physical, and technical differences in designing for Windows Azure SQL Databases (WASDs) and traditional on-premises SQL Server databases.

In this session we will discuss the reasons why some business problems are better suited for cloud-based solutions and why others might still be best hosted at home.

We'll review the concepts that still work in both and the features that need to be tailored to each target environment. You’ll see demonstrations of the database creation and initial design processes and gain best practices for model-driven development for each environment, including tool support. We’ll finish up with 5 tips for designing databases for both WASD and SQL Server.
100
DBA
These days, I've stopped counting how many times I've been approached to help implement feature X because a customer thinks/was told it provides 24x7 availability for a DB/instance. These decisions often lead to more downtime and less uptime if the technology choice was not right or cannot be administered by the current staff. The secret sauce is not what gets implemented but understanding everything behind the scenes that influences the final architecture based on your actual requirements. It’s less about technology and more about understanding what it will take to keep your business going even when things seem like they are crumbling down around you. This session will cover how to approach achieving business continuity with the right amount of uptime.
300
DBA
The audience will see the benefits and drawbacks of using FILLFACTOR in different scenarios: the deep impact that a wrong FILLFACTOR may have on their applications, as well as the speed-up that different workloads can gain when the right FILLFACTOR is chosen for the objects in the database.

We will do a deep dive into the dependencies between indexes and the buffer pool, and locate the impact of bad indexes and wrong FILLFACTOR usage on the buffer pool.

This is a demo-driven presentation (75% demos).
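As a taste of the demos, a FILLFACTOR is set per index at build or rebuild time, and page fullness can then be inspected (the table and index names here are made up for illustration):

```sql
-- Leave 10% free space on each leaf page to absorb future inserts/updates
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders
REBUILD WITH (FILLFACTOR = 90);

-- Inspect the configured fill factor and the current page fullness
SELECT i.name, i.fill_factor, ps.avg_page_space_used_in_percent
FROM sys.indexes AS i
CROSS APPLY sys.dm_db_index_physical_stats(
        DB_ID(), i.object_id, i.index_id, NULL, 'SAMPLED') AS ps
WHERE i.object_id = OBJECT_ID('dbo.Orders');
```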
300
BI
My first impression of MDX was positive: it looks like SQL (SELECT .. FROM .. WHERE). At second glance it turned out to be completely different from SQL. But after a while, SQL and MDX started to feel similar again. In this session I bring you to the point of seeing the similarities instead of the differences. We will use your SQL experience to give you a head start with MDX. The session is also good for those with a bit of experience who want to know more about the background. Of course all theory is backed by demos.
300
BI
This session goes beyond classical star schema modeling, exploring new techniques to model data with Power Pivot and SSAS Tabular. You will see how brute-force power in DAX allows different data models than those used in SSAS Multidimensional. You will see several practical examples, including creating virtual relationships (without physical relationships in the data model), dynamic warehouse evaluation without snapshots, dynamic currency conversion, the number of events in a particular state for a given period, surveys, and basket analysis. The goal is to show how to solve classical problems in an unconventional way.
300
DBA
Data Warehouses often struggle with performance due to their large data volumes and large analytic queries, yet modern business demands real-time analytics against growing volumes of data. Columnstore indexes and batch mode query processing were introduced in SQL Server 2012 and were a performance game changer. However, the 2012 implementation of columnstore did not support direct updating and inserting of data, and other restrictions such as limited data types made the feature less valuable. SQL 2014 removes those limitations—you can use a columnstore as a clustered index (saving valuable disk space), and perform DML directly against the index. More query operators support batch mode, which means more queries will see performance benefits. You will understand the changes for 2014—updateable columnstores may cause changes to your ETL process design. You will see the power of columnstores in analytic queries, their challenges, and how to architect them into a DW design.
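For instance, in SQL Server 2014 a columnstore can be the clustered index and then be updated directly (the table name here is hypothetical):

```sql
-- 2014: columnstore as the clustered index, replacing the rowstore entirely
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales ON dbo.FactSales;

-- Direct DML is now allowed; new rows land in a deltastore rowgroup
-- until the tuple mover compresses them
INSERT INTO dbo.FactSales (DateKey, ProductKey, Quantity, Amount)
VALUES (20140301, 17, 3, 59.97);

UPDATE dbo.FactSales
SET Quantity = 4
WHERE DateKey = 20140301 AND ProductKey = 17;
```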
200
BI
In this demo-heavy session you'll learn how Power Query can meet the day-to-day requirements for data integration that we're increasingly seeing as data moves to the cloud.

We'll take a typical business scenario and apply Power Query to source and transform data held on premise and in 3rd party web applications.

Outline
1. Use cases for ad-hoc data integration
2. Introduction to Power Query
3. Alphabet Soup and M
4. Acquiring XML using Restful services
5. So what about SSIS?
300
DBA
It's probably rare that you have to install a SQL Server instance manually. You most likely have some unattended install scripts ready, and perhaps even a PowerShell script which configures the newly installed SQL Server according to your (corporate) standards.
But "at home", developers can click a few buttons in Windows Azure and have a SQL Server available within minutes. They are starting to be less impressed with your one-day process. In this session I'll walk you through the steps you can take to create your own private cloud, including rolling out sysprepped SQL Server VMs and giving your internal customers self-service capabilities without losing control over your (corporate) standards. Did you know that you can even give your internal customers a user interface which looks exactly like the Windows Azure interface?
200
DBA
With the introduction of SQL Server 2014 the line between SQL Server running in your data center and running in the cloud is becoming more and more blurred.  In this session we will review the features which are available with SQL Server 2014 and which integrate with Windows Azure.  After reviewing the available features we'll look at how to configure these features and how to build these features out in the real world to reduce your data center footprint quickly and easily.
300
BI
This session is an introduction to the Apache Hadoop framework and its benefits for processing large volumes of data - all from the perspective of a SQL Server professional. What are the challenges in the RDBMS world? When do we need to work with large data sets (multi-terabyte)? Why do we need a new data framework?

I will present an overview and comparison of several commercial Hadoop distributions (Cloudera, MapR, HortonWorks, Microsoft, IBM, Intel, and EMC-Greenplum), and will discuss Hadoop features, components and extensions. I will show what is available to start a small proof-of-concept Hadoop project, including hardware, software, network, installation, configuration, testing, and tuning details. We will walk through several demos on a small desktop, 3-node Hadoop cluster with data access via ODBC from familiar Windows tools (Excel, SQL Server Integration Service, SQL Server Reporting Services, etc.).
400
BI
Not many people have heard of BIML, and even fewer have used it in anger, which is why they are all missing a trick. We will demonstrate how to automate the dull repetition out of ETL/SSIS projects with the free BIML tools that come with BIDS Helper. We will also outline how you can build a metadata-driven ETL solution that's flexible, reliable and will save you an enormous amount of time and energy when implementing a new ETL solution.
300
DBA
Every solution must be an operational success, yet knowing what needs to be done to make this happen can be difficult compared to meeting its more obvious functionality objectives. This is where quality attributes, also known as non-functional requirements, can help you define your operational readiness by identifying common quality attributes such as availability, manageability, scalability and security. This session will show how you can use SQL Server 2014’s features after having formally defined your criteria for operational success, and its demos and content are relevant to both database developers and database administrators.
200
DBA
In the recent past, data was secure, or at least it appeared to be. What we now know to be true is something quite different. In this session, we will explore Data Security and the implications of making both the data secure and failing to make the data secure. We will cover the fundamentals of what it takes to make data secure right through to practical delivery. This information is vital to the business owner and decision maker, it is what will keep them in business and out of legal difficulties.

Important information for Senior Managers and Decision Makers
  • Making data security real
  • The cost of getting it wrong
  • The gains of getting it right
Providing information to senior management that is comprehensible and useful has always been a challenge for technical experts. We must bridge this gap, and in this session we will.
300
BI
The goal of BI solutions has always been to deliver insights into business processes and enable further analysis on top of this data. One of the most common analyses always was, and still is, customer analysis. With the release of Power BI this no longer requires any server-side solution; it can be done directly in Excel using the Power BI tools, foremost Power Pivot. This session will show straightforward approaches to how typical customer analyses like basket analysis, ABC analysis and many others can be implemented in Power Pivot using DAX very easily. One main focus is also to demonstrate how simply rephrasing business cases can help you better understand how specific problems can be solved in the tabular world of Power Pivot and Analysis Services.
300
BI
Do you know that great feeling when you are struggling to find a formula, spend hours writing nonsense calculations until a light turns on in your brain, your fingers move rapidly on the keyboard and, after a quick debug, DAX starts to compute exactly what you wanted? This session shows many of these scenarios, spending some time looking at the pattern of each one, discussing the formula and its challenge and, at the end, writing the formula. Scenarios include non-trivial examples, like time intelligence with an ISO-week calendar, budget analysis and comparisons, same period last year with multi-selection of non-contiguous periods, distinct count over slowly changing dimensions and others. A medium knowledge of the DAX language will let you get the best out of the session.
300
BI
More frequently these days you will find that DBAs are required to be familiar with data warehousing, and in some cases they are expected to know how to build a data warehouse. Building a data warehouse alone is a challenging project, yet it’s still an expectation set by many employers. In this session, we’ll cover the core functional concepts of how to build a data warehouse and load it with SQL Server Integration Services (SSIS). With these core concepts, you’ll be able to build a demonstration for management to prove your technical expertise.
100
Car
As Information Technology Professionals we're not known for being a shiny, happy bunch. That's why when we do polish our professional communication skills we stand out in the crowd. It was not that long ago that Tim Ford was just another geek at the back of the conference hall.  Now he runs his own consulting company, is the Lead DBA for a healthcare center of excellence, SQL Track Leader for DevConnections, Founder of his own line of training events... on Cruise Ships, and Director with the Professional Association for SQL Server.  Join Tim as he shares tips on why he's now a go-to resource for building technical events and how you too can go from just another face in the crowd to a Leader, Disruptor and Aspirational "Inspirer" in the tech community, along the way raising your self worth and confidence.
300
Dev
SQL Server 2014 introduces Extreme Transaction Processing, a brand new memory-optimized data management feature targeting OLTP workloads. In this session you will learn about two new, and not very well known, features that you can use to share sets of data between modules - either within the database, or between client applications and databases:
  • Memory-optimized Table Variables; and
  • Memory-optimized Table-valued Parameters.
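A minimal sketch of the first of these (it requires a database with a MEMORY_OPTIMIZED_DATA filegroup; the type and column names are illustrative):

```sql
-- A memory-optimized table type; variables of this type live in memory
-- and avoid tempdb entirely
CREATE TYPE dbo.OrderIdList AS TABLE
(
    OrderId INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1024)
)
WITH (MEMORY_OPTIMIZED = ON);
GO

DECLARE @ids dbo.OrderIdList;
INSERT INTO @ids (OrderId) VALUES (1), (2), (3);
SELECT COUNT(*) FROM @ids;
```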
200
BI
In this demo-only session we will start out with a blank Excel workbook. Using Power Pivot we will build a powerful model to analyze our data. Next we will use Power Query to add data to the model, but not before we cleanse the data to make it useful. With a sound model in place we will visualize the data using Power View and Power Map.
500
Dev
The query optimizer is at the heart of SQL Server. Without it SQL Server would be a vastly inferior product, queries would have to be manually tuned at each and every turn, and generally speaking, the optimizer protects us from the complexities and mechanics involved. Much of the optimizer's internal workings are hidden from the user, but can be revealed by using a selection of undocumented trace flags to gain further knowledge and insight into how your queries and data are processed to create a plan.

This session will be a deep dive into the optimizer's internals and is not for the faint of heart.
200
DBA
According to recent findings, 65% of software developers, managers and executives report that their organizations have started down the path to Continuous Delivery. One element sets up a real challenge for continuous delivery automation: the database. Scripting database object changes into traditional version control, or using 'compare & sync' tools, is either inefficient or plain risky to automate, as the two approaches are unaware of each other. A better solution had to be found.

Continuous delivery for the database should follow the proven best practices of change management: enforcing a single change process over the database, and enabling deployment conflicts to be dealt with so as to eliminate the risk of code overrides, cross-updates and code merges, while plugging into the rest of the release process.
400
BI
You already know that you can accomplish a lot within the Power Query user interface. If you are a data steward, however, you may be asked to deliver complex data sets in Power Query that require knowledge of advanced functionality, including the M language. This session will examine what these requirements
might be, how to implement them, and how best to deploy them to end users. 

In this session you will learn how to create common types of calculations and transforms in M, how to best use functions, how to work with multiple data sources, how to call web services, what the best practices for loading data into Power Pivot with Power Query are, and lots more. 
300
BI
DAX is still relatively new for many BI practitioners, but it has now been in use for more than 4 years, including in large installations that push the xVelocity in-memory engine to its limits. In this session, we will share some of the lessons learned from the field, where real customers have problems harder to solve than Adventure Works scenarios. How do you make a database fit in memory if it does not fit? How do you handle billions of rows with complex calculations? Can you perform reporting in any time zone, with time intelligence, in an efficient way? What tools do you use to benchmark and choose the right hardware? How do you scale up performance on small and large databases? What are the common mistakes in DAX formulas that might cause performance bottlenecks? These are just a few examples of the problems that you will see in this session. You will learn the solutions found and how far you can push the limits of the system.
400
BI
You have worked with SQL Server Integration Services (SSIS) for a few years, but more often than not, you see performance problems with your packages. Some packages are running slow, while others are taking up more memory and CPU than before.

In this session, you will learn the internals of SSIS and why a deep understanding of them is important for solving performance problems. You will also learn how the control flow engine and data flow engine work, what impact buffers have on performance, and how execution trees are related to parallelism.
200
BI
Self Service BI is all about empowering everyone to gather and analyze data, so they can make informed decisions quickly. Having the data available to consume in an accessible manner is a key success factor for making “BI to the masses” really happen.

Power BI Q&A is a new feature that uses natural language for querying data models. The end users do not need to learn any new language and will instantly see the data visualization that gives an answer to their question.

In this session you will learn:
  • How Q&A works behind the scenes to find the best answer.
  • The types of questions that are supported for sorting, filtering, grouping and aggregation.
  • How the keyword search capabilities will use the context to resolve ambiguity.
  • Whether data normalization is a good idea for Q&A.
  • To what extent the data values within the data sets are recognized for your questions.
  • How and why to configure synonyms, data types and categories for your columns.
  • Why you should configure Default Field Set and Default Label for your tables.
  • How to set up data security for Q&A.
  • How the end user can share their question.
300
DBA
As DBAs, we all know about the big new features, like AlwaysOn Availability Groups, Columnstore Indexes and In-Memory OLTP ("Hekaton"). But there are other "small" hidden features you may have not heard about that can make your lives a lot easier. In this presentation, we will go over some of the cool "small" features that were introduced in SQL Server 2012 and SQL Server 2014, and see how they can help us.
Among other topics, we will talk about:
  • Changes in the storage world and how you can leverage them
  • Buffer Pool Extension to SSD drives
  • IO throttling using Resource Governor
  • Parallelism improvements 
  • The new Cardinality Estimator
  • Partition Switch and Online Index Operations improvements
  • Online Schema Changes
  • Features that can boost your productivity when working with SSMS
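To give a taste of two items from the list, SQL Server 2014 adds IO throttling to Resource Governor and lets the buffer pool spill onto SSD (the pool name, file path and sizes here are hypothetical):

```sql
-- 2014: cap physical IO per volume for a resource pool
CREATE RESOURCE POOL ReportingPool
WITH (MIN_IOPS_PER_VOLUME = 0, MAX_IOPS_PER_VOLUME = 500);
ALTER RESOURCE GOVERNOR RECONFIGURE;

-- 2014: extend the buffer pool onto an SSD-backed file
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
    (FILENAME = 'F:\BPE\sqlbpe.bpe', SIZE = 16 GB);
```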
200
BI
R, the statistical programming language, is another open-source program making inroads into the Microsoft world. 


R was originally for doing stats, but it now does database CRUD, reporting, documentation, and even websites.  It can be awkward to know where to start, so this session fast-tracks you to the sections you need to know about, to save yourself pains you didn't even know you had.
400
DBA
What doesn't SQL Server reveal? Every DBA knows that index fragmentation is bad for performance and that indexes need to be rebuilt from time to time – that's basic maintenance knowledge. But how many of us know that an index rebuild might cause degradation in performance? It turns out that rebuilding indexes is not enough to eliminate fragmentation. In this session we will deep-dive into the core structure of data files and understand the difference between physical and logical fragmentation. Additionally, we will reveal why SQL Server's fragmentation counter is misleading. A new methodology will be shown to truly eliminate fragmentation and ultimately boost whole-system performance.
300
DBA
Why upgrade? This is the big question that every SQL Server user will be asking. From my experience, I say: not just for the shiny new features, but to increase your productivity and do more with less. The Data Platform Upgrade topic has been a popular session that I've presented at major conferences like Microsoft Tech-Ed (North America, Europe & India) and SQLbits since 2008. In this session, we will overview the end-to-end upgrade process, covering the essential phases, steps and issues involved in upgrading SQL Server 2000, 2005, 2008 R2 & SQL Server 2012 (with a good overview of 2014 too) using best practices and available resources. We will cover the complete upgrade cycle, including the preparation tasks, upgrade tasks, and post-upgrade tasks. Real-world examples from my consulting experience expand on why and how such solutions are offered.
200
DBA
Let’s talk about the elephant in the room. Like the majority of the SQL-Bits community, I have an extensive Microsoft product background, with years of experience on the stack. Like the majority of the SQL-Bits community, I’m being confronted more and more with talk of Big Data implementations. Tales of how big data will swoop in and change everything we know about databases. I’ve spent the last 6 months deep-diving every Hadoop implementation I could get my hands on (even paid for some training) to try and dispel some common misconceptions I had about the subject. Here is what I found:

1) Big Data complements your SQL Server; it does not replace it.
2) I don’t need to be a pro at Linux to use big data.
3) I don’t need to learn complex new languages – not when standard ANSI-92 SQL will do.
4) I don’t need to spend “all” my time in the dull, black command prompt. Today we use GUIs!
5) Forget about SSIS – after you see Talend, you might never go back.
6) The most popular Hadoop distributions that everyone is talking about are not necessarily the best fit for me and my current SQL skills. Let’s talk Cascading & Lingual.

By the end of this session, I hope to share enough with you so you feel comfortable enough to take your first Big step into new data.
400
Dev
At the heart of SQL Server is the cost-based optimizer. Stop and think about that for a minute: it attempts to give the “best plan” based on the cost of the work undertaken.

How does it know the cost of the work before it's done the work? This isn't a conundrum; it doesn't. It estimates! How does it estimate? That is statistics.

This will be a deep dive into how the optimizer makes its decisions to give you a plan, the things that can go wrong, and how you can influence these choices.
400
Dev
SQL Server Integration Services (SSIS) includes several of the most important components required in data acquisition, transformation, and load operations. If the problem at hand demands a solution that cannot be solved using the built-in components alone, you can resort to the freely extensible Script Component; but when the circumstances call for a more complex and more robust solution that will be deployed to more than one destination server, the more appropriate alternative is to design a Custom Component. In this session you will learn how this can be achieved, what advantages it provides, and how to perform even the most complex data transformations in a standardized and reliable environment.
200
DBA
Have you considered consolidation? Should you consolidate on physical or virtualised hardware? Should you use VMware or Hyper-V? What are the licensing implications? Should you consolidate the server, instance, or data? Do you need to save money within your SQL estate? Would you like a more manageable solution for DBAs? How will this affect the business?
All of these questions will be addressed as part of a session which presents common questions asked by DBAs, architects, CTOs and CFOs, who all have a say in how your database should be implemented and managed.
200
Dev
So your CTO wants you to migrate your mission critical database application to the cloud. What's involved and what are the pitfalls?


We will take a sample ecommerce application and database, show the tools available for both schema and data migration, and demo the performance, scalability and monitoring impacts.


This session is a wide look at both application and data platform options when migrating to Azure covering IaaS, PaaS and how on premises application features may map to Azure offerings and architecture.
300
BI
This session will look at ColumnStore Indexes (introduced in SQL Server 2012) and Table Partitioning.  Expect examples to be in evidence as we review each of the two technologies separately before combining them and examining how they can be used together to good effect.  The benefits and pitfalls of each will be discussed, as will the new features introduced in SQL Server 2014.
200
DBA
We DBAs know how important it is to collect performance data. In this session, I will show you how to properly use the native Get-Counter PowerShell cmdlet to get all the information you need from your servers. It has never been so easy to collect performance counter data and save it to a CSV file or a SQL Server table for baselining and later analysis. You may be thinking: "But I don’t know anything about PowerShell or explicitly using .NET classes in my code". Don’t worry; you don’t need to know this to collect counters. Although this is a 100-level session, you’ll be producing 400-level results, and your scripts will scale as the number of servers increases, because we’ll use asynchronous collection and processing. And better yet, we’ll schedule all of it. All this using native PowerShell cmdlets and a minimal amount of script code.
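As a first taste of the approach (the counter paths and output file name are illustrative, and the SQL Server counter set name varies by instance):

```powershell
# Collect three 5-second samples of two counters and export them to CSV
$counters = '\Processor(_Total)\% Processor Time',
            '\SQLServer:Buffer Manager\Page life expectancy'

Get-Counter -Counter $counters -SampleInterval 5 -MaxSamples 3 |
    Export-Counter -Path 'C:\Temp\baseline.csv' -FileFormat CSV -Force
```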
300
Dev
Deployment to production is typically a manual process and can take a lot of time. A tempting and faster alternative, but also a more dangerous one, is to let developers make changes directly to the production database. Many companies use continuous integration for testing their software and continuous deployment to push out code to production in short cycles. But what will it take to do the same for SQL Server development?

In this session, you will learn about the technical perspectives of continuous integration and continuous deployment. You will also learn how you can use SQL Server tooling for Visual Studio 2013 and Team Foundation Server 2013 to move to a more continuous deployment model and deliver changes to your database more frequently.
200
BI
In this session we will walk through how we collected States of Jersey Hansard transcripts from the web, analysed them using HDInsight and loaded them into a data warehouse to be queried and visualised.

The transcripts are unstructured, free-text documents with all the errors and inconsistencies a human can devise! So how do we do it? How can we impose some structure and turn it into something we can work with?

Technologies we will cover include:

  • HDInsight
  • MapReduce
  • Hive
  • Data Quality Services
  • SQL Server Tabular/DAX
  • Python

This session will give you an introduction to using these technologies and help you to understand how you can use them and how to get started.
200
Car
It is rare to do a single job. In the modern world, we find that the number of roles we have grows year by year. It is through this mechanism that you can find yourself in the position of being a project manager.



How, then, can we best navigate this series of pitfalls?



In this session, you will gain an understanding of how to get the best results from a professional project manager. You will also come to understand some of the pitfalls which can easily be avoided if you are the accidental project manager. Clearly, a 1-hour slot is not enough to teach you to be a project manager. We will, however, give you the basic tools so that you can succeed.



The goal is to help you do your job better by worrying less, and to give you the confidence to interact with a project manager, or to know the steps needed and avoid the traps that are so easy to fall into.
200
Dev
As businesses strive to extract more value out of their information assets, data professionals are being tasked with assessing the fit of cloud-based solutions to current and future business problems.

In this session we will present the types of business scenarios that best make use of Windows Azure features, what Cloud offerings are now available, and how they can be best used in hybrid solutions.

We'll talk about the costs, benefits, and risks that data professionals need to understand to make the best recommendations for solution architectures.

Attend this session and learn more about how your role as a data professional is changing and the steps you can take now to secure your future.

We will end with 7 tips for architecting better solutions in a Cloudy world.
100
Car
Ever wanted to convince the boss to try something new, but didn't know where to start?  Ever tried to lead your peers, only to fail to achieve your goals?  This session teaches you the eight techniques of influencing IT professionals, so that you can innovate and achieve change in your organization.
1. Learn about the fundamental difference between influence and authority and how you can achieve a high degree of influence without explicit authority.
2. Learn the eight techniques of influencing IT professionals, when to apply them, and how to best use them.
3. Discover the communication and procedural techniques that ensure your ideas get a hearing by bosses and peers, and how to best win support for them. 
Prerequisites: Basic interpersonal communication skills and command of the English language.   

200
BI
Data mining is one of the key hidden gems inside Analysis Services, but it has traditionally had a steep learning curve. In this session, you’ll learn how to create a data mining model to predict who your best customer is, and learn how to use other algorithms to spend your marketing budget wisely. You’ll also see how to use Time Series analysis for budget and forecast prediction. Finally, you’ll learn how to integrate data mining into your application through SSIS or custom coding.
300
DBA
You want to consolidate your SQL Server estate but you don’t know where to start. I will demonstrate a methodology of how to do this with the MAP tool and support this with success stories within the field.
300
DBA
As most DBAs will be only too aware, it's not a lot of fun identifying the cause of production issues on a server that has not had any formal diagnostics or tracing running against it, but all is not lost.

Starting back in SQL Server 2005 the default trace quietly gathers data on key events which can be vital in resolving or at least narrowing down production issues.

Don't believe the bad press this great little tool has been given to date, because in this session we'll go through multiple examples which are based on real world scenarios where I have used the default trace to resolve a number of issues; we'll go through resolving problems as diverse as failed logins and tempdb filling up to identifying who made server configuration changes and DDL changes.

After this session you'll be armed with the knowledge you need to fully exploit the default trace and look at it in a whole new light!
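As a preview, the default trace can be read directly with fn_trace_gettable:

```sql
-- Locate the default trace file and read its most recent events
DECLARE @path NVARCHAR(260);
SELECT @path = path FROM sys.traces WHERE is_default = 1;

SELECT t.StartTime, te.name AS event_name, t.DatabaseName,
       t.LoginName, t.HostName, t.ObjectName
FROM sys.fn_trace_gettable(@path, DEFAULT) AS t
JOIN sys.trace_events AS te
    ON t.EventClass = te.trace_event_id
ORDER BY t.StartTime DESC;
```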
300
Dev
In this session we will take an in-depth look at statistics: how they are built, stored and used. We will go over why they are so important and what role they play in selecting a query plan. We will also look at estimation, how it affects query plan selection, and how estimation and statistics tie together.


Also in this session we will take a look into plan caching, what it is, why it is done and how it can be both a help and a hindrance.


Using lots of demos and examples, attendees will leave the session with a greater understanding of these three key areas, and also the tools and techniques to work with and understand them in their own environments.
300
Dev
Sometimes the default error messages returned by SQL Server are confusing at best, and completely misleading at worst.  If dynamic SQL is introduced into the mix, the picture can get murkier and murkier.  The project plan rarely allows time for developing a robust approach to error handling, and attempts by developers to introduce a modicum of consistency are often rebuffed by the complexities involved.  This session will peep under the bonnet at some of the considerations to take into account when planning your error handling within your application's stored procedures.  Topics covered will include TRY ... CATCH blocks, transactions, throwing errors and making your error messages informative and useful.
400
Dev
Parameters are a fundamental part of T-SQL programming, whether they are used in stored procedures, in dynamic statements or in ad-hoc queries. Although widely used, most people aren't aware of the crucial influence they have on query performance. In fact, wrong use of parameters is one of the common reasons for poor application performance.
In this session we will learn about plan caching and how the query optimizer handles parameters. We will talk about the pros and cons of parameter sniffing, as well as simple vs. forced parameterization. But most importantly, we will learn how to identify performance problems caused by poor parameter handling, and we will also learn many techniques for solving these problems and boosting your application performance.
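To illustrate the kind of technique the session covers, a minimal sketch of two common mitigations for parameter sniffing (the procedure and table names are hypothetical):

```sql
-- A plan compiled for one parameter value may be reused for very
-- different values. OPTION (RECOMPILE) forces a fresh plan per call;
-- OPTIMIZE FOR pins the plan to a representative value instead.
CREATE PROCEDURE dbo.GetOrdersByCustomer
    @CustomerId int
AS
BEGIN
    SELECT OrderId, OrderDate, Total
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId
    OPTION (RECOMPILE);  -- or: OPTION (OPTIMIZE FOR (@CustomerId = 42))
END;
```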
300
BI
In this session, we will walk through best practices learned in the world of huge amounts of data. Imagine 150M rows per day (!!!), and you must save the full history of the data. Partitioning, server configuration, caching data...
300
DBA
All about trace flags in SQL Server - a must-see session for all DBAs. I will describe how to use the flags, when to use them, what the benefits are, and what the possible threats are.
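For readers new to the topic, a minimal sketch of the basic commands involved (trace flag 1222 is one well-known example; it writes deadlock graphs to the error log):

```sql
-- Check which trace flags are active, then enable one globally
-- (the -1 argument) and turn it off again afterwards.
DBCC TRACESTATUS(-1);    -- list all globally enabled trace flags
DBCC TRACEON(1222, -1);  -- global: report deadlock graphs in the error log
DBCC TRACEOFF(1222, -1);
```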
300
Dev
Web developers have lots of great tools for simulating times of stress... BUT how do we stress test the DB, the critical component in lots of processes? Has DB stress testing been neglected?

Microsoft's load testing tools work perfectly - and in 'point and click' fashion - for SQL Server too, and they should be part of your armoury if you manage or develop any form of critical SQL DB.

Come to this session and see how to simply and quickly apply some stress to your DB in a controlled environment _before_  that happens in production and the finger-pointing starts. Front-end developers have proved their code - can you say the same for the DB?
200
DBA
Excel can be an incredibly useful tool in your toolbox, but like all things in there you don't want to spend hours doing simple tasks. This session aims to show you how to perform some useful tasks quickly and effectively.

This session is designed to be entirely practical - so if you want a task covered, tweet it to @SteffLocke and I'll see if I can incorporate it.
300
DBA
SQL Server 2014 brings even more opportunities to integrate your on-premises systems with public cloud services, known as hybrid clouds. The question most people are asking right now is why should I create a hybrid cloud for a Microsoft data platform, and how? This session looks at the most relevant Windows Azure features to extend and improve existing SQL Server deployments and why you might want to use them. It also considers some of the hurdles you might face getting to the cloud, and how you can get over them. The key to integrating cloud services and creating a hybrid cloud is to do it one step at a time. Instead of being an all or nothing decision, it’s a one piece at a time approach. By the end of this session you should feel comfortable about what hybrid clouds for Microsoft data platforms are, and how you might create one.
200
BI
“Don’t worry about people stealing an idea. If it’s original, you will have to ram it down their throats.” Howard Aiken, Founder of Harvard’s Computing Science Program.

Counting is easy, measuring is hard! KPIs, dashboards, and basic data analytics often make us feel like we understand what’s going on. Data is moving so fast these days, and people understand it best through data visualisation. However, data visualisation can lead and mislead, and in this session we will look at theory and practice based on dataviz gurus such as Stephen Few and Edward Tufte, and scientists such as Cleveland and Ben Shneiderman. If you don't know who they are - even more reason for you to join us and learn what the gurus have to say about dataviz! If your data is visualised wrongly, your data-based decisions are based on wrong insights.

In this session, we will look at understanding data visualisation by exploring the Power BI components and tools available in the cloud, including the Power BI Admin Center, Power Query, Power Pivot, Power View and Power Map. We will look at how using them will accelerate ideas and help to clarify decisions, and, related to this, discuss the roles within IT and the business in relation to these tools. We will also look at business puzzles versus business mysteries, a distinction evoked by Malcolm Gladwell (Blink, Outliers) in relation to Power BI.

“Out there in some garage is an entrepreneur who’s forging a bullet with your company’s name on it,” said Gary Hamel, a management guru. With Power BI, let’s see how you can translate your ideas and insights for data-based decisions into a message that people can see, using the cloud as an empowerment tool and the latest, science-based principles of data visualisation. “Genius depends upon the data within its reach” (Ernest Dimnet), so let's make sure that your data is visualised according to the body of scientific research for the best results for your business.
100
Dev
Bad habits: we all have them. SELECT * is the obvious one; but in this session you will learn about various other habits and why they can be bad for performance or maintainability. You will also learn about best practices that will help you avoid falling into some of these bad habits. Come learn how these habits develop, what kind of problems they can lead to, and how you can avoid them - leading to more efficient code, a more productive work environment, and - in a lot of cases – both.
200
BI
Imagine a world in which end users can collect data from your data warehouse, from operational databases, flat files, Excel files and even external data sources like a web page or a data market. Imagine that those users can cleanse the data by themselves, manage calculations, and refresh the data through a tool they are already familiar with. Imagine that they can also build nice and crisp interactive reports (tables, charts & maps) with just a few mouse clicks or by natural language.

Guess what! We already live in a world like that. Come to this session if you want to see how you can unleash the full power of your Business Intelligence solution!
400
BI
In this demo-rich presentation, Brian shows you some of the common and not so common ways to tune SQL Server Integration Services (SSIS). Learn how to tune the data flow using some of the advanced SSIS options and how to avoid common SSIS mistakes. See how to measure performance and how to keep SSIS from monopolizing your server's resources. Lastly, discover SQL Server 2008 and 2012 features that will make SSIS more efficient.
300
Dev
SQL Server offers several transaction isolation levels and a whole variety of different locking-related query hints. Many database developers are vaguely aware that they exist, but don't really understand them or know when they should be used. Others sprinkle the NOLOCK query hint liberally through their code, using it as the answer to any and every performance problem. This session examines what isolation levels are available, how they correlate to locking hints, what their effects are and when to use them.
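By way of illustration, a minimal sketch of the correlation the session explores - a session-level isolation level versus per-table hints (the table name is hypothetical):

```sql
-- A session-level isolation level affects every query until changed...
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT * FROM dbo.Accounts;  -- dirty reads allowed for the whole session

-- ...whereas a table hint scopes the same behaviour to one reference.
SELECT * FROM dbo.Accounts WITH (NOLOCK);    -- same effect, one table only
SELECT * FROM dbo.Accounts WITH (READPAST);  -- skip locked rows instead
```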
300
BI
In the Wizard of Oz, Toto pulls back the green curtain to expose that the Wizard of Oz is a fraud. In this session, we will look behind the 'green curtain' of the data visualisation to learn how to 'poke holes' in the data that you are given, both in business and in everyday news headlines. In order to explode the myths in the data that surrounds us every day, it is a little known secret that there are hidden patterns in the data chaos that surrounds us. Deviations from these patterns highlight invention, bias, anomalies and even deliberate fraud. Join Jen Stirrup to learn how to use both data visualisation in Power BI combined with timeless data analysis and patterns such as Benford's Law to reveal or conceal efforts to distort the numbers, and question the veracity of the data. You'll need courage, heart and wisdom to analyse data, since truthful data doesn't necessarily give easy answers!
300
BI
Most data warehouses are in a constant state of flux. There are new requirements coming in from the business, updates and improvements to be made to existing data and structures, and new initiatives that drive new data requirements. How do you manage the complexity of keeping up with the changes and delivering new features to your business users in a timely manner, while maintaining high quality? Continuous delivery is an approach for managing this. It focuses on automation of many steps in the process, so that time is spent on adding new functionality, rather than repetitive steps. Attend this session and learn how Continuous Delivery can be applied to your data projects.
300
DBA
A common use case in many databases is a very large table, which serves as some kind of activity log, with an ever increasing date/time column. This table is usually partitioned, and it suffers from heavy load of reads and writes. Such a table presents a challenge in terms of maintenance and performance. Activities such as loading data into the table, querying the table, rebuilding indexes or updating statistics become quite challenging.
SQL Server 2014 offers several new features that can make all these challenges go away. In this session we will analyze a use case involving such a large table. We will examine features such as Incremental Statistics, New Cardinality Estimation and Delayed Durability, and we will apply them on our challenging table and see what happens...
300
Dev
In this session, we’ll explore the good and bad about locking and blocking – essential mechanisms inside SQL Server that every database developer and administrator needs to understand thoroughly. Locking and blocking affects performance and data integrity, and we’ll see how we can influence that functionality under pessimistic concurrency control as well as how snapshot isolation changes the game.

This session will focus on reading data (while being blocked). We’ll consider table & index design and query hints, why we should rarely if ever use NOLOCK, and what alternatives we have. Be prepared for a demo- and code-intensive session – no “GUI-action.”
200
DBA
You don't buy a lot of servers, but you're about to deploy SQL Server, and you only get one chance to make it right. Brent Ozar will boil down everything you need to know into just a few simple decisions. Which SQL Server edition do you need, does your RPO/RTO dictate shared storage, and does your app need 2, 4, or 8 sockets? Armed with the right questions, you'll know exactly what hardware to ask for.
200
BI
Data Mining is the art of analysing data, finding data models, and predicting future behavior based on an existing data set. Microsoft Data Mining provides a set of robust and reliable algorithms, such as Decision Trees, Association Rules and Clustering, which are powerful enough to apply to any structured data set. In this session you will learn about opportunities to use Data Mining on real-world challenges. There will be lots of demos of Microsoft Data Mining applied to real-world scenarios.
400
DBA
The session will be dedicated to wait stats - how to interpret them to avoid unnecessary guessing. I will present some practical ideas on how to do this.
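As a starting point for the interpretation the session describes, a minimal sketch of ranking the top waits (the excluded wait types here are just a small illustrative subset of the usual benign ones):

```sql
-- Top waits since the last service restart, ignoring a few benign
-- background waits, with each wait's share of total wait time.
SELECT TOP (10)
    wait_type,
    wait_time_ms / 1000.0 AS wait_time_s,
    waiting_tasks_count,
    wait_time_ms * 100.0 / SUM(wait_time_ms) OVER () AS pct_of_total
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                        N'BROKER_TASK_STOP', N'XE_TIMER_EVENT')
ORDER BY wait_time_ms DESC;
```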
100
DBA
NEW SPEAKER.

New Features of SQL Server 2014 DB Engine not including Hekaton

- Backup Enhancement (Azure, Encryption)
- Partition Switching and Indexing
- Managing the lock priority of online operations
- Columnstore indexes (updatable, Archival data compression)
- Incremental Statistics
- Security Enhancements
- AlwaysOn enhancements
- Buffer pool extension onto SSDs
- Resource Governor enhancements for physical IO
- Improved optimizer with new costings. 
200
Car
While starting Pragmatic Works and other companies, Brian (as with others) made a lot of naive mistakes. In this session, he will share with you the pitfalls he found when starting a business and managing a team of developers and consultants. Learn how to create a culture in your organization or team that will help you retain employees and help your team stand out. Learn some of the mistakes that most leaders make when starting a company. Finally, learn how to turn your idea into reality and grab your first customers.
500
Dev
This 500 level session will focus on using undocumented statements and trace flags to get insight into how the query optimizer works and show you which operations it performs during query optimization. I will use these undocumented features to explain what the query optimizer does from the moment a query is submitted to SQL Server until an execution plan is generated including operations like parsing, binding, simplification, trivial plan, and full optimization. Concepts like transformation rules, the memo structure, how the query optimizer generates possible alternative execution plans, and how the best alternative is chosen based on those costs will be explained as well.
300
Dev
In this session we will take an in depth look at how query plans work. We will go under the covers and see what happens when you run that query.

We will also take a look at various operators, how they work, why they are chosen and how to avoid them being used in the wrong place / context.

Attendees of this session will walk away with a greater understanding of query plans and their operators, which will enable them both to better interpret their query plans and to write more efficient SQL code.
200
Dev
Unit testing has been much slower to catch on in the database world than amongst application code developers, but the times they are a-changing and there has been much talk in recent years about unit-testing database code and even questions about agile development practices amongst database professionals.  How does this work in practice though?  What happens when we have a team developing using a variety of technologies - T-SQL, .Net, SSIS  for example?  How can we handle situations where we need to work with different versions, named and default instances of SQL Server, different drive configurations?  Can it all be made to work?  This session looks at some of these challenges and proposes some possible ways to address them.  No doubt you've got your own experiences to share as well.
300
DBA
Resource Governor is a mechanism built into SQL Server that helps you control resource utilization by different workloads. You can specify limits on CPU, memory and IO consumption for different applications or users. This feature allows you to better exploit server resources and also to provide more predictable performance. It can also serve as a powerful monitoring tool that allows you to monitor resource utilization for specific workloads.
This session will present Resource Governor from the ground up, including the new enhancements in SQL Server 2014. We will demonstrate how Resource Governor can be used to control resources and to monitor workloads in several use cases, such as a multitenancy environment and a single database serving multiple applications.
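To make the mechanism concrete, a minimal sketch of classic Resource Governor setup, assuming a hypothetical `report_user` login whose queries should be capped (the classifier function must live in master):

```sql
-- Cap a reporting workload at 30% CPU: a pool sets the limit, a
-- workload group uses the pool, and a classifier routes logins into it.
CREATE RESOURCE POOL ReportingPool WITH (MAX_CPU_PERCENT = 30);
CREATE WORKLOAD GROUP ReportingGroup USING ReportingPool;
GO
CREATE FUNCTION dbo.rg_classifier() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    -- Route the reporting login into the capped group
    IF SUSER_SNAME() = N'report_user'
        RETURN N'ReportingGroup';
    RETURN N'default';
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.rg_classifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```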
400
BI
When it comes to clustered columnstore indexes, you may already understand row groups, delta stores, and compression methods, but come see how clustered columnstore indexes work with locking and blocking and when using different compression methods and techniques. We will also dive deep into dictionary creation and different methods for ETL processes.
400
DBA
Has your SQL Server or Databases been taken over by a malevolent force and stopped breathing? Worry not, for we shall open the pages of The Necronomicon, SQL Server Book of the Dead, and through its Ancient scripture and verse resurrect them back to life.

In this presentation we will take a detailed look at SQL Server Disaster Recovery and investigate tricks, tips, techniques and best practices to recover from Database and Server failure with minimum downtime and zero data loss.

Whether you are running Standalone or Clustered Instances, use Availability Groups or Database Mirrors, or even use Advanced database functionality we will attempt to cover them all and learn how to bring everything back to life as quickly and easily as possible.

This session will take an in-depth look at Disaster Recovery and discuss diverse topics ranging from Backup and Restore techniques all the way up to rebuilding and recovering failed Clustered environments and much more.

The Necronomicon is a powerful book, so be warned, once you have opened these pages there is no going back!
300
BI
Distributed databases need to be able to dynamically move data in order to resolve queries. PDW is no different. In this session you will learn all about how PDW performs data movement, digging into the DMVs to get valuable insight under the hood.
200
BI
In this session, you will be introduced to BI and understand the key components and terminology used in the BI arena. You will also explore the components provided by Microsoft that make up a BI solution, including SQL Server Integration Services (SSIS), SQL Server Analysis Services (SSAS), Master Data Services (MDS), Data Quality Services (DQS) and SQL Server Reporting Services (SSRS). You will understand the role that a data warehouse plays in a BI solution and how the cloud is playing a role in complementing BI solutions.
300
BI
This session will look at MPP architecture and how, coupled with big data, BI will evolve to offer even better insights into business performance. We will explore Microsoft’s relationship and their roadmap for Big Data and show some live demos using the PDW appliance.
300
BI
Windows Azure provides several options for building your BI solution in the cloud, but there are several additional considerations that you must factor into deciding how to architect your solution. In this session, you learn about the available Windows Azure components and services that support a BI ecosystem. In addition, you learn how to properly configure your solution, whether it’s a completely cloud-based solution or a hybrid solution that includes on-premises data sources and on-premises data tools.
200
DBA
The life of a SQL Server DBA is a busy one. A fount of all knowledge, the SQL Server DBA is often interrupted with small questions and routine tasks. This session will show you some functions you can use to save the time spent on these, and hopefully inspire SQL DBAs to explore PowerShell further by showing how easy it is to explore SQL Server properties with it.
300
BI
This session describes the role and capabilities of the Power BI administrator. It focuses on the tasks, processes and monitoring made available by the Power BI for Office 365 Admin Center.
 
Topics include how to provision a Power BI tenant and setup a Power BI site, how to install a Data Management Gateway to enable secure access to on-premises data, and then how to publish data from on-premises data sources as OData Feeds and to enable workbook data model refreshes. It will also cover how to manage permissions with roles, and how to monitor resource usage and system health.
 
This session is mostly relevant for Power BI administrators to understand the Power BI feature set and how to enable and support Power BI solutions. It is also relevant for Power BI business analysts to understand what can be achieved with a Power BI site.
200
Dev
This session will cover a range of SQL coding techniques and approaches to various problems faced by anyone writing complex T-SQL every day. We'll look at using some under-appreciated features and syntax, as well as some alternative approaches to solving various problems, such as unpivoting data and working with dates.


Finally, we'll examine some misconceptions around T-SQL and SQL Server, look at how they came about, and discuss the best ways around them.
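As one example of the kind of alternative approach the session hints at, a minimal sketch of unpivoting columns into rows without the UNPIVOT operator (the table and column names are hypothetical):

```sql
-- Turn monthly columns into rows using CROSS APPLY with a VALUES
-- constructor - often more flexible than the UNPIVOT operator.
SELECT s.ProductId, v.MonthName, v.Amount
FROM dbo.MonthlySales AS s
CROSS APPLY (VALUES (N'Jan', s.Jan),
                    (N'Feb', s.Feb),
                    (N'Mar', s.Mar)) AS v(MonthName, Amount);
```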
200
Car
The technology field is full of amazing jobs, but you need to stand out to get the perfect job and salary. Learn about the jobs and salaries that are available in the market and check whether you’re on track to achieve one of the amazing jobs in business intelligence or software development. Also, learn how to make your resume shine against thousands of people competing for the same jobs.
400
BI
Is Power Query as powerful as SSIS? This session will show all the capabilities of Power Query to manipulate data and compare it to SSIS.

Warning: M code inside. 100% demos.
300
DBA
Welcome to the strange and mysterious world of SQL Failover Clustering and enter Ye Olde Cluster Curiosity Shoppe where we will reveal a miscellany of top tips, tricks and advice gained from nearly two decades of installing and troubleshooting SQL Clusters.

Roll-up roll-up! Come and see:
  • The Shrunken Head.
  • The Bearded Lady.
  • Many Headed Cluster.
  • The Marie Celeste.
  • The Zombie.
  • Installation of Death!
  • The Bodysnatcher.
  • Montezuma’s Revenge
....and many more exhibits and top tips from the field.
200
BI
Did you know that a large portion of Information Workers’ and Data Analysts’ tasks are akin to “myth busting”? How can Power BI be used for these myth-busting tasks? What if myth busting in business applications could be done as easily as a few clicks, or simply by asking questions in plain English?

Myth busting requires gathering facts, modeling data for repeatable usage and/or testing, exploring the data captured, proving a few theories, coming up with "what-ifs" and making a well-informed conclusion. This session will demonstrate how to create an ideal myth-busting lab covering all these aspects in one suite, Power BI. The demo will show how these activities are performed by Information Workers, Data Analysts and IT Administrators individually, and how Power BI is used for enabling team collaboration. You will learn best practices for delivering self-service and mobile Business Intelligence solutions with Power BI that you can try at home - no safety goggles required!
300
DBA
Ever wonder how someone else does it? There’s no right way or wrong way, but in this session, you can peer over Brent’s shoulder (virtually) while he takes a few Stack Overflow queries, tries various techniques to make them faster, and shows how he measures the before-and-after results.

You'll learn:
  • How to make sure you've got the right execution plan before you start
  • How to measure your changes as you tune the query
  • Why index changes should be the last thing you consider
  • What free tools make the process much easier
300
Dev
A lot of companies have a philosophy of shipping early with as many features as possible. Security is an afterthought, since it isn't fun to do and no one will attack them anyway. But the dark side never sleeps, and security breaches have always happened. Many have left companies severely exposed or even bankrupt. In this session we'll look at a few attack vectors that can be used against your company, and what you as a developer can and should do to protect against them. It will involve a good mix of security-conscious SQL Server and application development. Because you care about your work, and nobody messes with you.
300
BI
Users expect business intelligence solutions to quickly deliver answers to their questions. When queries start slowing down, how do you determine the root cause of the problem? Is it the report server, the cube, the query, or server resource contention? Come to this session to learn how to use performance counters, report execution log data, and trace files to troubleshoot performance problems. You'll also learn how to set up a monitoring solution to capture data for benchmarking purposes before problems arise and for diagnostic purposes when queries start slowing down. You'll also learn how to interpret the performance monitoring data so that you can take the necessary steps to resolve performance problems in your BI solution.
300
Car
As much as we'd like to believe that our industry is a meritocracy, it isn't. More often than not jobs are offered and awarded based on impressions, recommendations, and references as much as technical ability. This session will examine some of the practical ways that you can use networking, blogging, volunteering, research, writing, speaking, and more to build a brand that will get you noticed. When you find a job that you really want, you'll have a great chance to impress the hiring manager and get the interview, and potentially the offer.
200
BI
Power Pivot and Power View bring with them some amazing business intelligence and data visualization enhancements. Watch how you can improve data cleansing with Excel 2013 and some of the new features integrated into Excel and the other Office tools to visualize data better than ever before. See how the new Power Pivot and Power View features in Excel 2013 take your data to the next level of visualization. We’ll also show how you can visualize map data with Power Map and tie disparate data together with Power Query.
300
BI
This session will focus on the hybrid facet of Power BI. Power BI has one foot on-prem and another in the cloud, and the Data Management Gateway is not the only link between the two sides. This session will dive into the subject and answer many questions: Where is your data stored? Where are your queries executed? And so on. Keywords: Power Query, Data Management Gateway, Data Catalogue, Power BI Admin Center, Network
300
BI
This session explores how data can be spatially represented by using Microsoft self-service BI tools. These tools include Excel (with Power Pivot, Power View, Power Map, and Apps for Office) and Report Builder.
 
Theory and demonstrations will commence with the foundational topic of how data can be stored and prepared to deliver secure, fast, complete and accurate spatial analysis. This topic will be followed by demonstrations and discussion of each of the Microsoft self-service BI tools that enable spatial analysis. Discussion of the capabilities and features of each tool will help analysts determine the best fit tool for a specific spatial analysis requirement.
 
This session will be of interest to business analysts and BI developers.
200
BI
When working with a data warehouse on SQL Server, it is always best to start by looking at your code before buying hardware. But you’ve looked at your code, and optimised to the hilt.

This wasn’t enough.

You bought higher and higher end hardware.

This wasn’t enough.

Now you’re not sure what the next step is. Is it to buy the absolute fastest SQL Server money can buy? Is it to look to NoSQL, or caching solutions, or scale-out messaging using Service Broker to distribute your load?

These may all be valid solutions.

But there is one more solution: automatic scale-out using the Parallel Data Warehouse. With Massively Parallel Processing, balanced IO and the ability to add nodes at will, the PDW represents a good solution for certain workloads. Is yours one of them?

In this session, Mark will discuss the different techniques you can use to scale out, dive into the specific details of how MPP scales out, and, from his project experience on the PDW and on other scale-out projects, guide you through deciding if a PDW is the right solution for you.
300
Dev
Are you working with an application supporting different time zones and international date formats? How does your choice of storing date / time values differ if you are using Windows Azure SQL Database instead of an on-prem SQL Server database? Did you know that the SQL Server Database Engine, Integration Services (SSIS) and Analysis Services (SSAS) all have slightly different date / time data types? How can you bulletproof your system for the ever-growing data in your organization, with much more date and time data to come?

With time being the one thing that constantly changes, date and time calculations are widely used and essential to all business transactions. Yet, most systems only use one or two date data types; Is this a wise decision? Quite often data retrieval relating to a period of time performs poorly, or worst of all, is not accurate. These 10 tips will help you bridge the gap, and provide the techniques to build bulletproof systems the users demand and DESERVE!
200
DBA
Microsoft is moving to the cloud and SQL Server 2014 is the first release with features designed specifically to extend your datacentre into the cloud. In this session we'll look at these features with end-to-end demos, discussions and scenarios so you can understand what the future could look like in your own SQL Server environment.
300
BI
Microsoft Windows Azure offers great potential for small to medium sized applications. However, especially for Analysis Services, such applications can easily become quite big very fast. For on-premises solutions this is very common, and various solutions have already been proven to work for bigger-scale implementations. However, things are different when it comes to Windows Azure: single hardware resources are still limited, and therefore different approaches need to be used. In this session I will show how different scale-up and scale-out scenarios can be implemented in Windows Azure, focusing mainly on performance and manageability.
300
Dev
Do you know Bookmark Lookups in SQL Server? Do you like their flexibility for retrieving data? If so, you should know that you are dealing with one of the most dangerous concepts in SQL Server! Bookmark Lookups can lead to massive performance losses that blow up your CPU and I/O resources! Join me in this session to get a basic understanding of Bookmark Lookups and how they are used by SQL Server. After laying out the foundation, we will talk in more detail about the various performance problems they can introduce. After attending this session you will have a better understanding of Bookmark Lookups and will finally be able to tell whether a specific Bookmark Lookup is a good or a bad one.
300
BI
This session explores the seven CUBE functions that are natively available in Excel since Office 2007. Unknown to many business analysts, these useful functions can be used to retrieve data model members and values to create cell-by-cell report and dashboard designs.
 
The session topics will introduce each of the seven functions. Demonstrations will range from the simple, to the more sophisticated involving dynamic expressions, MDX expressions, integration of data from multiple data models, and macro-driven layouts. The publication of Excel workbook to SharePoint, and the embedding of reports into web part pages will also be covered.
 
This session is a must for those looking to drive more from Excel when reporting from the BI Semantic Model – tabular or multidimensional.
200
Dev
Have you pulled a script to identify duplicates from a blog post, but couldn’t quite get it to work because you weren’t sure what that ROW_NUMBER() function was doing? Maybe you heard talk about creating running totals without using sub-queries, but you got frustrated when the groups weren’t totaling correctly. Or maybe you’ve never even heard of Window Functions. All are good reasons to attend this demo-filled session, which demystifies this versatile T-SQL tool. First, we’ll break apart the OVER clause, the key to understanding how window functions work. Then we’ll expand on each group of functions that can use the OVER clause: ranking, aggregate, and analytic functions. Finally, we’ll look at real scenarios where this tool works and talk about performance considerations. When you leave, you’ll have the fundamentals you need to fully develop your mastery of Window Functions.
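To show the flavour of the duplicate-removal pattern mentioned above, a minimal sketch (the table and columns are hypothetical):

```sql
-- Delete duplicates while keeping the earliest row per key:
-- ROW_NUMBER() restarts at 1 within every (Email) partition, so
-- every row with rn > 1 is a duplicate of an earlier row.
WITH numbered AS
(
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY Email
                              ORDER BY CreatedAt) AS rn
    FROM dbo.Customers
)
DELETE FROM numbered WHERE rn > 1;
```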
200
BI
Power BI Q&A is, at the moment, oriented towards English speakers. How do you tame it when you are French and not fluent? This session will dig into the language grammar of Power BI Q&A and help people (not only the French 😊) to create Q&A-ready semantic models. Keywords: Power Pivot, Power BI Q&A, Semantic Model, Data Stewardship
200
DBA
Any data that we have today is delivered to end users as reports and dashboards. When you have more than 100 reports, 10 departments and 300 users, how do you manage the departments, reports, users and security? What are the techniques for implementing linked reports, subscriptions, report history, snapshot options, and editing reports using the browser-based Report Builder?

Manage user activity, failed reports, failed subscriptions and other activities with a live dashboard. Take away a working dashboard to monitor all of these after this session.
200
DBA
Ensuring peak SQL Server performance isn’t always easy and requires a lot of work on the part of the DBA. To maintain the best-possible performance, you need to make sure you’re monitoring the right things. But how do you know if the figures you’re seeing are good or bad? Baseline comparisons can help.

In this session I will show you how to get the most from them, explaining what a baseline is, why and when you need to take one, and how you can create one. You’ll also learn about a number of native Windows and SQL Server tools that will allow you to do just that. Don’t wait for a disaster to fully realize the importance of baselining.
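The core idea of a baseline comparison can be sketched in a few lines of Python; the metric values below are invented, and real baselines typically track many counters per time-of-day window:

```python
from statistics import mean, stdev

def build_baseline(samples):
    """Summarise a metric's normal range from historical samples."""
    return {"mean": mean(samples), "stdev": stdev(samples)}

def is_anomalous(baseline, value, sigmas=3.0):
    """Flag a reading that falls outside mean +/- sigmas * stdev."""
    return abs(value - baseline["mean"]) > sigmas * baseline["stdev"]

# Invented numbers: a week of hourly "batch requests/sec" readings.
history = [510, 495, 502, 488, 520, 505, 498, 515]
baseline = build_baseline(history)
print(is_anomalous(baseline, 900))   # far above normal -> True
print(is_anomalous(baseline, 507))   # within normal range -> False
```

Without the baseline, neither 900 nor 507 means anything on its own; that is the whole point of capturing one before trouble starts.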

100
Car
Would you like to understand the magic of BI and DWH?

SSAS, cubes, ETL, ELT, etc. - demystified and explained in plain language.
200
DBA
In-memory OLTP is probably one of the most radical technologies to hit SQL Server in a long time. It has become the topic of endless discussions, debates, and the unavoidable myths. In this session we will cover the actual pains that drove the design of in-memory OLTP. We will understand WHAT it was created for, WHY it solves challenges that proved to be extremely hard to solve, HOW to find out if it is the right technology for you and not less important, what it was NOT designed for. We will cover a few real stories of applications that have utilized In-Memory OLTP successfully.
300
DBA
We are used to the traditional way of connecting our servers to storage via SAN or other common methods. These options do not always scale or easily give us the ability to provide both physical and virtualized deployments the availability, scalability, and reliability that we require. Traditional methods of storage connectivity can be expensive and difficult to implement. With the rise of affordable flash drives, high-bandwidth networking, native support for SMB 3.0 in SQL Server 2012 and later, and CSV support in SQL Server 2014, network and alternative storage methods will become the center of our deployments for more than just application traffic and server connectivity. This session will cover how you can take advantage of the speed and reliability of things like a Windows Scale-Out File Server, SMB (including SMB Direct), new network protocols, and more to drive down costs, simplify deployments, and standardize your storage connectivity for your SQL Server instances and databases.
200
DBA
Using a virtual machine divorces you from the underlying hardware and makes understanding what is happening on your systems more difficult. Take that same situation and move it into the cloud on Azure systems and the difficult can seem almost impossible. Understanding how you can monitor performance in the Azure environment, what works, what doesn't and what's a lie, will help you to better understand how your systems are performing. You'll be better able to identify and fix bottlenecks in order to ensure the necessary performance of your systems that are hosted out on the cloud. We'll cover the different methods you have for SQL Server on a VM and for Windows Azure SQL Database so that you can begin monitoring your own cloud-based systems as soon as possible.
300
Dev
SQL Server 2014 In-Memory OLTP Technology - AKA Hekaton - is here. Should you be getting excited? Will it prove to be the miracle turbo-boost for your databases that you are hoping for? In this session, three simple questions will be answered: Could you use it? How would you use it? And why would you want to use it? Tidy. So when your boss hears the buzz and wants everything converted ASAP, you'll know the score.
500
BI
In an enterprise, merging master data, like customer data, from multiple sources is a common problem. Typically, you do not have a single key, i.e. the same key identifying a customer in different sources. You have to match data based on the similarity of strings, like names and addresses. In this session, we are going to check how the different algorithms for comparing strings included in SQL Server 2012 and SQL Server 2014 work. We are going to use the Soundex Transact-SQL function, four different algorithms that come with Master Data Services (Levenshtein, Jaccard, Jaro-Winkler and Ratcliff-Obershelp), and the Fuzzy Lookup transformation from Integration Services. Finally, we are going to introduce how SQL Server 2012 Data Quality Services (DQS) helps us here. We are also going to tackle the performance problems of string-matching merges.
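As a flavour of what these algorithms do, here is a plain-Python sketch of the Levenshtein edit distance (one of the four MDS algorithms mentioned above); the names being matched are invented:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits (insert, delete,
    substitute) needed to turn string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # delete from a
                           cur[j - 1] + 1,              # insert into a
                           prev[j - 1] + (ca != cb)))   # substitute
        prev = cur
    return prev[-1]

def similarity(a: str, b: str) -> float:
    """Normalise the distance to a 0..1 similarity score."""
    longest = max(len(a), len(b)) or 1
    return 1 - levenshtein(a, b) / longest

print(levenshtein("Jon Smith", "John Smyth"))           # 2 edits apart
print(round(similarity("Jon Smith", "John Smyth"), 2))  # 0.8
```

Matching then becomes a matter of choosing a similarity threshold above which two records are treated as the same customer.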
200
Dev
So you are a developer or a systems admin and you've just been handed a SQL Server database and you've got no idea what to do with it.  I've got some of the answers here in this session for you.  During this session we will cover a variety of topics including backup and restore, recovery models, database maintenance, compression, data corruption, database compatibility levels and indexing.

While this session won't teach you everything you need to know, it will give you some insights into the SQL Server database engine and give you the ability to better know what to look for.
300
BI
This introductory session describes and demonstrates how to create a big data analytics solution for structured data by using HDInsight and Excel 2013.
 
This session will be of interest to those new to the concept of big data, new to self-service data modeling with Power Pivot, and for those interested to understand how big data can play a role in a self-service BI solution.
 
The first demonstration will show how to create a big data solution with HIVE and HDInsight. The next demonstration will create a PowerPivot data model to integrate the big data with on-premises and external data. Finally, the data model will be used to produce reports by using Power View.
200
Dev
“Column 'dbo.xyz' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.” Seen it; fixed it; but can we explain why we’re getting a syntax error in the first place? The optimizer must follow a very specific hierarchy in order to generate a plan. When you understand the hierarchy, you better understand the behavior of the optimizer.

This all-demo session will explain the logical processing hierarchy, giving you the foundation knowledge you need to build well-structured queries that keep the optimizer happy. Learn how the FROM clause is processed, why a calculated column’s alias can’t be addressed in the JOIN, why WHERE isn’t the only filter, and why NULL confuses everything. When you leave, you will think very differently about how you build your queries, and the query optimizer will love you for learning to speak its language.
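The logical processing hierarchy can be sketched as a plain data pipeline; this Python sketch uses invented data and mimics only the ordering of the phases, not the optimizer itself:

```python
from itertools import groupby

# Invented sample data standing in for a table of order rows.
orders = [
    {"cust": "A", "amt": 10}, {"cust": "A", "amt": 30},
    {"cust": "B", "amt": 5},  {"cust": "B", "amt": 50},
]

# 1. FROM  : the source rows come first.
rows = orders
# 2. WHERE : filters raw rows -- SELECT aliases don't exist yet,
#            which is why WHERE cannot reference them.
rows = [r for r in rows if r["amt"] > 5]
# 3. GROUP BY : rows collapse into groups; only grouped columns
#               and aggregates survive past this point.
rows.sort(key=lambda r: r["cust"])
groups = [(k, [r["amt"] for r in g])
          for k, g in groupby(rows, key=lambda r: r["cust"])]
# 4. HAVING : filters groups, so it may use aggregates.
groups = [(k, amts) for k, amts in groups if sum(amts) > 20]
# 5. SELECT : expressions and aliases are evaluated only now.
result = [{"cust": k, "total": sum(amts)} for k, amts in groups]
# 6. ORDER BY : runs last, so it *can* see the alias "total".
result.sort(key=lambda r: r["total"], reverse=True)
print(result)
```

Step 3 is exactly where the error message above comes from: once rows collapse into groups, a non-grouped, non-aggregated column has no single value left to select.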
300
DBA
In this session, you'll gain invaluable guidance for optimizing your backup strategies. Actually, that is a lie: I will be advocating a mind shift away from backup strategies, and the devastating effect they can have on a company, towards thinking about implementing restore strategies instead. Things become much simpler when you consider the purpose of a backup, and the effects that the different recovery models and backup options have on your customers and, effectively, your livelihood. Using trace flags, the session will also cover how SQL Server manages its own backup options and how you can tune them to make sure that you meet the time constraints of your enforced maintenance windows.

300
Dev
The purpose of this session is to have some fun with T-SQL and to learn practical tips and tricks that will help you improve and optimize your solutions. For example, you will learn about performance problems related to using multiple predicates and will be given tips on how to address those problems. You will learn about tips concerning indexing as well as tips concerning various query constructs. Join this session, have fun, and learn a few practical tips along the way.
300
Dev
One of the most frequently and successfully attacked targets is the data that resides in a database server. SQL Server is considered "secure by default" and has in fact officially been the most secure database for 5 years in a row, but most of the exploited weaknesses are due to misconfiguration or weak coding practices.

In this purely demo-based session, I will show several real-life attacks, from merely reading data up to disrupting service availability via various types of manual and automated SQL Injection, including a largely unknown elevation-of-privileges attack for a non-sa account.

If you have a database which can be reached by a web-server or other processes beyond your direct control and you are unsure regarding the possible security implications to watch out for as a developer or administrator, this session is meant for you.

– Note: The focus is not to give instructions on how to attack a system, but rather to highlight common weaknesses and why they can be fatal.
300
BI
Polybase is one of the most exciting, innovative features in PDW; enabling transparent data integration with Hadoop's distributed file system (HDFS) and soon Windows Azure Storage Blobs (WASB). See it in action.

In this session you will learn:
* Polybase Architecture
* Parallel Import
* Parallel Export
* Hybrid Query Execution
* New features of Polybase
300
DBA
Microsoft has a lot of useful Business Intelligence (BI) tools in both SQL Server and Office. But why would you only use them to analyse your business? Whether you want to keep a business up and running or a bunch of servers... both can benefit from the same tools. In this talk we have a look at typical problems SQL Server administrators are facing... and how to analyse them using the Microsoft BI tools. From simple reporting up to data mining, this talk covers (and uncovers) them all...
400
DBA
This session will take a deep dive into query scalability with column store indexes and batch mode. The presentation will illustrate how, by leveraging vectorised processing and the CPU L2/L3 cache, batch mode scales, and will compare this to row mode. Stack walking will be used to quantify the cost of conventional page and row compression, column store versus the new-to-SQL-2014 column store archive compression, and row mode versus batch mode operators. The effect of storage that can and cannot keep up with the available CPU resource will be covered, along with how well batch mode scales across 24 schedulers.
400
BI
If your database has a few million rows, DISTINCTCOUNT is easy and efficient. However, when you face billions of rows or hundreds of millions of different values, performance starts to be an issue. In such a case, you have to find different solutions like partitioning, distinct value reduction, and rephrasing the DAX query, and, for each trial, you need to dive into the query plans to get a good understanding of how best to leverage the VertiPaq engine. You will see all of that in this session. At the end, you will NOT have learned how to write the best formula for any distinct count calculation, but you will have the knowledge to find the best formula for your specific scenario.
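One of the ideas mentioned above, partitioning, rests on simple arithmetic: if you partition on the counted column itself, each distinct value lands in exactly one partition, so per-partition distinct counts can simply be summed. A Python sketch (not DAX, and with invented data):

```python
# Partitioning on the *value* keeps each distinct value in exactly one
# partition, so per-partition distinct counts are additive.
def distinct_count_partitioned(values, parts=4):
    partitions = [set() for _ in range(parts)]
    for v in values:
        partitions[hash(v) % parts].add(v)
    return sum(len(p) for p in partitions)

data = [1, 2, 2, 3, 3, 3, 42] * 1000   # 7000 rows, 4 distinct values
print(distinct_count_partitioned(data))  # 4
```

Note that this additivity breaks if you partition on anything other than the counted column, which is why distinct counts are notoriously harder to scale than plain sums.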
300
Dev
A highly interactive and popular session where attendees evaluate the options and best practices of common and advanced design issues, such as:
  • Natural vs. Surrogate keys 
  • NULL/NOT NULL 
  • DBA vs Dev vs DA
  • Data Type Lengths 
  • RDBMSs vs. NoSQL
  • the CLOUD
  • Who Calls the Shots and Who Does What?
  •  ...and others. 
Bring your votes, your debates, and your opinions.
200
DBA
The cloud is a polarizing buzzword in IT, especially for DBAs. The reality is that all of us will be affected by it over the next few years in one way or another, much like we have been by virtualization. One of the best uses for the cloud is making disaster recovery easier than it has been in the past. This session will discuss the cloud from a SQL Server DBA perspective and how things like hybrid on-premises/cloud solutions can be architected today to bring resiliency to current solutions with features like availability groups in SQL Server 2012 and 2014.
300
Dev
Bad execution plans are the bane of database performance everywhere they crop up. But what is a bad execution plan? How do you identify one in your system and, once identified, how do you go about fixing it?

In this session we’ll look at some things that make a plan ‘bad’, how you might detect such plans and various methods of fixing the problem, both immediately and long-term.
400
DBA
While the online index rebuild does sound attractive to use, it has its specifics, and it is important to know them when we perform index maintenance. It will save your time as a DBA if you can answer questions like: why does it take a different amount of time to complete on every execution; if it is an online operation, why does blocking occur; and how do you deal with that? The new managed lock priority for online index rebuild in SQL Server 2014 is a great feature. It could solve some of our problems, and just as easily create bigger ones. That's why it needs careful planning and wise implementation. What it means, how it works, how to implement it, and what to avoid - everything you need to know about online index rebuild and Managed Lock Priority can be found in this session.
300
BI
Ever tried to import a file with the Import/Export wizard? Or created a bunch of SSIS packages to process a data warehouse load? Then you know how much work it is to specify the metadata correctly just to create a package that actually works. Wouldn't it be cool if you had a descriptive language which looks at your metadata and just created the packages for you? This is what BIML is all about. In this session I explain what BIML is and how it works, and I'll show you how you can generate your packages and quickly respond to changes. You can expect a demo-rich session with lots of notes from the field and practical examples. This is not just for BI developers; DBAs and SQL developers who need to import or export data occasionally will learn some quick and easy tricks as well.
400
BI
SQL Server 2012 and 2014 Database Engine has so many business intelligence improvements that it might become your primary analytical database system. However, in order to get the maximum out of these features, you need to learn how to properly use them. This in-depth session shows some extremely efficient statistical queries that use the new Window functions and are optimized through algorithms that use mathematical knowledge and creativity. During the session, the formulas and usage of those statistical procedures are explained as well. This session is useful not only for BI developers; database and other developers can successfully learn how to write efficient queries as well. Or maybe you want to learn how to become a data scientist? Then you need to know statistics and programming. You get the best of both in this session.
300
DBA
There's no doubt about it: the transaction log is treated like the proverbial ginger-haired stepchild. The poor thing does not receive much love; the transaction log, however, is an essential and misunderstood part of your database. A team of developers will create an absolutely awesome, elegant design the likes of which has never been seen before, but leave the transaction log using default settings. It's as if it doesn't matter: just an afterthought, a relic of the platform architecture.

In this session you will learn to appreciate how the transaction log works and how you can improve the performance of your applications by making the right architectural choices.

400
DBA
The plan cache is one of SQL Server's fundamental components. Getting to know it can take you a few steps ahead in optimizing your system.
In this session we will demonstrate a few basic and advanced ways we can use the plan cache in order to identify and solve query performance problems.
300
Dev
Development has been moving to automatic builds and continuous integration. It has been difficult for database and Business Intelligence projects to align with these processes. SQL Server Data Tools and the Visual Studio environment are now enabling database and BI projects to embrace automatic builds. This session takes you through an example warehouse project and shows how you can automate the build and deploy process. The session is demo rich and will show by example how you can use T4 and SSDT (with SQL Server 2014) to build and deploy projects automatically. This session is useful to both BI and database developers.
300
DBA
Indexing is one of the most important tasks for a well-performing database. An optimized database needs to be checked regularly for all kinds of index problems. This session will show the usage of DMOs (dynamic management objects) for the maintenance of indexes. The audience will get familiar with the physical thresholds for optimal indexes, obtain proof of index usage within a few minutes, and see how to demonstrate the results to vendors! A deep look into the DMOs for the new columnstore indexes will be demonstrated, too!
300
BI
The DAX language has a low number of built-in functions, but it is very flexible and you can write complex calculations with it. It is so flexible that you might find many alternative ways to write an expression solving a business problem. You can save development time by adapting an existing DAX Pattern to your specific scenario. This session will present a set of fundamental patterns you have to know in DAX, because they are used very often in many business scenarios and are already tested and optimized, saving you from the effort of choosing between different possible optimizations.
300
DBA
We all have to deal with applying some sort of update to Windows, SQL Server, and/or hardware. Keeping things up to date is crucial for supportability, not to mention other things like security, performance, and stability. Since patching often involves downtime, you need to be careful not only what you apply to avoid problems, but also find ways to minimize the impact to the business and end users. This session will cover topics such as how to approach patching including what changes you should and should not consume, features that may help you automate or script patching, and developing your own long term patch management strategy for SQL Server deployments.
200
DBA
Do you use version control for your application development?

Do you handle your database development in a similar way to your application
development?

Do you use any tools for managing database changes scripts?

If you have answered yes to any of the above questions, you definitely want to attend this one.

This session will describe the best practices for dealing with real-life database development challenges and how to keep up with the pace of development processes such as Agile, Continuous Integration, Continuous Delivery, etc.

We will cover the differences between database and application development (local files, local directories vs. database; branches and trunk; parallel development; etc.), as well as the difference between deploying and merging application code as opposed to database.

We will demonstrate how database enforced change management addresses these challenges, and of course how everything can be automated in order to save time and money.
300
DBA
Have you ever been in a situation when you had to upgrade to a new version of SQL Server or you were just facing a huge performance problem? What if there was a way to tell if the upgrade was going to be successful or that the new index you created would have helped? Now there's a feature for that! It's called Distributed Replay and in this session you will learn how to take advantage of it and thus help your business in very crucial moments! We will take a look at how we can configure and set up our Distributed Replay environment and we'll also go through the whole process - from capturing our application workload to replaying it using various options Distributed Replay offers. 
200
DBA
You are a DBA and your manager asked you to manage the Enterprise Data Warehouse, which includes a number of ETL packages. While comfortable with the relational database, you are not sure how to handle Integration Services (SSIS).

In this session, you will learn what SSIS is and what components it consists of. You will also learn how to use the SSIS catalog, which is new in SQL Server 2012, to track the execution of packages, as well as how to troubleshoot packages when they fail or cause problems.
200
Dev
In this talk, I will share my experience in creating and running Azure SQL Database, a large cloud-scale service powered by SQL Server technologies. This presentation will provide an outline of the scenarios and requirements for a database service and then take you on a journey through the core concepts and capabilities. You will learn how you might address these needs with SQL Server in an on-premises environment and then take a deep dive "under the bonnet" to see how the Azure DB system is actually delivered. The presentation will conclude with a discussion of a number of interesting challenges we've solved delivering this service and how this work flows value back to core SQL Server offerings.
This presentation is targeted at SQL Server practitioners interested in learning more about Azure SQL Database and will provide them with working knowledge of the what/why and an outline of the internals of the system.
300
DBA
Planning on upgrading to SQL Server 2014? Here's what you need to think about and what you need to do, step by step. Simple.
200
DBA
SQL Server 2014 continues to expand on the in-memory database features that were first introduced in SQL Server 2012. During this session we will explore the new in-memory database tables, which were developed under the code name Hekaton. The scope of the feature will be discussed, as well as the use cases and best practices for in-memory database tables, and when they shouldn't be used.
300
DBA
A few years ago, when someone suggested running SQL Server on a file share, I would protest and start a rant about latency versus throughput, etc.
But today I think that some of the coolest new features of SQL Server 2014 are actually Windows Server 2012 R2 features. In this session I will show you how to use Storage Spaces, Auto Tiering and some other features to build a highly available, fast, scale-out file server. Then we'll use it to host SQL Server data and log files, which is fully supported when using SQL Server 2014.
300
DBA
Many three-letter acronyms (TLAs) adorn our database world. This session focuses on two recent additions – MDS (Master Data Services) and DQS (Data Quality Services) – explaining what the new features are and exploring how to leverage them to improve your data quality.

High data quality is fundamental to any business analytics system. In this session, we’ll begin with an overview of data quality and the latest Microsoft tools available, and then demonstrate MDS and DQS and how to use them together for continuously improving data quality.

We’ll also look at how to integrate them into your existing data-quality strategies. Data quality is everyone’s responsibility, but we can still lead the way forward. Join this session to see how.
200
DBA
Management Studio Templates and Snippets make the DBA look like a superstar and in the process makes our lives easier through tapping into the mantra of Work Less - Do More. Join Tim as he shares his shortcuts for optimizing repetitive tasks that take into consideration System Views, Dynamic Management Objects, and even output from the widely popular SQL Server Maintenance Solution from Ola Hallengren to enable you to raise your worth at your office while empowering you to free up time in your schedule to do more of the things you want to do!
300
BI
In a corporate BI environment, it is common to have data available in controlled data sources such as a data warehouse, relational data marts, and Analysis Services cubes. Consuming this data with self-service BI is very useful, but how do you manage and validate access to corporate data sources? How do you share and validate new queries created by end users? How can users extract data at the right granularity level? Power Query bridges the gap between corporate and self-service BI. It empowers end users so that they can extract and manipulate data from several data sources, inside and outside of the company. In this session, you will learn the Power Query best practices that enable the extraction of data from existing databases and cubes, guided by an expert who knows both worlds (corporate BI and Power Pivot data models).
400
DBA
In this session we will analyze some of the anti-forensic techniques that may be used in SQL Server in order to prevent a forensic technician or an auditor from retrieving information and/or obtaining an accurate picture of the state of a SQL Server instance.

We will address topics such as:
- Cheating the transaction log;
- Direct manipulation of data files;
- Using MSSQL rootkits
200
Car
Have you been to a bad presentation, or is it actually you who wants to gain and improve presentation skills? In this session you will learn what makes a great presentation and what mistakes even advanced speakers sometimes make! You will be presented with the most important concepts and techniques that will help you go to the next level and deliver far better presentations for your audience! We will take a look at both the fundamentals and the specifics of a technical presentation and what makes one great!
100
DBA
Do you want to reduce overhead in your database deployment?
Do you want to automate database deployment as part of your Continuous Deployment?
Do you want to be 100% confident in your database change deployment automation?

If you have answered yes to one or more of the above questions, you definitely want to attend this session demonstrating safe database deployment automation.

Many challenges reside in database change deployment processes which rely on manually generated scripts or on third-party compare & sync tools.

Our session will discuss these challenges, as well as the reasons why DBAs veto any automation of the above processes, as they can never be confident enough in the accuracy of the automation script generator or the manually generated scripts.

In this session we will demonstrate how database change deployment automation can safely be enjoyed using the database enforced change management approach, starting from real database source control (which is done directly on the database objects), through a 3-way impact analysis, in order to identify conflicts and resolve them.
300
Dev
Join me in this session where I share real world experiences on how I've been using SSDT. This session will show you how to go about building and deploying your databases locally, checking for anomalies within the generated scripts and what you can do to avoid these. I'll also show you how to use the Managed API for those pesky edge cases. 
300
BI
There are clear rules for modelling an OLTP relational database, developed by Boyce and Codd before I was even born. However, dimensional modelling patterns are still rather vague, and modellers are faced with some tough decisions in the real world of complex data relationships, tricky or absent reporting requirements, and various aspects of performance and ease of use. I would like to present a set of design patterns to use when tackling common DW modelling problems and challenges: when to use star or snowflake, junk and mini dimensions, bridge tables, and other data warehouse modelling tricks.

There are many books on DW terminology and general theory, so I will assume you have at least a grasp of common lingo: dimension, fact, slowly changing dimension types, star and snowflake schemas, etc.

Patterns I present aim to follow one common structure:
Problem (What?) -> Solution (How?) -> Reasons (Why?) -> Consequences (Why not?)

The idea is not just to present a common framework for sharing what works and what doesn’t, but also to explain why that is the case and what happens when the patterns are not followed.

These patterns are equally applicable for a Kimball or Inmon religion, so both camps are very welcome to attend.
300
DBA
We all know that indexing is KING when it comes to achieving high levels of performance in SQL Server. When indexing also combines two of the Enterprise features, partitioning and compression, we can often see substantial gains. Learn how to identify those objects that benefit greatly from being partitioned or compressed, or from combining both of these features to even greater effect. Demos illustrate the performance gains with real-world examples, and you will take away advanced scripts for use in your own environments.
200
BI
In this session we will explore the Irish Economic Crisis from multiple perspectives, using Microsoft’s latest Visualisation tools including Power BI, Power Query, Power Map, Power View and Q&A. It starts from a very common business angle, where people need to make sense of data, fast, but don't know what questions to ask, nor how to combine available sources in a way that makes sense. This session converts the Irish Economic crisis into a story that can be explored interactively, using familiar tools, in a way that people may feel should have been noticed in the run-up to the crisis. Join me to find out what led to a small country owing over 200 billion euro and if Power BI could have helped.
400
Dev
A vital component inside the query optimizer is the cardinality estimator: the algorithms that calculate the estimated number of rows that will be output from each operator. In SQL Server 2014, there have been many changes aimed at giving a more accurate number of rows, and therefore better plans.

This session will look at these changes, comparing and contrasting with SQL Server 2012/2008 to see how they help.
400
DBA
Is your incoming data volume becoming just too hot to handle? What you need is an In-Memory OLTP "Flaming" partition! In this session you will see how to put SQL 2014 In-Memory OLTP into action with one of the design patterns that it is ideally suited for. We'll start with a problem and end with a solution. Simple.
300
BI
When loading a data warehouse you want the data inserted into the tables as fast as possible. You know you have to use bulk loading, but what do you need to do to ensure a minimal footprint on the transaction log?

In this session, you will learn about minimal logged operations. You will also learn about the different methods for bulk loading data into your data warehouse; using SSIS, BCP and T-SQL.
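The general principle behind minimal logging, paying the log overhead once per batch rather than once per row, can be sketched with SQLite from Python; SQLite's journal is of course not SQL Server's transaction log, and the table is invented:

```python
import sqlite3

def load(rows, batch_size):
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE fact (k INTEGER, v TEXT)")
    # Commit once per batch instead of once per row: each commit
    # forces a journal/log flush, so fewer commits = less log overhead.
    for start in range(0, len(rows), batch_size):
        with con:  # one transaction per batch
            con.executemany("INSERT INTO fact VALUES (?, ?)",
                            rows[start:start + batch_size])
    n = con.execute("SELECT COUNT(*) FROM fact").fetchone()[0]
    con.close()
    return n

rows = [(i, "x") for i in range(10_000)]
print(load(rows, batch_size=1_000))  # 10000
```

BCP's BATCHSIZE option and SSIS's commit settings expose the same trade-off: larger batches mean fewer log flushes, at the cost of more work to roll back on failure.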
200
DBA
Some of the biggest challenges in any large SQL environment are maintaining consistent configurations and meeting the pressure from the business for rapid server deployments. By default, SQL Server does not install with best practices for every environment. Learn best practices for system settings, file system layout and scheduling maintenance tasks. Understand what the best practices are for most SQL Server configurations, and how to automate your SQL Server builds in both physical and virtual worlds. Completely automating the build process has great benefits, but has tradeoffs; you will get lessons learned from building a private cloud at a Fortune 100 telecommunications company with 1000s of servers. You will also learn how to use these same methods to ensure your own server build consistency, whether your SQL Servers are in the cloud or on-premises.
200
BI
Fans of BBC Radio 4's 'More or Less' will be familiar with the mythbusting theme, where we take some ridiculously wild news headlines and explore the data to understand what really happened. Join us for a fun newspaper-headline myth-bashing session using Power BI, inspired by myth busters such as Tim Harford and other headline exposers. We will look at data journalism in various media, and see how data visualisation and Power BI help us to understand it better.
300
DBA
This wizardry session will investigate SQL Server performance in Azure. We all know that the key to SQL Server performance is strong storage performance, in this session I will cast my magic wand over base Azure IOPs to create storage gold. I will make use of the philosophers' stone to provide reliable evidence of the alchemy produced live on stage. For a lucky few in the audience I'll be giving out Golden Snitches as prizes.
300
DBA
Operating a cloud-based application, from our largest internal Microsoft properties to small business apps and social apps at web scale, is one of the most challenging transitions that customers and cloud service vendors have to implement when moving from on-premises to the cloud. Troubleshooting, capacity planning, health analysis and alerting are just some of the traditional practices that require a different approach in this new environment, and new roles like DevOps are becoming more and more important in overall solution success. In this practical session we will walk through the learnings from CAT engagements we've used to reduce the "cost of the steel".