25 Jun

Customer 360 Powered by Zero2Hero now available on Azure Marketplace

With today’s fast-moving technology and abundance of data sources, gaining a complete view of your customer is increasingly challenging, and increasingly critical. That view includes campaign interactions, opportunities for marketing optimization, current engagement, and recommendations for the next best action.

To continuously drive business growth, financial services organizations are especially focused on innovation and speed-to-market in this area, as they look to overcome the added challenge of jointly implementing and integrating best-of-breed solutions to quickly gain that 360-degree view of the customer.

To address these needs in an accelerated way, Bardess is bringing together the technology of Cloudera, Qlik, and Trifacta, along with its own accelerators and industry expertise, to deliver rapid value to customers.

Customer 360 Powered by Zero2Hero, the first in a new series of integrated solutions coming to Azure Marketplace and AppSource as Consulting Services offers, is now available.

What is Customer 360 Powered by Zero2Hero?

By combining Cloudera’s modern platform for machine learning and analytics, Qlik’s powerful, agile business intelligence and analytics suite, Trifacta’s data preparation platform, and Bardess accelerators, organizations can uncover insights and easily build comprehensive views of their customers across multiple touch points and enterprise systems.

The solution offers a complete platform for Customer


25 Jun

Silicon development on Microsoft Azure

This week at the Design Automation Conference (DAC), we look forward to joining the conversation on “Why Cloud, Why Now” for silicon development workflows.

Cloud computing is enabling digital transformation across industries. Silicon, or semiconductors, is a foundational building block for the technology industry, and new opportunities are emerging in cloud computing for silicon development. The workflows for silicon development have always pushed the limits of compute, storage, and networking. Over time, the silicon development flow has been greatly expanded to handle the increasing size, density, and manufacturing complexity of the industry. This has pushed, and continues to push, the envelope for high-performance computing (HPC) and storage infrastructure.

Azure provides a globally available HPC platform that is secure, reliable, and scalable to meet the current and emerging infrastructure needs of the silicon design and development workflow based on EDA software.

Compute: Silicon development is compute- and memory-intensive. At times it utilizes thousands of cores and demands the ability to quickly move and manage massive data sets for design and collaboration. Azure customers can choose from a range of compute- and memory-optimized Linux and Windows VMs to run their workflows.

Storage: Azure Storage offers multiple


21 Jun

SSMS 17.8 is now available

This post is co-authored by Pam Lahoud, Senior Program Manager, SQL Server.

We are excited to announce the release of SQL Server Management Studio (SSMS) 17.8!

Download SSMS 17.8 and review the Release Notes to get started.

SSMS 17.8 provides support for almost all feature areas on SQL Server 2008 through the latest SQL Server 2017, which is now generally available.

In addition to enhancements and bug fixes, SSMS 17.8 comes with several new features:

Database Properties | FileGroups: exposes the “AUTOGROW_ALL_FILES” configuration option for filegroups.

SQL Editor: improved IntelliSense experience in Azure SQL DB when the user lacks master access.

Scripting: general performance improvements, especially over high-latency connections.

Bug fixes.

View the Release Notes for more information.

Database Properties | FileGroups:

In this release of SQL Server Management Studio, we have introduced UI and scripting support for the AUTOGROW_ALL_FILES database filegroup property. This property was introduced in SQL Server 2016 to replace trace flag 1117, but it was only settable via T-SQL script. Now you can set the property via a checkbox in the Database Properties -> Filegroups page:

You can also use the Script button to script out the change:
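The scripted change amounts to a one-line ALTER DATABASE statement. As a sketch, with a hypothetical database name:

```sql
-- Enable autogrow of all files in the PRIMARY filegroup
-- ([MyDatabase] is an illustrative name)
ALTER DATABASE [MyDatabase]
    MODIFY FILEGROUP [PRIMARY] AUTOGROW_ALL_FILES;
```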

This will


21 Jun

Resumable Online Index Create is in public preview for Azure SQL DB

We are delighted to announce that Resumable Online Index Create (ROIC) is now available for public preview in Azure SQL DB. The feature lets you pause an index create operation and resume it later from where it was paused or failed, rather than restarting the operation from the beginning. Additionally, the feature creates indexes using only a small amount of log space. You can use the new feature in the following scenarios:

Resume an index create operation after a failure, such as a database failover or running out of disk space. There is no need to restart the operation from the beginning, which can save a significant amount of time when creating indexes for large tables.

Pause an ongoing index create operation and resume it later. For example, you may need to temporarily free up system resources to execute a high-priority task, or you may have a single maintenance window that is too short to complete the operation for a large index. Instead of aborting the index create process, you can pause the operation and resume it later without losing prior progress.

Create large
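These scenarios map to T-SQL. A minimal sketch, assuming a hypothetical table and index name:

```sql
-- Create the index online and resumable (names are illustrative)
CREATE INDEX IX_Orders_CustomerId
    ON dbo.Orders (CustomerId)
    WITH (ONLINE = ON, RESUMABLE = ON, MAX_DURATION = 60 MINUTES);

-- Pause the in-flight operation to free up resources
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders PAUSE;

-- Resume later from where the operation left off
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders RESUME;
```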



21 Jun

Cost Reporting ARM APIs across subscriptions for EA customers

Azure enterprise customers today manage their subscriptions on the EA portal and use the EA hierarchy to group and report on usage and costs by subscription. Until today, the only APIs available for the enterprise hierarchy were the key-based APIs; this month we are releasing ARM-supported APIs for the enrollment hierarchy. This enables users with the required privileges to make API calls to individual nodes in the management hierarchy and get the most current cost and usage information.

The benefits of these APIs are an improved security posture, seamless onboarding to the cost APIs, and the continued investment in planned work on the ARM APIs, such as budgets. Departments today support rudimentary spending limits, but in the coming weeks we will support budgets, which were recently announced for subscriptions and resource groups, on EA hierarchy nodes as well. The ARM APIs also standardize the pattern and enable Azure AD-based authentication.

Hierarchy Updates

As part of this release the ARM API introduces a few new terms:

Enrollments in the ARM APIs are Billing Accounts.

Departments continue on as Departments.

Accounts in the ARM APIs are referred to as Enrollment Accounts.

This release of ARM APIs



21 Jun

Event trigger based data integration with Azure Data Factory

Event-driven architecture (EDA) is a common data integration pattern that involves the production, detection, consumption of, and reaction to events. Today, we are announcing support for event-based triggers in your Azure Data Factory (ADF) pipelines. Many data integration scenarios require data factory customers to trigger pipelines based on events. A typical event could be a file landing in, or getting deleted from, your Azure Storage account. Now you can simply create an event-based trigger in your data factory pipeline.

As soon as the file arrives in your storage location and the corresponding blob is created, the trigger runs your data factory pipeline. You can create an event-based trigger on blob creation, blob deletion, or both in your data factory pipelines.

With the “Blob path begins with” and “Blob path ends with” properties, you can tell us for which containers, folders, and blob names you wish to receive events. You can also use a wide variety of patterns for both properties. At least one of these properties is required.

Examples:

Blob path begins with (/containername/) – will receive events for any blob in the container.

Blob
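Under the hood, a trigger of this kind is represented in the data factory as a JSON resource. A sketch of the shape, where the trigger name, storage account scope, path patterns, and pipeline reference are all illustrative:

```json
{
  "name": "BlobCreatedTrigger",
  "properties": {
    "type": "BlobEventsTrigger",
    "typeProperties": {
      "blobPathBeginsWith": "/containername/",
      "blobPathEndsWith": ".csv",
      "scope": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>",
      "events": [ "Microsoft.Storage.BlobCreated" ]
    },
    "pipelines": [
      {
        "pipelineReference": {
          "referenceName": "CopyPipeline",
          "type": "PipelineReference"
        }
      }
    ]
  }
}
```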



20 Jun

The June release of SQL Operations Studio is now available

We are excited to announce that the June release of SQL Operations Studio is now available.

Download SQL Operations Studio and review the Release Notes to get started.

SQL Operations Studio is a data management tool that enables you to work with SQL Server, Azure SQL DB and SQL DW from Windows, macOS and Linux. To learn more, visit our GitHub.

SQL Operations Studio was announced for Public Preview on November 15th at Connect(), and this June release is the seventh major update since the announcement. If you missed it, the May release announcement can be viewed here.

The June public preview release is focused on improving our Extensibility experience with the release of new extensions as well as addressing top GitHub issues.

Highlights for this build include the following.

SQL Server Profiler for SQL Operations Studio Preview extension initial release

Azure SQL Data Warehouse extension

Edit Data Filtering and Sorting

SQL Server Agent for SQL Operations Studio Preview extension enhancements for Jobs and Job History views

Build your own SQL Ops Studio extension

Visual Studio Code Refresh

Fix GitHub Issues

For complete updates, refer to the Release Notes.

SQL Server Profiler for SQL Operations Studio Preview

The SQL
