This post is co-authored by Anusua Trivedi, Data Scientist, Microsoft; Patrick Buehler, Data Scientist, Microsoft; Dr. Sunil Gupta, Founder, Intelligent Retinal Imaging System (IRIS); and Jocelyn Desbiens, Researcher, IRIS.
Diabetic Retinopathy (DR) is the most common cause of blindness in the working population of the United States and Europe. The World Health Organization (WHO) predicts that the number of patients with diabetes will increase to 366 million in 2030. For patients with diabetes, early diagnosis and treatment have been shown to prevent visual loss and blindness. Automated grading of DR has potential benefits such as:
Increasing the efficiency, reproducibility, and coverage of screening programs.
Reducing barriers to access.
Improving patient outcomes by providing early detection and treatment.
To maximize the clinical utility of automated grading, an accurate algorithm to detect referable DR is needed.
Machine Learning on DR images
Machine Learning has been used in a variety of medical image classification tasks, including automated classification of DR. However, much of this work has focused on feature-extraction engineering, which involves computing image features specified by experts, resulting in algorithms built to detect specific lesions or to predict the presence of particular levels of DR severity. Deep Learning is a
The General Data Protection Regulation (GDPR) has been in effect since May 25, 2018. GDPR has significant implications for the use and management of your customers' personal data. Considerations of privacy, security, data management, and marketing practices are all top of mind. With nearly 160 new GDPR requirements, it's clear that cloud technology can help accelerate your path to GDPR compliance.
The path to compliance is by no means a simple journey, but by partnering with Microsoft you will have the right set of resources, tools, and processes to help optimize your privacy and data management practices. Luckily, if you are currently looking for a modern database, you'll automatically inherit the benefit of physical and operational security in Azure SQL Database that meets regulatory standards. To help you secure your SQL database, Microsoft helps you protect your existing data, control who can access it, run regular preventive monitoring tests, and manage your security for the long run.
Join our speakers Joachim Hammer and Joanne Wong for a webinar on Azure security to learn about the intuitive, built-in features that accelerate your path to GDPR compliance. Specifically, we will demonstrate how customers use the results from our new vulnerability
Earlier this month, we announced the expansion of Azure Stack availability to 92 countries. Today, we are announcing the capability to back up files and application data using Microsoft Azure Backup Server. Azure Stack tenants can now take app-consistent backups of the data in their Azure Stack VMs, store them on the stack for operational recoveries, and send the data to Azure for long-term retention and offsite copy needs.
Key benefits
Application-consistent backups for SQL, SharePoint, and Exchange
App-consistent backups mean that Azure Backup ensures that, while a backup is being taken, memory is flushed and no I/Os are pending. This means that in addition to being recoverable, your applications are completely consistent at the time of backup. With Azure Backup Server, you can take app-consistent backups of your applications, ensuring that the data, when recovered, is valid and consistent.
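The quiesce-then-snapshot idea behind app-consistent backups can be illustrated with a minimal sketch. This is plain Python standing in for the actual VSS/Azure Backup mechanism, and the file names are hypothetical; the point is simply that buffered writes are forced to disk before the copy is taken.

```python
import os
import shutil
import tempfile

def quiesce_and_snapshot(path: str, snapshot_path: str) -> None:
    """Flush pending writes to disk, then copy the file as a snapshot.

    Loosely mimics an app-consistent backup: ensure no writes are
    buffered in memory before the copy, so the snapshot reflects a
    consistent on-disk state.
    """
    # Flush and fsync so the OS page cache is written out first.
    with open(path, "rb+") as f:
        f.flush()
        os.fsync(f.fileno())
    shutil.copyfile(path, snapshot_path)

# Demo: write some "application data", then take a consistent snapshot.
workdir = tempfile.mkdtemp()
data = os.path.join(workdir, "app.db")
snap = os.path.join(workdir, "app.db.snapshot")
with open(data, "w") as f:
    f.write("committed-record-1\n")
quiesce_and_snapshot(data, snap)
print(open(snap).read().strip())
```

A real backup agent would additionally ask each application (SQL, Exchange, and so on) to freeze its own writers, which is what distinguishes app-consistent from merely crash-consistent copies.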
Item Level Recovery for local recovery points
Quick operational recoveries can be triggered directly from the Microsoft Azure Backup Server running on your stack using Item Level Recovery. This means you can recover a single file from a backed-up 50 GB volume without having to recover the whole volume to a staging location.
Long-term retention and offsite copies
In this blog we’ll discuss the concept of Structured Streaming and how a data ingestion path can be built using Azure Databricks to enable the streaming of data in near-real-time. We’ll touch on some of the analysis capabilities which can be invoked directly from within Databricks utilising the Text Analytics API, and also discuss how Databricks can be connected directly into Power BI for further analysis and reporting. As a final step, we cover how streamed data can be sent from Databricks to Cosmos DB for persistent storage.
Structured Streaming is a stream processing engine that lets you express computation on streaming data (e.g. a Twitter feed) in much the same way batch computation is expressed on a static dataset. Computation is performed incrementally by the Spark SQL engine, which continuously updates the result as the streaming data flows in.
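The incremental-update model can be illustrated outside Spark with a small sketch: each micro-batch of events updates a running result rather than recomputing over all data seen so far. This is plain Python standing in for the Spark SQL engine, and the sample tweets are made up.

```python
from collections import Counter
from typing import Dict, Iterable

class RunningWordCount:
    """Toy 'streaming query': a word count updated incrementally as
    each micro-batch of tweets arrives, much like Structured Streaming
    updates its result table."""

    def __init__(self) -> None:
        self.result: Counter = Counter()

    def process_batch(self, batch: Iterable[str]) -> Dict[str, int]:
        # Only the new batch is processed; prior results are reused.
        for tweet in batch:
            self.result.update(tweet.lower().split())
        return dict(self.result)

query = RunningWordCount()
query.process_batch(["Azure Databricks is fast", "streaming with Azure"])
snapshot = query.process_batch(["Azure everywhere"])
print(snapshot["azure"])  # 3 — the count grew incrementally across batches
```

In real Structured Streaming the same effect is achieved declaratively: you write a query over an unbounded DataFrame and the engine maintains the incremental state for you.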
The architecture above illustrates a possible flow in which Databricks is used directly as an ingestion path to stream data from Twitter (via Event Hubs, acting as a buffer), call the Text Analytics API in Cognitive Services to apply intelligence to the data and
Today many healthcare computational workloads still exist at or near the point of care because of high latency, low bandwidth, or challenges with wireless power requirements and limited battery capabilities. Limitations on the number of connections per cell are also poised to stunt the future growth of IoT (Internet of Things).
4G LTE has typical latencies of 50-100 ms, bandwidth of less than 50 Mbps, and a maximum on the order of thousands of connections per cell. This has forced users to spend capital buying expensive, powerful hardware to be co-located at or close to the point of need, and to secure and maintain that hardware over its lifetime. In the case of IoT and wearables, these limitations have either prevented certain use cases or significantly limited capabilities.
5G has a latency of less than 1 ms, bandwidth of up to 10 Gbps, and supports up to a million connections per square kilometer! This is going to pave the way for many new innovations in healthcare. I discuss a few of these below.
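To put those numbers in perspective, a rough back-of-the-envelope comparison of moving a large medical image over each network. The 1 GB payload size is an illustrative assumption, and the model deliberately ignores protocol overhead, congestion, and retransmits.

```python
def transfer_seconds(size_bytes: float, bandwidth_bps: float, latency_s: float) -> float:
    """Approximate transfer time: one latency delay plus serialization
    time at the given bandwidth (a deliberately simplified model)."""
    return latency_s + size_bytes * 8 / bandwidth_bps

GB = 1_000_000_000
# 4G LTE: ~75 ms latency, ~50 Mbps; 5G: <1 ms latency, up to 10 Gbps.
t_4g = transfer_seconds(1 * GB, 50e6, 0.075)
t_5g = transfer_seconds(1 * GB, 10e9, 0.001)
print(f"4G: {t_4g:.1f}s, 5G: {t_5g:.1f}s")
```

Even under these idealized assumptions the gap is roughly two orders of magnitude, which is why interactive workloads (AR/VR, real-time imaging) become plausible from the cloud only at 5G-class latencies and bandwidths.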
Healthcare AR / VR from the Cloud
The Access Control Service, otherwise known as ACS, is officially being retired. ACS will remain available for existing customers until November 7, 2018. After this date, ACS will be shut down, causing all requests to the service to fail.
This blog post is a follow up to the initial announcement of the retirement of ACS service.
Who is affected by this change?
This retirement affects any customer who has created one or more ACS namespaces in their Azure subscriptions. For instance, this may include Service Bus customers that have created an ACS namespace indirectly when creating a Service Bus namespace. If your apps and services do not use ACS, then you have no action to take.
What action is required?
If you are using ACS, you will need a migration strategy. The correct migration path for you depends on how your existing apps and services use ACS. We have published migration guidance to assist. In most cases, migration will require code changes on your part.
If you are uncertain whether your apps and services are using ACS, you are not alone. After the retirement of ACS from the Azure portal in April 2018, you had to contact Azure support to
With today’s fast-moving technology and abundance of data sources, gaining a complete view of your customer is increasingly challenging and critical. Such a view includes campaign interactions, opportunities for marketing optimization, current engagement, and recommendations for the next best action.
To continuously drive business growth, financial services organizations are especially focused on innovation and speed-to-market in this area as they look to overcome the added challenge of jointly implementing and integrating best-of-breed solutions to quickly gain that 360-degree view of the customer.
To address these needs in an accelerated way, Bardess is bringing together the technology of Cloudera, Qlik, and Trifacta, along with their own accelerators and industry expertise, to deliver rapid value to customers.
What is Customer 360 Powered by Zero2Hero?
By combining Cloudera’s modern platform for machine learning and analytics, Qlik’s powerful, agile business intelligence and analytics suite, Trifacta’s data preparation platform, and Bardess accelerators, organizations can uncover insights and easily build comprehensive views of their customers across multiple touch points and enterprise systems.
The solution offers a complete platform for Customer
This week at the Design Automation Conference (DAC), we look forward to joining the conversation on “Why Cloud, Why Now,” for silicon development workflows.
Cloud computing is enabling digital transformation across industries. Semiconductors are a foundational building block for the technology industry, and new opportunities are emerging in cloud computing for silicon development. The workflows for silicon development have always pushed the limits of compute, storage, and networking. Over time, the silicon development flow has been greatly expanded to handle the increasing size, density, and manufacturing complexity of the industry. This has pushed, and continues to push, the envelope for high performance computing (HPC) and storage infrastructure.
Azure provides a globally available, high performance computing (HPC) platform that is secure, reliable, and scalable to meet the current and emerging infrastructure needs of EDA-based silicon design and development workflows.
Compute: Silicon development is compute- and memory-intensive. At times it uses thousands of cores and demands the ability to quickly move and manage massive data sets for design and collaboration. Azure customers can choose from a range of compute- and memory-optimized Linux and Windows VMs to run their workflows.
Storage: Azure Storage offers multiple
This post is co-authored by Pam Lahoud, Senior Program Manager, SQL Server.
We are excited to announce the release of SQL Server Management Studio (SSMS) 17.8!
SSMS 17.8 provides support for almost all feature areas on SQL Server 2008 through the latest SQL Server 2017, which is now generally available.
In addition to enhancements and bug fixes, SSMS 17.8 comes with several new features:
Database Properties | FileGroups: This improvement exposes the “AUTOGROW_ALL_FILES” configuration option for filegroups.
SQL Editor: Improved experience with IntelliSense in Azure SQL DB when the user lacks master access.
Scripting: General performance improvements, especially over high-latency connections.
Bug fixes
View the Release Notes for more information.
Database Properties | FileGroups:
In this release of SQL Server Management Studio, we have introduced UI and scripting support for the AUTOGROW_ALL_FILES database filegroup property. This property was introduced in SQL Server 2016 to replace trace flag 1117, but it was previously settable only via T-SQL script. Now you can set the property via a checkbox on the Database Properties -> Filegroups page:
You can also use the Script button to script out the change:
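Outside SSMS, the same setting can be toggled with a one-line ALTER DATABASE statement. The sketch below just assembles that T-SQL in Python so the syntax is visible; the database and filegroup names are hypothetical, and running the statement requires a real connection to a SQL Server 2016+ instance.

```python
def autogrow_all_files_sql(database: str, filegroup: str, enable: bool = True) -> str:
    """Build the ALTER DATABASE statement that toggles the
    AUTOGROW_ALL_FILES filegroup property (SQL Server 2016+,
    which replaced trace flag 1117)."""
    option = "AUTOGROW_ALL_FILES" if enable else "AUTOGROW_SINGLE_FILE"
    return f"ALTER DATABASE [{database}] MODIFY FILEGROUP [{filegroup}] {option};"

# Enable even autogrow across all files in the PRIMARY filegroup.
stmt = autogrow_all_files_sql("SalesDB", "PRIMARY")
print(stmt)
```

`AUTOGROW_SINGLE_FILE` is the default behavior the option reverts to when disabled, which is why the helper emits it for the `enable=False` case.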
We are delighted to announce that Resumable Online Index Create (ROIC) is now available for public preview in Azure SQL DB. The feature lets you pause an index create operation and resume it later from the point where it was paused or failed, rather than having to restart the operation from the beginning. Additionally, this feature creates indexes using only a small amount of log space. You can use the new feature in the following scenarios:
Resume an index create operation after a failure, such as a database failover or running out of disk space, without restarting the operation from the beginning. This can save a significant amount of time when creating indexes for large tables.
Pause an ongoing index create operation and resume it later. For example, you may need to temporarily free up system resources to execute a high-priority task, or you may have a single maintenance window that is too short to complete the operation for a large index. Instead of aborting the index create process, you can pause it and resume it later without losing prior progress.
Create large
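The T-SQL surface for this feature pairs a RESUMABLE option on CREATE INDEX with ALTER INDEX PAUSE/RESUME. The Python sketch below only assembles those statements for illustration (the table, column, and index names are hypothetical); executing them requires a connection to an Azure SQL DB instance with the preview enabled.

```python
def create_index_resumable(index: str, table: str, column: str) -> str:
    # ONLINE = ON is required for a resumable index build.
    return (f"CREATE INDEX [{index}] ON [{table}] ([{column}]) "
            f"WITH (ONLINE = ON, RESUMABLE = ON);")

def pause_index(index: str, table: str) -> str:
    return f"ALTER INDEX [{index}] ON [{table}] PAUSE;"

def resume_index(index: str, table: str) -> str:
    # Resumes from the saved progress, not from the beginning.
    return f"ALTER INDEX [{index}] ON [{table}] RESUME;"

print(create_index_resumable("IX_Orders_Date", "Orders", "OrderDate"))
print(pause_index("IX_Orders_Date", "Orders"))
print(resume_index("IX_Orders_Date", "Orders"))
```

If the create operation fails mid-way (for example, on failover), issuing the same RESUME statement picks the build up from its last checkpoint, which is the behavior the scenarios above rely on.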