Azure Cosmos DB is Microsoft’s globally distributed, horizontally partitioned, multi-model database service. The service is designed to allow customers to elastically and independently scale throughput and storage across any number of geographical regions. Azure Cosmos DB offers guaranteed low latency at the 99th percentile, 99.999% high availability, predictable throughput, and multiple well-defined consistency models. Azure Cosmos DB is the first and only globally distributed database service in the industry today to offer comprehensive Service Level Agreements (SLAs) encompassing the four dimensions of global distribution that our customers care about most: throughput, latency at the 99th percentile, availability, and consistency. We have carefully designed and engineered Azure Cosmos DB as a cloud service with multi-tenancy, horizontal scalability, and global distribution in mind.
We have just rolled out a few long-awaited changes and we wanted to share them with you:
The entry point for unlimited collections/containers is now 60% cheaper. In February, we lowered the entry point for unlimited containers, making them 75% cheaper. We continue to make improvements to our service, and today we are pleased to announce that unlimited containers now have an entry point that is 60% cheaper than before. Instead of provisioning 2,500 RU/sec as a minimum, you can now
We have had tons of interest in our VMware virtualization on Azure offering. This includes questions about what we are offering and how we will provide an enterprise grade solution. Here are some of the details on the preview.
To enable this solution, we are working with multiple VMware Cloud Provider Program partners and running on existing VMware-certified hardware. For example, our preview hardware will use a FlexPod bare-metal configuration with NetApp storage. This hosted solution is similar to Azure’s bare-metal SAP HANA Large Instances solution that we launched last year. With this approach, we will enable you to use the same industry-leading VMware software and services that you currently use in your on-premises datacenters, but running on Azure infrastructure, with L3 network connectivity from existing applications to Azure-native services such as Azure Active Directory, Azure Cosmos DB, and Azure Functions.
We are facilitating discussions with VMware and the VCPP partners to ensure you have a great solution and a great support experience when we make this offering generally available next year. More details from VMware on this can be found here. We will share more information on GA plans and partners in the coming months. If you’d like
Source: https://powerbi.microsoft.com/en-us/blog/announcing-the-reddit-solution-template/
Today, we are excited to announce a new suite of Power BI solution templates for brand management and targeting on Reddit through a third-party API relationship with SocialGist. These templates complement existing brand-oriented
Two new Azure regions are now in preview in France: France Central in Paris, and France South in Marseille. These regions are part of Azure’s global portfolio of announced regions in 42 locations around the world. Availability Zones in France Central can be paired with the geographically separated France South region for regional disaster recovery while maintaining data residency requirements. This past week also saw new capabilities added to help manage cost on Azure. With Azure Cost Management, Azure is the only platform that offers an end-to-end cloud cost management and optimization solution to help customers make the most of their cloud investment across multiple clouds. Cost Management is free to all customers to manage their Azure spend.
Microsoft Azure preview with Azure Availability Zones now open in France – The preview of Microsoft Azure in France is open today to all customers, partners and ISVs worldwide giving them the opportunity to deploy services and test workloads in these latest Azure regions. This is an important step towards offering the Azure cloud platform from our datacenters in France.
Cloud storage now more affordable: Announcing general availability of Azure Archive Storage – Learn how to reduce your storage costs by storing
Achieving compliance with the General Data Protection Regulation (GDPR), the new data privacy law from the European Union (EU), is not a one-time activity but an ongoing process. When the GDPR goes into effect on May 25, 2018, individuals will have greater control over their personal data. Additionally, the GDPR imposes new obligations on organizations that collect, handle, or analyze personal data. Implementing the right processes and organizational changes to comply with the GDPR will not be an easy task, but Microsoft is here to help. With 10 chapters, 99 articles, and 160 requirements, the GDPR is a complex law, and implementing it will be a challenge, so Microsoft has created a highly detailed guide.
Our colleagues from Microsoft France recently published a detailed implementation guide, GDPR – Get organized and implement the right processes, available in both English and French. The guide provides customers with a methodology for creating and executing a GDPR compliance program in their organization. It describes the necessary steps for achieving GDPR compliance through a plan, do, check, act (PDCA) approach using Microsoft Cloud services such as Azure, as shown in the diagram below.
Figure 1: Consolidated view of the main GDPR
Azure HDInsight is a fully-managed cloud service that makes it easy, fast, and cost-effective to process massive amounts of data. Use the most popular open-source frameworks such as Hadoop, Spark, Hive, LLAP, Kafka, Storm, R, and more. Azure HDInsight enables a broad range of scenarios such as ETL, data warehousing, machine learning, and IoT.
By default, when you provision an HDInsight cluster, you are required to create a local admin user and a local SSH user that have full access to the cluster. The local admin user can access all the files, folders, tables, columns, and so on. With a single local user, there is no role-based access control. However, as enterprise customers move to the cloud, they must meet strict security requirements for authentication, authorization, auditing, and governance. This is especially important when larger or multiple teams share the same cluster, and admins don’t want to create individual clusters for individual users. When we talked to customers, we received three main requests as part of enabling cluster access for multiple users:
As a data scientist, I want to use my Active Directory domain credentials to run queries on the cluster.
As a cluster admin, I want to configure
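The role-based access idea described above can be illustrated with a minimal, purely conceptual sketch (the users, roles, and tables here are hypothetical; a real HDInsight cluster enforces this through Azure Active Directory integration, not code like this):

```python
# Conceptual sketch of role-based access control: map domain users to roles,
# and roles to the tables they may query.
ROLE_TABLES = {
    "data_scientist": {"sales", "clickstream"},
    "admin": {"sales", "clickstream", "audit_log"},
}
USER_ROLES = {
    "alice@contoso.com": "data_scientist",
    "bob@contoso.com": "admin",
}

def can_query(user: str, table: str) -> bool:
    """Return True if the user's role grants access to the table."""
    role = USER_ROLES.get(user)
    return role is not None and table in ROLE_TABLES.get(role, set())

print(can_query("alice@contoso.com", "sales"))      # True
print(can_query("alice@contoso.com", "audit_log"))  # False
```

The point of the sketch is simply that access decisions hinge on the identity of the individual domain user, not on a single shared local account.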
Today, we are really happy to announce that we are reducing the prices for Azure HDInsight service and making several awesome capabilities generally available.
Launched in 2013, Azure HDInsight is a fully-managed, full spectrum, open-source analytics cloud service by Microsoft that makes it easy, fast, and cost-effective to process massive amounts of data. You can use the most popular open-source engines such as Hadoop, Spark, Hive, LLAP, Kafka, Storm, HBase, R and install more open source frameworks from the OSS ecosystem.
Amazing value for our customers
Customers ranging from startups to enterprises are using Azure HDInsight for their mission-critical applications. The service enables a broad range of scenarios in manufacturing, retail, education, nonprofit, government, healthcare, media, banking, telecommunications, insurance, and many more industries, with use cases ranging from ETL to data warehousing and from machine learning to IoT. Many Fortune 500 customers are running their big data pipelines on Azure HDInsight:
AccuWeather is using this technology to gain real-time intelligence into weather and business patterns. Handling 17 billion requests for data each day, AccuWeather is helping 1.5 billion people safeguard and improve their lives and businesses.
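To put the AccuWeather figure in perspective, a quick back-of-envelope conversion turns the daily total into an average rate:

```python
# 17 billion requests per day, expressed as an average per-second rate.
requests_per_day = 17_000_000_000
seconds_per_day = 24 * 60 * 60  # 86,400
avg_requests_per_second = requests_per_day / seconds_per_day
print(round(avg_requests_per_second))  # roughly 197,000 requests/second on average
```

Peak load would of course be higher than this average, but it conveys the sustained scale involved.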
Cornell Lab of Ornithology improved Machine Learning Workflow with Azure
Apache Kafka on Azure HDInsight was added last year as a preview service to help enterprises create real-time big data pipelines. Since then, large companies such as Toyota, Adobe, Bing Ads, and GE have been using this service in production to process over a million events per second to power scenarios for connected cars, fraud detection, clickstream analysis, and log analytics. HDInsight has worked very closely with these customers to understand the challenges of running a robust, real-time production pipeline at enterprise scale. Using our learnings, we have implemented key features in the managed Kafka service on HDInsight, which is now generally available.
A fully managed Kafka service for the enterprise use case
Running big data streaming pipelines is hard. Doing so with open-source technologies for the enterprise is even harder. Apache Kafka, a key open-source technology, has emerged as the de facto technology for ingesting large streaming events in a scalable, low-latency, and low-cost fashion. Enterprises want to leverage this technology; however, there are many challenges with installing, managing, and maintaining a streaming pipeline. Open-source bits lack support, and in-house talent needs to be well versed in these technologies to ensure the highest levels of
I am excited to announce the general availability of HDInsight Integration with Azure Log Analytics.
Azure HDInsight is a fully managed cloud service for customers to do analytics at scale using the most popular open-source engines such as Hadoop, Hive/LLAP, Presto, Spark, Kafka, Storm, HBase, and more.
Thousands of our customers run their big data analytical applications on HDInsight at global scale. The ability to monitor this infrastructure, detect failures quickly, and take prompt remedial action is key to ensuring a better customer experience.
Log Analytics is part of Microsoft Azure’s overall monitoring solution. It helps you monitor cloud and on-premises environments to maintain availability and performance.
Our integration with Log Analytics will make it easier for our customers to operate their big data production workloads in a more effective and simple manner.
Monitor & debug full spectrum of big data open source engines at global scale
Typical big data pipelines utilize multiple open-source engines: Kafka for ingestion, Spark Streaming or Storm for stream processing, Hive and Spark for ETL, and Interactive Query (LLAP) for blazing-fast querying of big data.
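As an illustration of what the stream-processing stage of such a pipeline does, here is a minimal tumbling-window event count in plain Python. This is a sketch of the concept only; in a real pipeline this role would be played by Spark Streaming or Storm reading from a Kafka topic, and the event data here is invented:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds):
    """Count events per fixed (tumbling) time window.

    `events` is an iterable of (timestamp_seconds, key) pairs, e.g. page
    views keyed by page name. Each event is assigned to the window that
    starts at the largest multiple of `window_seconds` not exceeding its
    timestamp.
    """
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_seconds) * window_seconds
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(0, "home"), (3, "home"), (7, "cart"), (12, "home")]
print(tumbling_window_counts(events, 5))
# {(0, 'home'): 2, (5, 'cart'): 1, (10, 'home'): 1}
```

The aggregation itself is simple; the hard parts that engines like Spark Streaming and Storm solve are doing this continuously, at scale, and with fault tolerance.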
Additionally, these pipelines may be running in different datacenters across the globe.
With new HDInsight monitoring
Self-service customization for speech recognition
Automatic speech recognition (ASR) is an important audio analysis feature in Video Indexer. Speech recognition is artificial intelligence at its best, mimicking the human cognitive ability to extract words from audio. In this blog post, we will learn how to customize ASR in Video Indexer to better fit specialized needs.
Before we get into technical details, let’s take inspiration from a situation we have all experienced. Try to recall your first days on a job. You can probably remember feeling flooded with new words, product names, cryptic acronyms, and the ways they were used. After some time, however, you came to understand all these new words. You adapted yourself to the vocabulary.
ASR systems are great, but when it comes to recognizing a specialized vocabulary, ASR systems are just like humans. They need to adapt. Video Indexer now supports a customization layer for speech recognition, which allows you to teach the ASR engine new words, acronyms, and how they are used in your business context.
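Conceptually, such a customization layer is fed a plain-text adaptation corpus: domain terms, acronyms, and example sentences showing how they are used in context. Here is a small, hypothetical helper for assembling that kind of corpus; the function name, input format, and terms are illustrative, not the actual Video Indexer API:

```python
def build_adaptation_corpus(terms, example_sentences):
    """Assemble a plain-text adaptation corpus for ASR customization.

    Each domain term and each example sentence goes on its own line,
    with internal whitespace normalized and duplicates removed. The
    resulting text is what a speech-customization service would learn
    new vocabulary from.
    """
    lines = []
    seen = set()
    for item in list(terms) + list(example_sentences):
        line = " ".join(item.split())  # collapse runs of whitespace
        if line and line not in seen:
            seen.add(line)
            lines.append(line)
    return "\n".join(lines)

corpus = build_adaptation_corpus(
    terms=["HDInsight", "Cosmos DB", "SLA"],
    example_sentences=["our SLA covers latency at the 99th percentile"],
)
print(corpus)
```

The example sentences matter as much as the terms themselves: they give the engine the surrounding context in which the new words appear, which is exactly how it learns "how they are used in your business context."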
How does Automatic Speech Recognition work? Why is customization needed?