Azure Databricks provides a fast, easy, and collaborative Apache Spark-based analytics platform to accelerate and simplify the process of building Big Data and AI solutions that drive the business forward, all backed by industry leading SLAs.
I am excited to announce the availability of a set of new features and regions, which enable our customers to accelerate their AI journey with Azure Databricks.
RStudio integration generally available with Azure Databricks
Today, we are announcing the ability to use RStudio with Azure Databricks. Customers can now analyze data with RStudio while taking advantage of the scale and flexibility of Azure Databricks.
RStudio offers a rich IDE that is very popular with data scientists in the R community. With this integration, RStudio runs directly inside Azure Databricks, so data scientists can keep using the familiar and powerful RStudio IDE while gaining the ability to build their solutions at unprecedented scale. Azure Databricks provides the flexibility to start with small jobs and automatically scale up to production workloads in the same environment.
Setting up RStudio in Azure Databricks is simple and fast. Learn how to get started today.
Azure Databricks available in Australia and UK
We are excited
What’s an AMA session?
We’ll have folks from across the Azure Backup Engineering team available to answer any questions you have. You can ask us anything about our products, services or even our team!
Why are you doing an AMA?
We like reaching out and learning from our customers and the community. We had great conversations in the past when we did an AMA in 2016 and 2015. We want to know how you use Azure and Azure Backup and how your experience has been. Your questions provide insights into how we can make the service better.
How do I ask questions on Twitter?
You can ask us your questions by including “#AzureBackupAMA” in your tweet. Your question can span multiple tweets if you reply to the first tweet you post with this hashtag. You can also directly message @AzureBackup or @AzureSupport if you want to keep your questions private. You can start posting your questions one day before the scheduled time of the AMA, but we will start
Data Integration solutions can be complex with many moving parts involving complex data factories with multiple pipelines. Monitoring provides data to ensure that your data factory pipelines stay up and running in a healthy state. It also helps you to stave off potential problems or troubleshoot past ones. In addition, you can use monitoring data to gain deep insights about your application. This knowledge can help you to improve application performance or maintainability, or automate actions that would otherwise require manual intervention.
Azure Data Factory (ADF) integration with Azure Monitor allows you to route your data factory metrics to the Operations Management Suite (OMS). Now you can monitor the health of your data factory pipelines using the ‘Azure Data Factory Analytics’ OMS pack available in the Azure Marketplace.
The Azure Data Factory OMS pack provides a summary of the overall health of your data factory, with options to drill into details and troubleshoot unexpected behavior patterns. Rich, out-of-the-box views give you insights into key processing, including:
- At-a-glance summary of data factory pipeline, activity, and trigger runs
- Ability to drill into data factory activity runs by type
- Summary of data factory top pipeline,
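These signals can also be pulled programmatically from Azure Monitor's REST API. The sketch below only composes the metrics request URL for a data factory; the metric names and API version shown are illustrative assumptions, not an exact recipe, and an actual call would still need an Azure AD bearer token.

```python
# Sketch: composing an Azure Monitor REST request for ADF pipeline-run metrics.
# Metric names and api-version below are assumptions for illustration.

def metrics_url(subscription_id, resource_group, factory_name,
                metric_names=("PipelineSucceededRuns", "PipelineFailedRuns"),
                api_version="2018-01-01"):
    """Compose an Azure Monitor metrics URL for a data factory resource."""
    resource_id = (
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        f"/providers/Microsoft.DataFactory/factories/{factory_name}"
    )
    return (
        f"https://management.azure.com{resource_id}"
        f"/providers/microsoft.insights/metrics"
        f"?metricnames={','.join(metric_names)}&api-version={api_version}"
    )

print(metrics_url("my-sub-id", "my-rg", "my-adf"))
```

In practice you would issue a GET against this URL with an `Authorization: Bearer …` header and aggregate the returned time series, but the URL shape above is the part that maps directly onto the views listed here.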
Source: https://blogs.msdn.microsoft.com/sql_server_team/tempdb-files-and-trace-flags-and-updates-oh-my/ TL;DR – Update to the latest CU; create multiple tempdb files; if you’re on SQL 2014 or earlier, enable TF 1117 and 1118; if you’re on SQL 2016, enable TF 3427.
Source: https://powerbi.microsoft.com/en-us/blog/on-premises-data-gateway-june-update-is-now-available/ We are happy to announce that we have just released the June update for the On-premises data gateway. This month’s Gateway update includes an updated version of the Mashup Engine, which…
On Wednesday, June 27, 2018, we announced the general availability of Azure IoT Edge. This release adds tons of new features for those already using public preview bits. Customers who have never used Azure IoT Edge can start with the Linux or Windows quickstarts. Those who have started projects on preview bits should upgrade to the latest bits and integrate breaking changes. Details on both of these processes are below.
Upgrade to the latest bits
Uninstall preview bits
Use iotedgectl to uninstall the preview bits from your Edge device by running the following command. You can skip this step if you are installing GA bits on a device or VM that has never run preview bits.
iotedgectl uninstall
Delete preview runtime container images
Use “docker rmi” to remove the container images for preview versions of Edge Agent and Edge Hub from your Edge device. You can skip this step if you are installing GA bits on a device or VM that has never run preview bits.
Remove references to preview container images in deployments
The IoT Edge Security Daemon includes functionality to allow the user to specify which versions of the Edge Agent and Edge Hub are used by
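A deployment manifest is a JSON document, so updating it amounts to rewriting the image reference strings it contains. The toy sketch below shows the general idea; the manifest layout and image names used here are illustrative assumptions, not the exact IoT Edge schema.

```python
# Toy sketch: replacing preview container image references in a deployment
# manifest with GA ones. The manifest structure and image tags below are
# hypothetical, for illustration only.

import json

def upgrade_images(manifest, replacements):
    """Return a copy of the manifest with image reference strings swapped."""
    text = json.dumps(manifest)
    for old_image, new_image in replacements.items():
        text = text.replace(old_image, new_image)
    return json.loads(text)

# Usage with a made-up manifest fragment:
manifest = {"modules": {"edgeAgent": {"settings": {"image": "example/edge-agent:preview"}}}}
upgraded = upgrade_images(manifest, {"example/edge-agent:preview": "example/edge-agent:1.0"})
print(upgraded["modules"]["edgeAgent"]["settings"]["image"])
```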
Lambda architecture is a popular pattern in building Big Data pipelines. It is designed to handle massive quantities of data by taking advantage of both a batch layer (also called cold layer) and a stream-processing layer (also called hot or speed layer).
The following are some of the reasons that have led to the popularity and success of the lambda architecture, particularly in big data processing pipelines.
Speed and business challenges
The ability to process data at high speed in a streaming context is necessary for operational needs such as transaction processing and real-time reporting. Examples include fault and fraud detection, connected or smart cars, factories, hospitals, and cities, sentiment analysis, inventory control, and network and security monitoring.
Batch processing, which typically involves massive amounts of data along with related correlation and aggregation, is important for business reporting: understanding how the business is performing, what the trends are, and what corrective or additive measures can be taken to improve the business or the customer experience.
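To make the two layers concrete, here is a minimal, in-memory toy sketch (not a production design): a batch (cold) layer that periodically recomputes a view from the master dataset, a speed (hot) layer that tracks events not yet absorbed by a batch run, and a serving step that merges the two at query time.

```python
from collections import defaultdict

class LambdaPipeline:
    """Toy lambda architecture: batch view + speed view, merged on query."""

    def __init__(self):
        self.master = []                     # immutable master dataset (cold)
        self.batch_view = {}                 # precomputed batch view
        self.speed_view = defaultdict(int)   # incremental real-time view

    def ingest(self, key, value):
        # Every event lands in the master dataset and the speed layer.
        self.master.append((key, value))
        self.speed_view[key] += value

    def run_batch(self):
        # Recompute the batch view from the full master dataset,
        # then reset the speed layer (its events are now absorbed).
        view = defaultdict(int)
        for key, value in self.master:
            view[key] += value
        self.batch_view = dict(view)
        self.speed_view.clear()

    def query(self, key):
        # Serving layer: merge batch and speed views.
        return self.batch_view.get(key, 0) + self.speed_view[key]
```

The sketch glosses over the hard parts a real pipeline must handle (events arriving mid-batch, fault tolerance, distributed state), but it shows why the pattern works: the batch layer gives accurate, recomputable totals while the speed layer keeps queries current between batch runs.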
One of the triggers that led to the very existence of the lambda architecture was the desire to make the most of the available technology and tool set. Existing batch processing systems, such as data warehouses, data lakes, Spark/Hadoop, and more, could
Today we are excited to announce the public preview of static website hosting for Azure Storage! The feature set is available in all public cloud regions with support in government and sovereign clouds coming soon.
How it works
When you enable static websites on your storage account, a new web service endpoint is created of the form
The web service endpoint always allows anonymous read access, returns formatted HTML pages in response to service errors, and allows only object read operations. The web service endpoint returns the index document in the requested directory for both the root and all subdirectories. When the
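The exact endpoint format is cut off above; as an assumption for illustration, static website endpoints follow a pattern along the lines sketched below, where the zone segment (the "z22" placeholder here) is assigned per storage account rather than chosen by you.

```python
# Sketch of the static-website endpoint form for a storage account.
# The zone segment ("z22") is only an illustrative placeholder.

def web_endpoint(account_name, zone="z22"):
    """Compose an example static-website endpoint URL."""
    return f"https://{account_name}.{zone}.web.core.windows.net/"

print(web_endpoint("mystorageaccount"))
```

Anonymous GET requests against this endpoint then resolve to the index document, per the behavior described above.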
The digital experience (DX) era is here, and AI is one of the primary technologies to fuel productivity and innovation in the retail and consumer goods industry. Brands that take a wait-and-see approach may find themselves quickly outpaced by their competitors. And by competitors, I mean not just born-in-the-cloud e-commerce players, but forward-thinking omnichannel retailers who focus on winning customers and evolving retail at scale.
Within the spectrum of digital transformation, AI is not a new technology, but it is now moving from its research roots into the mass market. This shift is made possible by the growth of cloud computing, the availability of big data, and years of improved algorithms developed by researchers. At its core, AI gives computers decision-making capabilities to solve problems in a more natural and responsive way than today's pre-programmed computer routines. AI will be an imperative for optimization, automation, scale, and, most importantly, (gulp) survival.
Cloud computing and big data are accelerating AI technology
According to Accenture, unlimited access to computing power and the growth in big data are creating the right environment for AI. Analyzing data requires massive compute and storage; the cloud provides an efficient way to
This post was authored by Jason Haley, Microsoft Azure MVP.
Recently, I was at Boston Code Camp catching up with some old friends and looking to learn about containers or anything that could help me in my current project of migrating a microservices application to run in containers. I was speaking with one friend who had just presented a session on Polly, and he made a comment that got my attention. He said that one of the attendees at his session was under the impression that using the cloud would make his application inherently resilient and he would not need any of the features that Polly provides.
In case you are not familiar with Polly, you can use this library to easily add common patterns like Retry, Circuit Breaker, Timeout, Bulkhead Isolation, and Fallback to your code to make your system more resilient. Scott Hanselman recently wrote a blog post: Adding Resilience and Transient Fault handling to your .NET Core HttpClient with Polly, discussing how he was using Polly and HttpClient with ASP.NET Core 2.1.
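Polly itself is a .NET library, but the retry-with-backoff idea it packages is language-agnostic. Here is a minimal Python sketch of the Retry pattern; the exception types, attempt count, and delays are illustrative assumptions, not Polly's API.

```python
import time

def retry(operation, attempts=3, base_delay=0.1,
          retriable=(TimeoutError, ConnectionError)):
    """Call operation(), retrying transient failures with exponential backoff."""
    for attempt in range(attempts):
        try:
            return operation()
        except retriable:
            if attempt == attempts - 1:
                raise                              # out of retries: surface the fault
            time.sleep(base_delay * 2 ** attempt)  # back off before the next try

# Usage: wrap a flaky call that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("transient fault")
    return "ok"

print(retry(flaky))  # succeeds on the third attempt
```

Polly's other patterns (Circuit Breaker, Timeout, Bulkhead Isolation, Fallback) address failure modes this simple retry loop does not, such as repeatedly hammering a service that is down.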
What that attendee may have been referring to is that most Azure services and client SDKs have features to perform retries for you (which can