Category Archives: Serverless

31 Oct

Deliver the right events to the right places with Event Domains

As we continue to see our community grow around Event Grid, many of you have started to explore the boundaries of complexity and scale that can be achieved. We’ve been blown away by some of the system architectures we have seen built on top of the platform.

To make your life easier in these scenarios, we dedicated much of the last few months to building two features we are very excited to announce today: advanced filters and Event Domains, a managed platform for publishing events to all of your customers. In addition, we’ve been working to improve the developer experience and make Event Grid available in Azure Government regions.

Event Domains

Become your own event source for Event Grid with Event Domains, managing the flow of custom events to your different business organizations, customers, or applications. An Event Domain is essentially a management tool for large numbers of Event Grid Topics related to the same application: a top-level artifact that can contain thousands of topics. With a Domain, you get fine-grained authorization and authentication control over each topic via Azure Active Directory, which lets you easily decide which of your tenants or
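
To make this concrete, here is a minimal sketch of publishing through a domain endpoint, assuming the Microsoft.Azure.EventGrid client library; the domain hostname, key, and topic name are placeholders rather than values from the announcement, and key-based auth is shown only for brevity.

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.EventGrid;
using Microsoft.Azure.EventGrid.Models;

class DomainPublisher
{
    // Hypothetical domain endpoint and key; substitute your own values.
    private const string DomainHostname = "<your-domain>.westus2-1.eventgrid.azure.net";
    private const string DomainKey = "<domain-access-key>";

    public static async Task PublishAsync()
    {
        var client = new EventGridClient(new TopicCredentials(DomainKey));

        var events = new List<EventGridEvent>
        {
            new EventGridEvent
            {
                Id = Guid.NewGuid().ToString(),
                // The Topic property routes the event to a specific topic
                // within the domain, e.g. one topic per customer or tenant.
                Topic = "customer-contoso",
                Subject = "orders/created",
                EventType = "Contoso.Orders.OrderCreated",
                Data = new { orderId = 123 },
                EventTime = DateTime.UtcNow,
                DataVersion = "1.0"
            }
        };

        // A single call publishes to the domain endpoint; Event Grid fans the
        // events out to the topics named in each event's Topic property.
        await client.PublishEventsAsync(DomainHostname, events);
    }
}
```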


25 Oct

Two considerations for a serverless data streaming scenario

We recently published a blog on a fraud detection solution delivered to banking customers. At the core of the solution is the requirement to process a streaming pipeline of telemetry data completely, in real time, using a serverless architecture. Two technologies were evaluated for this requirement: Azure Stream Analytics and Azure Functions. This blog describes the evaluation process and the decision to use Azure Functions.

Scenario

A large bank wanted to build a solution to detect fraudulent transactions submitted through its mobile phone banking channel. The solution is built on a common big data pipeline pattern. High volumes of real-time data are ingested into a cloud service, where a series of data transformation and extraction activities occur. This results in the creation of a feature data set that feeds advanced analytics. For the bank, the pipeline had to be very fast and scalable: end-to-end evaluation of each transaction had to complete in less than two seconds.

Pipeline requirements include:

- Scalable and responsive to extreme bursts of ingested event activity: up to 4 million events and 8 million or more transactions daily.
- Events were ingested as complex JSON files, each containing from two to five individual bank transactions. Each JSON


18 Oct

Modernize your commerce platform with Rapid Commerce solution

Today’s consumers are using more devices and channels to interact with retailers than ever before. Seamless service across all channels is the expectation, not the exception. Digital-first brands are raising the bar for hyper-personalized experiences through their test-and-learn approach to rapid delivery of modern commerce capabilities. As we all know, traditional brick-and-mortar retailers are under significant pressure due to the high fixed costs of managing traditional store infrastructure, material decreases, and flat year-over-year comp-store revenue. A consequence of this pressure is the inability to innovate and create new, competitive user experiences. One answer to this problem is the introduction of a new service on the Azure Marketplace.

You say monoliths, I say microservices.

It’s estimated that more than half of retailers still operate their businesses on monolithic, on-premises commerce applications. Those monoliths inhibit the speed and flexibility needed to build hyper-personalized experiences, and they lack the agility to support new, differentiating business models. Other drawbacks include all-night deployments, six months from concept to go-live, 24 hours to deploy a one-line code change, and the need for significant rollout and readiness planning.

Unless retailers revamp their platforms, they will be challenged to keep up with new competitors who are operating


15 Oct

New metric in Azure Stream Analytics tracks latency of your streaming pipeline

Azure Stream Analytics is a fully managed service for real-time data processing. Stream Analytics jobs read data from input sources like Azure Event Hubs or IoT Hub. They can perform a variety of tasks, from simple ETL and archiving to complex event pattern detection and machine learning scoring. Jobs run 24/7, and while Azure Stream Analytics provides a 99.9 percent availability SLA, various external issues may impact a streaming pipeline and can have a significant business impact. For this reason, it is important to proactively monitor jobs, quickly identify root causes, and mitigate possible issues. In this blog, we will explain how to leverage the newly introduced output watermark delay metric to monitor mission-critical streaming jobs.

Challenges affecting streaming pipelines

What are some issues that can occur in your streaming pipeline? Here are a few examples:

- Data stops arriving or arrives with a delay due to network issues that prevent the data from reaching the cloud.
- The volume of incoming data increases significantly and the Stream Analytics job is not scaled appropriately to manage the increase.
- Logical errors are encountered, causing failures and preventing the job from making progress.
- Output destinations such as SQL or Event Hubs are not scaled properly and
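
As a rough illustration of the kind of monitoring the post describes, the sketch below polls a job’s watermark delay through the Azure Monitor metrics REST API. The resource ID is a placeholder, token acquisition is omitted, and the metric name OutputWatermarkDelaySeconds is an assumption, so treat this as a starting point rather than the post’s own code.

```csharp
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class WatermarkDelayMonitor
{
    // Hypothetical resource ID; substitute your subscription, resource group, and job name.
    private const string ResourceId =
        "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.StreamAnalytics/streamingjobs/<job-name>";

    public static async Task<string> GetWatermarkDelayAsync(string bearerToken)
    {
        // Azure Monitor metrics REST API; "OutputWatermarkDelaySeconds" is the
        // assumed metric name for the watermark delay described above.
        var url = "https://management.azure.com" + ResourceId +
                  "/providers/microsoft.insights/metrics" +
                  "?api-version=2018-01-01" +
                  "&metricnames=OutputWatermarkDelaySeconds" +
                  "&aggregation=Maximum";

        using (var http = new HttpClient())
        {
            http.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", bearerToken);

            var response = await http.GetAsync(url);
            response.EnsureSuccessStatusCode();

            // Returns a JSON time series; alert if the maximum delay exceeds
            // whatever threshold your latency requirements allow.
            return await response.Content.ReadAsStringAsync();
        }
    }
}
```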


11 Oct

Supercharge your Azure Stream Analytics queries with C# code

Azure Stream Analytics (ASA) is Microsoft’s fully managed real-time analytics offering for complex event processing. It enables customers to unlock valuable insights and gain a competitive advantage by harnessing the power of big data.

Our customers love the simple SQL-based query language that has been augmented with powerful temporal functions to analyze fast-moving event streams. The ASA query language natively supports complex geospatial functions, aggregation functions, and math functions. However, in many advanced scenarios, developers may want to reuse C# code and existing libraries instead of writing long queries for simple operations.

At Microsoft Ignite 2018, we announced a new feature that allows developers to extend the ASA query language with C# code. Currently, this capability is available for Stream Analytics jobs running on Azure IoT Edge (public preview). In many scenarios, it is more efficient to write C# code to perform some operations. In such cases, instead of being constrained by the SQL-like language, you can author a C# function and invoke it directly from the ASA query! Even better, you can use the ASA tools for Visual Studio to get a native C# authoring and debugging experience.
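
As a rough sketch of what such a function can look like (the namespace, class, and method here are hypothetical, not from the announcement), a plain static method is all that is needed:

```csharp
using System;

namespace FraudDetection.Udfs
{
    // A plain static method is enough for a C# user-defined function;
    // the ASA tooling surfaces it to the query under a udf-style alias.
    public class TransactionUdfs
    {
        // Example: normalize a free-form merchant name before aggregation,
        // something that would be verbose to express in the SQL-like language.
        public static string NormalizeMerchant(string merchantName)
        {
            if (string.IsNullOrWhiteSpace(merchantName))
            {
                return "UNKNOWN";
            }

            // Trim, collapse internal whitespace, and upper-case for grouping.
            var collapsed = string.Join(" ",
                merchantName.Split((char[])null, StringSplitOptions.RemoveEmptyEntries));
            return collapsed.ToUpperInvariant();
        }
    }
}
```

The query then references the method through the udf prefix that the ASA tools for Visual Studio wire up, so the heavy lifting stays in C# while the query remains a simple SELECT.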

Writing your own functions is most useful for scenarios like:

Building


08 Oct

A fast, serverless, big data pipeline powered by a single Azure Function

A single Azure function is all it took to fully implement an end-to-end, real-time, mission-critical data pipeline, and it was done with a serverless architecture. Serverless architectures simplify the building, deployment, and management of cloud-scale applications. Instead of worrying about data infrastructure concerns like server procurement, configuration, and management, a data engineer can focus on the tasks that ensure an end-to-end, highly functioning data pipeline.

This blog describes an Azure function and how it efficiently coordinated a data ingestion pipeline that processed over eight million transactions per day.

Scenario

A large bank wanted to build a solution to detect fraudulent transactions submitted through mobile phone banking applications. The solution requires a big data pipeline approach. High volumes of real-time data are ingested into a cloud service, where a series of data transformation and extraction activities occur. This results in the creation of a feature data set that feeds advanced analytics. For the bank, the pipeline had to be very fast and scalable: end-to-end evaluation of each transaction had to complete in less than two seconds.

Telemetry from the bank’s multiple application gateways streams in as embedded events in complex JSON files. The ingestion technology
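
The sketch below shows the general shape of such a single function, assuming Event Hubs as the ingestion service and the C# class-library Functions model with the Microsoft.Azure.EventHubs binding; the hub name, connection setting, and JSON property names are hypothetical, not the bank’s configuration.

```csharp
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.EventHubs;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json.Linq;

public static class FraudScoringFunction
{
    // Triggered in batches as telemetry lands on the hub.
    [FunctionName("FraudScoringFunction")]
    public static async Task Run(
        [EventHubTrigger("telemetry", Connection = "EventHubConnection")] EventData[] events,
        ILogger log)
    {
        foreach (var eventData in events)
        {
            var json = Encoding.UTF8.GetString(
                eventData.Body.Array, eventData.Body.Offset, eventData.Body.Count);

            // Each JSON payload embeds several bank transactions; the
            // "transactions" property name is a hypothetical example.
            var payload = JObject.Parse(json);
            foreach (var transaction in payload["transactions"])
            {
                // Transform, extract features, and score each transaction here,
                // keeping the end-to-end path under the two-second budget.
                log.LogInformation("Processing transaction {id}", transaction["id"]);
            }
        }

        await Task.CompletedTask;
    }
}
```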


24 Jul

Event Grid June updates: Dead lettering, retry policies, global availability, and more

Since our updates at //Build 2018, the Event Grid team’s primary focus has been delivering updates that make it easier for you to run your critical workloads on Event Grid. With that in mind, and always aiming for a better development experience, today we are announcing dead lettering of events to Blob Storage, configurable retry policies, availability in all public regions, Azure Container Registry as a publisher, SDK updates that include all these new features, and portal UX updates! Let’s dig into what all this means for you.

Dead Lettering

Dead lettering is a common pattern in eventing and messaging architectures that allows you to handle failed events in a specific way. An event delivery may fail because the endpoint receiving the event is continually down, authorization has changed, the event is malformed, or for any number of other reasons. That doesn’t mean the event isn’t important and should just be thrown away; events often carry critical business value. Even when they don’t, it’s useful to track failed events for post-mortems or telemetry.

Dead lettering is now built right into Azure Event Grid, sending those failed events to Blob Storage. Each Event Subscription can have dead
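
For illustration, a dead-letter destination and retry policy could be configured on a subscription roughly as follows, assuming the Microsoft.Azure.Management.EventGrid SDK; every resource ID, name, and value here is a placeholder, not the announced defaults.

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.Management.EventGrid;
using Microsoft.Azure.Management.EventGrid.Models;
using Microsoft.Rest;

class DeadLetterSetup
{
    public static async Task CreateSubscriptionAsync(ServiceClientCredentials credentials)
    {
        var client = new EventGridManagementClient(credentials)
        {
            SubscriptionId = "<subscription-id>"
        };

        var topicResourceId =
            "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.EventGrid/topics/<topic>";

        var subscription = new EventSubscription
        {
            Destination = new WebHookEventSubscriptionDestination
            {
                EndpointUrl = "https://contoso.example.com/api/events"
            },
            // Failed deliveries land in this blob container instead of being dropped.
            DeadLetterDestination = new StorageBlobDeadLetterDestination
            {
                ResourceId =
                    "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>",
                BlobContainerName = "deadletters"
            },
            // Configurable retry policy: cap delivery attempts and the event's time to live.
            RetryPolicy = new RetryPolicy
            {
                MaxDeliveryAttempts = 10,
                EventTimeToLiveInMinutes = 120
            }
        };

        await client.EventSubscriptions.CreateOrUpdateAsync(
            topicResourceId, "orders-subscription", subscription);
    }
}
```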


24 Jul

Improving the development experience worldwide with Event Grid



31 May

Receiving and handling HTTP requests anywhere with the Azure Relay

If you followed Microsoft’s coverage from the Build 2018 conference, you may have been as excited as we were about the new Visual Studio Live Share feature that allows instant, remote, peer-to-peer collaboration between Visual Studio users, no matter where they are. One developer could be sitting in a coffee shop and another on a plane with in-flight WiFi, and yet both can collaborate directly on code.

The “networking magic” that enables the Visual Studio team to offer this feature is the Azure Relay, which is a part of the messaging services family along with Azure Service Bus, Azure Event Hubs, and Azure Event Grid. The Relay is, indeed, the oldest of all Azure services, with the earliest public incubation having started exactly 12 years ago today, and it was amongst the handful of original services that launched with the Azure platform in January 2010.

Over the years, the Relay has learned to speak a fully documented open protocol that works with any WebSocket client stack and allows any such client to become a listener for inbound connections from other clients, without needing inbound firewall rules, public IP addresses, or DNS registrations. Since all inbound communication terminates inside the
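
For a sense of what such a listener looks like in practice, here is a minimal sketch using the Microsoft.Azure.Relay library and a Hybrid Connection that handles relayed HTTP requests; the namespace, connection name, and key are placeholders.

```csharp
using System;
using System.IO;
using System.Net;
using System.Threading.Tasks;
using Microsoft.Azure.Relay;

class RelayHttpListener
{
    public static async Task RunAsync()
    {
        var tokenProvider = TokenProvider.CreateSharedAccessSignatureTokenProvider(
            "RootManageSharedAccessKey", "<sas-key>");

        var listener = new HybridConnectionListener(
            new Uri("sb://<your-namespace>.servicebus.windows.net/<hybrid-connection>"),
            tokenProvider);

        // Handle HTTP requests relayed from the cloud endpoint; no inbound
        // firewall rules or public IP address are needed on this machine.
        listener.RequestHandler = context =>
        {
            context.Response.StatusCode = HttpStatusCode.OK;
            using (var writer = new StreamWriter(context.Response.OutputStream))
            {
                writer.WriteLine("Hello from behind the firewall!");
            }
            context.Response.Close();
        };

        await listener.OpenAsync();
        Console.WriteLine("Listening. Press ENTER to exit.");
        Console.ReadLine();
        await listener.CloseAsync();
    }
}
```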


18 May

Azure the cloud for all – highlights from Microsoft BUILD 2018

Last week, the Microsoft Build conference brought developers lots of innovation and was action-packed with in-depth sessions. During the event, my discussions in the halls ranged from containers to dev tools, IoT to Azure Cosmos DB, and of course, AI. The pace of innovation available to developers is amazing. And, in case there was simply too much for you to digest, I wanted to pull together some key highlights and top sessions to watch, starting with a great video playlist with highlights from the keynotes.

Empowering developers through the best tools

Build is for devs, and all innovation in our industry starts with code! So, let’s start with dev tools. Day one of Build marked the introduction of the .NET Core 2.1 release candidate. .NET Core 2.1 improves on previous releases with performance gains and many new features. Check out all the details in the release blog and this great session from Build showing what you can use today:

.NET Overview & Roadmap: In this session, Scott Hanselman and Scott Hunter talked about all things .NET, including new .NET Core 2.1 features made available at Build.

Scott Hanselman and Scott Hunter sharing new .NET Core 2.1 features.

With AI being top
