Every platform has limits: workstations and physical servers have resource boundaries, APIs may be rate-limited, and even the seemingly endless public cloud enforces limits that protect the platform from overuse or misuse. You can learn more about these limits in our documentation, “Azure subscription and service limits, quotas, and constraints.” When a scenario pushes a platform to its extreme, those limits become very real, and thought should be put into overcoming them.
The following post includes essential notes from my work with Mike Kiernan, Mayur Dhondekar, and Idan Shahar. It covers several iterations in which we tried to reach 10,000 virtual machines running on Microsoft Azure, and explores the pros and cons of the different implementations.
Load tests at cloud scale
Load and stress tests before moving a new version to production are critical on the one hand, but pose a real challenge for IT on the other: they require a considerable amount of resources to be available for only a short time, every release cycle. Purchased infrastructure doesn’t justify its cost over extended periods, making this a perfect use case for a public cloud platform where payment
This blog was co-authored by Sumeet Mittal, Senior Program Manager, Azure Networking.
Earlier this year in July, we announced the public preview for Virtual Network Service Endpoints and Firewall rules for both Azure Event Hubs and Azure Service Bus. Today, we’re excited to announce that we are making these capabilities generally available to our customers.
This feature adds to the security and control Azure customers have over their cloud environments. Now, traffic from your virtual network to your Azure Service Bus Premium namespaces and Standard and Dedicated Azure Event Hubs namespaces can be kept secure from public Internet access and completely private on the Azure backbone network.
Virtual Network Service Endpoints do this by extending your virtual network’s private address space and identity to the Azure services. Customers dealing with PII (financial services, insurance, etc.) or looking to further secure access to their cloud-visible resources will benefit the most from this feature. For more details on the inner workings of Virtual Network service endpoints, refer to the documentation.
Firewall rules further allow a specific IP address or a specified range of IP addresses to access the resources.
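Conceptually, an IP firewall rule is an allow-list check of the caller’s address against configured CIDR ranges. As a rough illustration only (not the service’s actual implementation, and the ranges below are invented), the evaluation can be sketched in Python:

```python
from ipaddress import ip_address, ip_network

# Hypothetical allow-list; in practice these rules are configured
# on the Service Bus or Event Hubs namespace.
ALLOWED_RANGES = [
    ip_network("203.0.113.0/24"),    # a specified range of IP addresses
    ip_network("198.51.100.17/32"),  # a single specific IP address
]

def is_allowed(client_ip: str) -> bool:
    """Return True if the client IP falls inside any allowed range."""
    addr = ip_address(client_ip)
    return any(addr in net for net in ALLOWED_RANGES)

print(is_allowed("203.0.113.42"))  # inside the /24 range
print(is_allowed("192.0.2.9"))     # not listed, so rejected
```

A single address is just a /32 range, which is why both rule shapes reduce to the same membership check.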
Virtual Network Service Endpoints and Firewall rules
Azure Functions provides a powerful programming model for accelerated development and serverless hosting of event-driven applications. Ever since we announced the general availability of the Azure Functions 2.0 runtime, support for Python has been one of our top requests. At Microsoft Connect() last week, we announced the public preview of Python support in Azure Functions. This post gives an overview of the newly introduced experiences and capabilities made available through this feature.
What’s in this release?
With this release, you can now develop your Functions using Python 3.6, based on the open-source Functions 2.0 runtime and publish them to a Consumption plan (pay-per-execution model) in Azure. Python is a great fit for data manipulation, machine learning, scripting, and automation scenarios. Building these solutions using serverless Azure Functions can take away the burden of managing the underlying infrastructure, so you can move fast and actually focus on the differentiating business logic of your applications. Keep reading to find more details about the newly announced features and dev experiences for Python Functions.
Powerful programming model
The programming model is designed to provide a seamless and familiar experience for Python developers, so you can import existing .py scripts and modules, and quickly start
Serverless and PaaS are all about unleashing developer productivity by reducing the management burden and allowing you to focus on what matters most, your application logic. That can’t come at the cost of security, though, and it needs to be easy to achieve best practices. Fortunately, we have a whole host of capabilities in the App Service and Azure Functions platform that dramatically reduce the burden of securing your apps.
Today, we’re announcing new security features that reduce the amount of code you need to work with identities and secrets under management. These include:
- Key Vault references for Application Settings (public preview)
- User-assigned managed identities (public preview)
- Managed identities for App Service on Linux/Web App for Containers (public preview)
- ClaimsPrincipal binding data for Azure Functions
- Support for Access-Control-Allow-Credentials in CORS config
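To make the first item concrete: a Key Vault reference replaces a plain secret value in an app setting, so the app reads the setting normally while the platform resolves it from the vault. The vault name, secret name, and version below are placeholders, not real values:

```
MySecretSetting = @Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/<secret-version>)
```

The application code never sees the vault URI; it simply reads `MySecretSetting` from its configuration as before.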
We’re also continuing to invest in Azure Security Center as a primary hub for security across your Azure resources, as it offers a fantastic way to catch and resolve configuration vulnerabilities, limit your exposure to threats, or detect attacks so you can respond to them. For example, you may think you’ve restricted all your apps to HTTPS-only, but Security Center will help you make absolutely sure. If
In the blog post “A fast, serverless, big data pipeline powered by a single Azure Function” we discussed a fraud detection solution delivered to a banking customer. This solution required complete processing of a streaming pipeline for telemetry data in real-time using a serverless architecture. This blog post describes the evaluation process and the decision to use Microsoft Azure Functions.
A large bank wanted to build a solution to detect fraudulent transactions submitted through its mobile banking channel. The solution is built on a common big data pipeline pattern, in which high volumes of real-time data are ingested into a cloud service and a series of data transformation and extraction activities occur. This results in the creation of a feature matrix and the use of advanced analytics. For the bank, the pipeline had to be very fast and scalable, allowing end-to-end evaluation of each transaction to finish in less than two seconds.
Pipeline requirements include the following:
- Scalable and responsive to extreme bursts of ingested event activity.
- Up to 4 million events and 8 million plus transactions daily.
- Events were ingested as complex JSON files, each containing from two to five individual bank transactions.
- Each JSON file had to be
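To make the ingestion shape concrete, here is a minimal sketch of splitting one ingested JSON event into its individual transactions. The field names are invented for illustration; the bank’s actual schema is not described in the post:

```python
import json

# A hypothetical event: one JSON file carrying several bank transactions.
raw_event = json.dumps({
    "eventId": "evt-001",
    "transactions": [
        {"txId": "t1", "amount": 120.50},
        {"txId": "t2", "amount": 88.00},
        {"txId": "t3", "amount": 431.25},
    ],
})

def split_transactions(payload: str) -> list:
    """Parse an ingested event and fan it out into per-transaction records."""
    event = json.loads(payload)
    return [
        {"eventId": event["eventId"], **tx}
        for tx in event["transactions"]
    ]

for tx in split_transactions(raw_event):
    print(tx["txId"], tx["amount"])
```

Fanning each file out into two to five per-transaction records like this is what drives the gap between the daily event count and the much larger daily transaction count.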
Most modern applications are built around events, whether reacting to changes coming from IoT devices, responding to a new listing in a marketplace solution, or initiating business processes from customer requests. PostgreSQL is a popular open source database with rich extensibility to meet the event-based notification and distributed design needs of modern applications. PostgreSQL’s NOTIFY functionality sends a notification event, as a change feed, to a listener channel specified in the database. With serverless platforms in Azure such as Azure Event Grid (a fully managed serverless event routing service), Azure Functions (a serverless compute engine), and Azure Logic Apps (a serverless workflow orchestration engine), it is easy to perform event-based processing and run workflows that respond to events in real time.
Consider a marketplace e-commerce solution where buyers meet sellers. A typical marketplace solution is a collection of microservices providing a seamless buying and selling experience to end users. A modern microservices design leverages purpose-built app platforms and data stores, each optimized for its scenario while working in tandem to deliver a unified experience. For example, a graph store is better suited for a recommendation engine, while a relational datastore like PostgreSQL is suited for relational
As we continue to see our community grow around Event Grid, many of you have started to explore the boundaries of complexity and scale that can be achieved. We’ve been blown away with some of the system architectures we have seen built on top of the platform.
In order to make your life easier with some of these scenarios, we decided to dedicate much of the last few months to building two features we are very excited to announce today: advanced filters, and Event Domains – a managed platform for publishing events to all of your customers. In addition, we’ve been working to improve the developer experience and make Event Grid available in Azure Government regions.
Become your own event source for Event Grid with Event Domains, managing the flow of custom events to your different business organizations, customers, or applications. An Event Domain is essentially a management tool for large numbers of Event Grid topics related to the same application: a top-level artifact that can contain thousands of topics. With a Domain, you get fine-grained authorization and authentication control over each topic via Azure Active Directory, which lets you easily decide which of your tenants or
We recently published a blog on a fraud detection solution delivered to banking customers. At the core of the solution is the requirement to completely process a streaming pipeline of telemetry data in real time using a serverless architecture. Two technologies were evaluated for this requirement, Azure Stream Analytics and Azure Functions. This blog describes the evaluation process and the decision to use Azure Functions.
A large bank wanted to build a solution to detect fraudulent transactions submitted through its mobile banking channel. The solution is built on a common big data pipeline pattern. High volumes of real-time data are ingested into a cloud service, where a series of data transformation and extraction activities occur. This results in the creation of a feature matrix and the use of advanced analytics. For the bank, the pipeline had to be very fast and scalable, with end-to-end evaluation of each transaction completing in less than two seconds.
Pipeline requirements include:
- Scalable and responsive to extreme bursts of ingested event activity.
- Up to 4 million events and 8 million or more transactions daily.
- Events were ingested as complex JSON files, each containing from two to five individual bank transactions.
- Each JSON
Today’s consumers use more devices and channels to interact with retailers than ever before. Seamless service across all channels is the expectation, not the exception. Digital-first brands are raising the bar for hyper-personalized experiences through their test-and-learn approach to rapidly delivering modern commerce capabilities. As we all know, traditional brick-and-mortar retailers are under significant pressure due to the high fixed costs associated with managing traditional store infrastructure, material decreases, and flat year-over-year comp-store revenue. A consequence of this pressure is the inability to innovate and create new, competitive user experiences. One answer to this problem is the introduction of a new service on the Azure Marketplace.
You say monoliths, I say microservices.
It’s estimated that more than half of retailers still run their businesses on monolithic, on-premises commerce applications. Those monoliths inhibit the speed and flexibility needed to build hyper-personalized experiences, and they lack the agility to support new, differentiating business models. Other drawbacks include all-night deployments, six months from concept to go-live, 24 hours to deploy a one-line code change, and the need for significant rollout and readiness planning.
Unless retailers revamp their platforms, they will be challenged to keep up with new competitors who are operating
Azure Stream Analytics is a fully managed service for real-time data processing. Stream Analytics jobs read data from input sources such as Azure Event Hubs or IoT Hub. They can perform a variety of tasks, from simple ETL and archiving to complex event pattern detection and machine learning scoring. Jobs run 24/7, and while Azure Stream Analytics provides a 99.9 percent availability SLA, various external issues may still impact a streaming pipeline and can have a significant business impact. For this reason, it is important to proactively monitor jobs, quickly identify root causes, and mitigate possible issues. In this blog, we will explain how to leverage the newly introduced output watermark delay to monitor mission-critical streaming jobs.
Challenges affecting streaming pipelines
What are some issues that can occur in your streaming pipeline? Here are a few examples:
- Data stops arriving or arrives with a delay due to network issues that prevent the data from reaching the cloud.
- The volume of incoming data increases significantly and the Stream Analytics job is not scaled appropriately to manage the increase.
- Logical errors are encountered, causing failures and preventing the job from making progress.
- Output destinations such as SQL or Event Hubs are not scaled properly and
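In all of the situations above, the symptom is the same: output falls behind wall-clock time. Watermark delay captures this as the gap between the current time and the newest event time reflected in the job’s output. A simplified, self-contained sketch of that computation (the timestamps and threshold below are invented for illustration, not Stream Analytics internals):

```python
from datetime import datetime, timedelta, timezone

def watermark_delay(now: datetime, last_output_watermark: datetime) -> timedelta:
    """Gap between wall-clock time and the latest event time
    reflected in the job's output watermark."""
    return now - last_output_watermark

# Hypothetical reading: the job's output watermark trails "now" by 95 seconds.
now = datetime(2018, 11, 1, 12, 0, 35, tzinfo=timezone.utc)
watermark = datetime(2018, 11, 1, 11, 59, 0, tzinfo=timezone.utc)

delay = watermark_delay(now, watermark)
print(delay.total_seconds())  # 95.0

# A simple alert rule: flag the job if the delay exceeds a threshold.
THRESHOLD = timedelta(seconds=60)
print(delay > THRESHOLD)  # True
```

Because the delay grows regardless of whether the cause is missing input, under-scaled compute, logical errors, or a slow output sink, alerting on it catches all four failure modes with a single metric.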