We are excited to announce that this week we have made Advanced Threat Protection available for public preview on Azure Storage Blob service. Advanced Threat Protection for Azure Storage detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit storage accounts.
The introduction of this feature helps customers detect and respond to potential threats on their storage accounts as they occur. For a full investigation experience, we recommend configuring diagnostic logs for read, write, and delete requests to the blob service.
The benefits of Advanced Threat Protection for Azure Storage include:
- Detection of anomalous access and data exfiltration activities.
- Email alerts with actionable investigation and remediation steps.
- Centralized views of alerts for the entire Azure tenant using Azure Security Center.
- Easy enablement from the Azure portal.

How to set up Advanced Threat Protection

1. Launch the Azure portal.
2. Navigate to the configuration page of the Azure Storage account you want to protect. In the Settings page, select Advanced Threat Protection.
3. In the Advanced Threat Protection configuration blade, turn on Advanced Threat Protection.
4. Click Save to save the new or updated Advanced Threat Protection policy.
Get started today
We encourage you to try out Advanced Threat
Service Fabric is a microservices platform to build, deploy, discover, and scale services with message routing, low-latency storage, and health monitoring. It powers both first- and third-party applications including core Azure infrastructure and cloud services along with several mission-critical applications for enterprises.
This week at the Microsoft Ignite conference in Orlando, Florida, we are announcing an update of the Azure Service Fabric Mesh preview, the serverless microservices platform that was released in July this year. We are also announcing Service Fabric runtime version 6.4 with corresponding SDK and tooling updates which will start rolling out in the coming weeks and come with a bevy of enhancements.
Support for Windows Server version 1803 and multi-tenancy features
We are announcing support for Azure clusters running Windows Server version 1803, as well as support for containers based on the same image, in both Azure clusters and Service Fabric Mesh. Windows Server version 1803 includes several improvements and fixes.
Customers have often asked us to provide network isolation in addition to compute isolation to help enable multi-tenant scenarios. With this update, Service Fabric enables isolated networks per application as a preview. With an isolated network per application, service endpoints can only be reached from other
We are excited to announce the preview of Azure Active Directory authentication for Azure Files SMB access, leveraging Azure AD Domain Services (AAD DS). Azure Files offers fully managed file shares in the cloud that are accessible via the industry-standard SMB protocol. Integration with AAD enables SMB access to Azure file shares using AAD credentials from AAD DS domain-joined Windows VMs. In addition, Azure Files supports preserving, inheriting, and enforcing Microsoft file system NTFS ACLs on all folders and files in a file share.
With this capability, we can extend the traditional identity-based share access experience that you are most familiar with to Azure Files. For lift and shift scenarios, you can sync on-premises AD to AAD, migrate existing files with ACLs to Azure Files, and enable your organization to access file shares with the same credentials with no impact to the business.
In addition to this, we have enhanced our access control story by enforcing granular permission assignment on the share, folder, and file levels. You can use Azure Files as the storage solution for project collaboration, leveraging folder or file level ACLs to protect your organization’s sensitive data.
Previously, when you imported files to Azure file
On behalf of the Azure Data Box team, I’m thrilled to share with you some new solutions for our customers’ data movement needs. Starting today, we are adding online capabilities to the Data Box family with two exciting new products – Azure Data Box Edge and Azure Data Box Gateway. For offline transfer we know customers have all different sizes of data to move, so we’re also introducing the 1 PB Azure Data Box Heavy to go along with Azure Data Box and Azure Data Box Disk. And last but certainly not least, Data Box is now generally available!
Introducing Data Box Edge and Data Box Gateway
Customers are increasingly creating and processing data at the edge. You’re deploying IoT, remote energy exploration, traffic management, and other data-intensive scenarios in record amounts, all of which are massively accelerating edge deployments.
Data Box Edge acts as a storage gateway, creating a link between your site and Azure storage. This makes moving data into and out of Azure storage as easy as working with a local network share. Data Box Edge provides a
Actuarial compute solutions are what keeps insurance companies in business. The legally required reporting can only be done at scale by employing thousands of server cores for workloads like monthly and quarterly valuation and production reporting. With these compute-intensive workloads, actuaries need the elasticity of the cloud. Azure offers a variety of compute options, from large machines with hundreds of cores to thousands of smaller standard machines with fewer cores. This scalability means you can use as much compute power as needed to finish your production runs with enough time to spare to correct an error and rerun before a deadline.
Microsoft supports a range of actuarial compute solutions running on Azure, including popular life actuarial partner platforms from Milliman, Willis Towers Watson, FIS, and Moody’s. All of these companies are experienced in both policy reserving and projection modeling. Many life insurance companies today use at least one of these platforms, and some of the largest insurance companies use parts of all of them. Partner solutions built on Azure provide different options for model creation and model modification. The solutions are flexible and support customization of both the model and the cloud environment.
The new regulation, IFRS-17, adds new levels
Actuarial risk modeling is a compute-intensive operation. It employs thousands of server cores, with many uneven workloads such as monthly and quarterly valuation and production runs to meet regulatory requirements. With these compute-intensive workloads, actuaries today find themselves trapped by traditional on-premises systems (grid computing on hardware) which lack scale and elasticity. Many find they cannot even finish simple tasks like production runs with enough time to spare to correct an error and rerun before a deadline. With scalability and elasticity being the cornerstone of the cloud, risk modeling is incredibly well suited to take advantage of this near bottomless resource. With Azure, you can access more compute power when needed, without having to manage it. This translates to a great savings in time and money.
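As a back-of-the-envelope illustration of that elasticity argument, here is a short Python sketch. All numbers are hypothetical, and the model assumes a perfectly parallel workload, which real valuation runs only approximate:

```python
# Hypothetical workload: a quarterly valuation run needing 200,000 core-hours.
CORE_HOURS = 200_000

def wall_clock_hours(cores: int, core_hours: float = CORE_HOURS) -> float:
    """Idealized wall-clock time for a perfectly parallel workload."""
    return core_hours / cores

# A fixed on-premises grid of 500 cores vs. an elastic burst to 20,000 cores.
on_prem = wall_clock_hours(500)      # over two weeks of continuous running
elastic = wall_clock_hours(20_000)   # overnight, leaving time to fix and rerun

print(f"on-prem grid: {on_prem:.0f} h, elastic burst: {elastic:.0f} h")
```

The point is not the exact figures but the shape of the trade-off: with on-demand cores, the same core-hour budget can be spent in a burst that finishes before the reporting deadline instead of a trickle that misses it.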
Your immediate savings come from reallocating costs from hardware investments to operational expenses. With on-demand compute, personnel are freed from hardware and software maintenance burdens and can devote their time and expertise to other parts of production, such as writing scripts for the optimal deployment of cores.
More power, less time, faster reports
New regulations, such as International Financial Reporting Standard 17 (IFRS 17), Solvency II, and Actuarial Guideline XLIII, are increasing the pressure
Because of existing and upcoming regulations, insurers perform quite a bit of analysis over their assets and liabilities. Actuaries need time to review and correct results before reviewing the reports with regulators. Today, it is common for quarterly reporting to require thousands of hours of compute time. Companies which offer variable annuity products must follow Actuarial Guideline XLIII which requires several compute intensive tasks, including nested stochastic modeling. Solvency II requires quite a bit of computational analysis to understand the Solvency Capital Requirement and the Minimum Capital Requirement. International Financial Reporting Standard 17 requires analysis of each policy, reviews of overall profitability, and more. Actuarial departments everywhere work to make sure that their financial and other models produce results which can be used to evaluate their business for regulatory and internal needs.
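To make the "compute-intensive" claim concrete: Actuarial Guideline XLIII reserves are based on a conditional tail expectation (CTE 70) taken over many stochastic scenarios. The toy Python sketch below uses a deliberately simplified single-step model with hypothetical parameters; real nested-stochastic runs project full cashflows for every policy under thousands of scenarios, which is where the core-hours go:

```python
import random
import statistics

def simulate_pv_loss(rng: random.Random) -> float:
    """One scenario: loss on a guaranteed benefit under a toy lognormal
    account-value model (hypothetical parameters, single time step)."""
    guarantee = 100.0                                  # guaranteed benefit base
    account = 100.0 * rng.lognormvariate(0.02, 0.18)   # projected account value
    return max(guarantee - account, 0.0)               # loss if account falls short

def cte(losses, level=0.70):
    """Conditional tail expectation: mean of the worst (1 - level) of outcomes."""
    tail = sorted(losses)[int(level * len(losses)):]
    return statistics.mean(tail)

rng = random.Random(42)
losses = [simulate_pv_loss(rng) for _ in range(10_000)]
print(f"mean loss: {statistics.mean(losses):.2f}, CTE70 reserve: {cte(losses):.2f}")
```

Because the CTE averages only the bad tail, it always sits at or above the plain mean, and narrowing its sampling error requires many more scenarios than the mean alone would.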
With all this reporting, actuaries get pinched for time. They need time for things like:
- Development: Actuaries code the models in their favorite software or in custom solutions. Anything they can do to reduce the cycle of Code-Test-Review helps deliver the actuarial results sooner.
- Data preparation: Much of the source data is initially entered by hand. Errors need to be identified and fixed. If the errors can
Azure Blob Storage is Microsoft’s massively scalable cloud object store. Blob Storage is ideal for storing any unstructured data such as images, documents and other file types. Read this Introduction to object storage in Azure to learn more about how it can be used in a wide variety of scenarios.
The data in Azure Blob Storage is always replicated to ensure durability and high availability. Azure Storage replication copies your data so that it is protected from planned and unplanned events, ranging from transient hardware failures and network or power outages to massive natural disasters. You can choose to replicate your data within the same data center, across zonal data centers within the same region, or even across regions. Find more details on storage replication.
Although Blob storage supports replication out-of-box, it’s important to understand that the replication of data does not protect against application errors. Any problems at the application layer are also committed to the replicas that Azure Storage maintains. For this reason, it can be important to maintain backups of blob data in Azure Storage.
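One common approach is a server-side copy of every blob into a timestamped "folder" of a separate backup container. The sketch below uses the current azure-storage-blob Python SDK (v12); the connection string and the container names "data" and "backup" are placeholders, not fixed conventions:

```python
from datetime import datetime, timezone

def backup_blob_name(stamp: str, blob_name: str) -> str:
    """Prefix each backed-up blob with a timestamp 'folder' so every run
    yields an independent point-in-time copy."""
    return f"{stamp}/{blob_name}"

def backup_container(conn_str: str, source: str = "data", dest: str = "backup") -> None:
    """Server-side copy of every blob in `source` into `dest` (placeholder names)."""
    # Deferred import so the naming helper stays usable without the SDK installed.
    from azure.storage.blob import BlobServiceClient  # pip install azure-storage-blob

    svc = BlobServiceClient.from_connection_string(conn_str)
    src = svc.get_container_client(source)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    for blob in src.list_blobs():
        target = svc.get_blob_client(dest, backup_blob_name(stamp, blob.name))
        # start_copy_from_url is asynchronous on the service side; poll
        # target.get_blob_properties().copy.status if completion matters.
        # Copies across storage accounts need a SAS token on the source URL.
        target.start_copy_from_url(f"{src.url}/{blob.name}")
```

Blob snapshots are an alternative for in-place point-in-time protection; the copy approach has the advantage that backups live in a separate container, or even a separate account, that can be locked down independently of the application's write path.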
Currently Azure Blob Storage doesn’t offer an out-of-the-box solution for backing up block blobs. In this blog post, I will design
We are excited to announce the capability of converting VMs with unmanaged disks to Managed Disks in the Azure Portal! Now you can migrate to Managed Disks in a single click, without requiring PowerShell or CLI scripts.
Our customers love the benefits of using Managed Disks. Many customers have already adopted Managed Disks since we launched it. If you have not started using Managed Disks, here’s a quick recap of all the capabilities to motivate you to use Managed Disks.
- Scale your application without worrying about storage account limits.
- Achieve high availability across your compute and storage resources with aligned fault domains.
- Create VM Scale Sets with up to 1,000 instances.
- Integrate disks, snapshots, and images as first-class resources into your architecture.
- Secure your disks, snapshots, and images through Azure Role-Based Access Control (RBAC).
To read more about the benefits of Managed Disks, see Azure Managed Disks Overview.
Migrating to Managed Disks in Azure Portal
Migrating in the Azure Portal is a pretty simple experience. Let’s walk through this process.
If you are using a VM with unmanaged disks, you will see an info banner on the VM overview blade.
Once you click on the banner, it will launch the migration
Cloud-scale applications typically require high concurrency to achieve desired performance when accessing remote data. The new Storage Java SDK simplifies building such applications by offering asynchronous operations, eliminating the need to create and manage a large thread pool. The new SDK uses the RxJava reactive programming model for asynchronous operations and relies on the Netty HTTP client for REST requests. Get started with the Azure Storage SDK for Java now.
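The general idea behind that design — keeping many operations in flight without dedicating a blocked thread to each request — is not specific to Java. A minimal, purely illustrative Python asyncio sketch (simulated latency, no Azure calls, not the Java SDK itself):

```python
import asyncio
import random

async def fetch_blob(name: str) -> str:
    """Stand-in for a non-blocking REST call (simulated latency, no real I/O)."""
    await asyncio.sleep(random.uniform(0.01, 0.05))
    return f"{name}: done"

async def main() -> list[str]:
    # Hundreds of concurrent operations share one thread: the event loop
    # interleaves them instead of parking a pool of blocked threads.
    return await asyncio.gather(*(fetch_blob(f"blob-{i}") for i in range(200)))

results = asyncio.run(main())
print(len(results), "operations completed on a single thread")
```

In the Java SDK the same role is played by RxJava's composable streams, which additionally let you chain, transform, and react to results as they arrive.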
Azure Storage SDK v10 for Java adopts the next-generation Storage SDK design providing thread-safe types that were introduced earlier with the Storage Go SDK release. This new SDK is built to effectively move data without any buffering on the client, and provides interfaces close to the ones in the Storage REST APIs. Some of the improvements in the new SDK are:
- Asynchronous programming model with RxJava
- Low-level APIs consistent with Storage REST APIs
- New high-level APIs built for convenience
- Thread-safe interfaces
- Consistent versioning across all Storage SDKs

Asynchronous programming model with RxJava
Now that the Storage SDK supports RxJava, it is easier to build event-driven applications, because you can compose sequences together using the observer pattern. The following sample, which uploads a directory of XML files