Azure

Azure is a cloud computing service created by Microsoft for building, testing, deploying, and managing applications and services through a global network of Microsoft-managed data centers.
Template name: 101-event-grid, https://github.com/Azure/azure-quickstart-templates/blob/master/101-event-grid/azuredeploy.json
The description of the eventGridSubscriptionUrl parameter ends with:
(RequestBin URLs are exempt from this requirement.)
This is no longer true: deployments will fail if the validation challenge is not answered or the validation URL is not visited - even when
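For context, the handshake the deployment depends on can be answered by the subscribing endpoint itself. A minimal sketch, assuming a Flask endpoint (the route and framework are illustrative, not part of the template):

```python
# Sketch of a webhook that answers the Event Grid validation handshake.
# The endpoint path and Flask itself are assumptions for illustration.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/api/events", methods=["POST"])
def handle_events():
    # Event Grid POSTs a list of events; the validation event arrives
    # when the subscription is created.
    for event in request.get_json():
        if event.get("eventType") == "Microsoft.EventGrid.SubscriptionValidationEvent":
            # Echoing the validation code back completes the handshake.
            return jsonify({"validationResponse": event["data"]["validationCode"]})
    return "", 200
```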
https://gocloud.dev/howto/blob/open-bucket/#prefix
Comment from @vangent: "I think we can drop this section and just leave it in the godoc. Thoughts?"
Originally posted in google/go-cloud@ff6e56c
Running Pulumi CLI commands in CI is failing with the following error:
error: could not get cloud url: unmarshalling credentials file: unexpected end of JSON input
The following scripts are representative of what was running when the error happened:
```sh
# prepare.sh
curl https://sdk.cloud.google.com | bash > /dev/null
export PATH=$PATH:/root/google-cloud-sdk/bin
KEY_FI
```
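One known cause of "unexpected end of JSON input" is an empty or truncated credentials file; whether that is the cause here is an assumption. A quick Python diagnostic sketch for the file Pulumi conventionally reads:

```python
# Diagnostic sketch: check whether the Pulumi credentials file parses.
# Assumption: an empty/truncated ~/.pulumi/credentials.json is the culprit;
# an empty file reproduces "unexpected end of JSON input" on the Go side.
import json
import os

path = os.path.expanduser("~/.pulumi/credentials.json")
try:
    with open(path) as f:
        json.load(f)
    print("credentials file parses cleanly")
except FileNotFoundError:
    print(f"no credentials file at {path}")
except json.JSONDecodeError as exc:
    print(f"credentials file is malformed: {exc}")
```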
## Python/Regex fix
This is a reminder for me, or a task if anyone wants it :P
Basically, the last two questions aren't really regex questions.
To do:
- Move said questions to the correct place.
- Add new regex questions (Python related!)? See the sketch below for the flavor of question.
- Maybe add a new ## Regex section, as it is a valuable skill
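As an illustration only (the actual questions are up for grabs), a Python-specific regex question might cover named groups:

```python
# Q: How do you extract labeled fields from a log line with named groups?
import re

log = "2020-06-08 12:02:58 WARNING disk usage at 91%"

pattern = re.compile(
    r"(?P<date>\d{4}-\d{2}-\d{2}) (?P<time>\d{2}:\d{2}:\d{2}) (?P<level>\w+)"
)
m = pattern.match(log)
if m:
    print(m.group("level"))  # WARNING
    print(m.groupdict())     # {'date': '2020-06-08', 'time': '12:02:58', 'level': 'WARNING'}
```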
What is the problem?
I've successfully installed Gitea using the one-click install on a fresh install of CapRover. When I try to install drone-gitea using the one-click install, I get the following error at the 7th step: "Failed: Error: Request failed with status code 500".
If applicable, content of captain-definition
file:
N/A
Steps to reproduce the problem:
- Install Gitea
Small feature request. I am using helmfile to deploy our k8s infrastructure and wanted to use sops to encrypt secrets. I need to use --keyservice, but since I am calling sops inside a wrapper (helmfile) of a wrapper (helm secrets), I cannot pass this flag to sops in a clean way.
Could you provide an alternative way to supply this option to sops in the .sops.conf and/or i
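To make the request concrete, a hypothetical config entry - the key name and URL below are assumptions, and sops does not support this today (that is precisely what is being asked for):

```yaml
# HYPOTHETICAL sops config entry: "keyservices" is not a real sops config
# key; it illustrates the requested feature, so wrappers like helmfile and
# helm secrets would pick it up without needing the CLI flag.
keyservices:
  - tcp://sops-keyservice.internal:5000
```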
As a new custodian user, I'm trying to understand the usage of variables in policies. There seem to be multiple types of variables. A non-exhaustive list for a beginner (one style is sketched below) can be:
- vars in a policy yaml
- [standard runtime variables for in
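One of those types, sketched from my reading of the Cloud Custodian docs (the policy content itself is illustrative): a top-level vars section that is just reusable YAML anchors referenced from policies.

```yaml
# Sketch: "vars" as reusable YAML anchors (illustrative policy content).
vars:
  underutilized-filter: &underutilized
    type: metrics
    name: CPUUtilization
    days: 4
    value: 30
    op: less-than

policies:
  - name: ec2-underutilized
    resource: ec2
    filters:
      - *underutilized
```

Runtime variables such as {account_id} or {region} are, as far as I understand, interpolated by custodian at execution time, which is a separate mechanism from the anchors above.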
- Explain in the notebook/FAQ what non-maximum suppression is and what values to set (threshold on IoU); see the sketch after this list.
- Explain and provide code for how to pick a good score threshold (reuse Patrick's plot, which was implemented for the drone demo)
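A minimal sketch of greedy non-maximum suppression, assuming [x1, y1, x2, y2] boxes and a default IoU threshold of 0.5 (both assumptions; the notebook should motivate the actual values):

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes; boxes are [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_box = (box[2] - box[0]) * (box[3] - box[1])
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_box + areas - inter)

def nms(boxes, scores, iou_threshold=0.5):
    """Keep the highest-scoring boxes, dropping overlaps above the threshold."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        best, rest = order[0], order[1:]
        keep.append(int(best))
        # Discard remaining boxes that overlap the kept box too much.
        order = rest[iou(boxes[best], boxes[rest]) <= iou_threshold]
    return keep
```

Raising iou_threshold keeps more overlapping detections; lowering it suppresses more aggressively.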
If you'd like to have your company represented and you are using Komiser, please give formal written permission below via a comment and an email to contact@komiser.io.
We will need a URL to an SVG or PNG logo, a text title, and a company URL.
🐛 Bug Report
Operating System:
macOS 10.15.3
Docker Image:
budtmo/docker-android-x86-10.0
Docker Version:
Docker Desktop v2.2.0.3
Docker-compose version (Only if you use it):
N/A
Docker Command to start docker-android:
N/A
Expected Behavior
docker build completes without errors
Actual Behavior
An image is built based on budtmo/docker-android-x86-10.0
I think it would be useful to add a mention of async streams (IAsyncEnumerable) for when a developer wants to tackle the 'Handle streams of data' problem.
Version
com.microsoft.ml.spark:mmlspark_2.11:jar:0.18.1
spark = 2.4.3
scala = 2.11.12
Data (CSV with header): https://gist.github.com/ttpro1995/69051647a256af912803c9a16040f43a
Download the data and save it as a csv file under /data/public/HIGGS/higgs.test.predictioncsv
```scala
val data = spark.read.option("header", "true").option("inferSchema", "true").csv("/data/public/HIGGS/higgs.test.predictioncsv")
```
The official API docs are here:
https://developers.google.com/maps/documentation/geocoding/start
I wanted to know if it would be possible to add this API's Swagger doc (and that of similar Google APIs).
The black theme no longer works in Chrome; the browser complains:
Uncaught DOMException: Failed to read the 'localStorage' property from 'Window': Access is denied for this document.
This is likely because the report is a static/local file.
It would be useful to explain the units that can be specified here, or at least to give an example suggesting that "102400MB" is what the value might look like with units.
Document Details
- ID: 913a9562-cc87-8935-b842-ed6d072b3c1b
- Version Independent ID: 88419164-baee-4723-7453-bdb340d5ea74
- Cont
Summary
In preparation for the next milestone release of the Event Hubs client, the documentation that accompanies it should be reviewed for accuracy and any needed updates made.
Scope of Work
- The document comments for the API surface exist and are accurate. Any updates needed to reflect the new retry options approach have been made.
- The README content is up to
The Metric Trigger now supports a "dividePerInstance" boolean to aid with scaling rules based on Storage Queues. This should be exposed in the Terraform API.
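To make the request concrete, a hedged sketch of how this might look in the provider - the attribute name divide_by_instance_count and its placement inside metric_trigger are assumptions about a not-yet-existing API, modeled on the portal's "dividePerInstance" option:

```hcl
# Sketch only: "divide_by_instance_count" does not exist in the provider yet;
# the rest mirrors the existing azurerm_monitor_autoscale_setting schema.
resource "azurerm_monitor_autoscale_setting" "example" {
  # ... name, resource_group_name, location, target_resource_id ...

  profile {
    name = "queue-backed-scaling"

    capacity {
      default = 1
      minimum = 1
      maximum = 10
    }

    rule {
      metric_trigger {
        metric_name              = "ApproximateMessageCount"
        metric_resource_id       = azurerm_storage_account.example.id
        operator                 = "GreaterThan"
        statistic                = "Average"
        threshold                = 100
        time_aggregation         = "Average"
        time_grain               = "PT1M"
        time_window              = "PT5M"
        divide_by_instance_count = true # the proposed flag
      }

      scale_action {
        direction = "Increase"
        type      = "ChangeCount"
        value     = "1"
        cooldown  = "PT1M"
      }
    }
  }
}
```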
Community Note
Please vote on this issue by adding a
Please do not leave "+1" or "me too" comments; they generate extra noise for issue fol
TensorFlow 2 has been out for more than a year, and many developers and users are moving to it because it is friendlier and easier to use. But we don't have any related tutorials or guides for it.
As a TensorFlow contributor and MS Gold Partner, I'd like to submit some PRs to fill the gap between TF 1.x and 2.x, since I just completed an E2E pipeline of Mobi
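To illustrate the kind of gap such tutorials would cover (the snippet is mine, not from the issue), the session-vs-eager difference alone trips up most migrations:

```python
# Illustrative only; assumes TensorFlow 2.x is installed.
import tensorflow as tf

# TF 1.x style: build a graph, then run it inside a session:
#   x = tf.placeholder(tf.float32)
#   y = x * 2
#   with tf.Session() as sess:
#       print(sess.run(y, feed_dict={x: 3.0}))

# TF 2.x executes eagerly; no graph/session boilerplate:
x = tf.constant(3.0)
print(x * 2)  # tf.Tensor(6.0, shape=(), dtype=float32)
```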
I cannot find any docs about dependency-watchdog. Currently it seems to be:
- probing the kube-apiserver and scaling the kube-controller-manager down to 0 replicas when the kube-apiserver is reachable internally but unreachable externally
- restarting control plane components stuck in CrashLoopBackOff once etcd is available again
We create multiple jars during our builds to accommodate multiple versions of Apache Spark. In the current approach, the implementation is copied from one version to another and then the necessary changes are made.
An ideal approach could create a common directory and extract common classes from the duplicated code. Note that even if a class/code is exactly the same, you cannot pull it out to a common clas
This is from the documentation at http://docs.seldon.io/api-oauth.html#actions
The item attribute definitions are:
- string name [attr_id 1]
- string artist [attr_id 2]
- enum category [attr_id 3]
- double price [attr_id 4]
Where:
- category is the enumeration (pop [value_id 1], rock [value_id 2], rap [value_id 3])
- a range definition is created for the price (<10 [value_id 1], 10-20 [value_id
Bug
For want of a better categorisation. The first thing that kube-proxy logs at startup is the following:
W0913 12:02:58.529651 1 server.go:195] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
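For anyone hitting this, the warning is asking for a componentconfig file passed via --config; a minimal sketch (the field values are illustrative assumptions, not from this report):

```yaml
# Minimal sketch of the config file the warning asks for.
# Pass it to kube-proxy with --config=<path>.
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "iptables"
clusterCIDR: "10.2.0.0/16"  # illustrative value
```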
Environment
- Platform: aws
- OS: container-linux
- Ref: v1.11.2
- Terraform: 0.11.8
- Pl
Scan with variables
Hi :)
Is it possible to scan with a variables JSON or .tf file name? For example:
```json
{
  "region": "eu-central-1",
  "environment_id": "demo",
  "tags": {
    "EnvironmentId": "integration",
    "ApplicationName": "demo",
    "EnvironmentType": "development",
    "Project": "pepito"
  },
  "rds_instances": [
    {
      "sg_name": "test-sg",
      "kms_key_label": "kms",
      "rds_label": "rds",
```
Hi Team,
Can we pass feedback to the team that maintains the Azure SDK documentation to provide more details? The current documentation is cluttered and lacks proper verbal explanations of the available APIs and methods.
It would be great if we could improve the documentation a bit so that it is more helpful to end customers.
Regards,
Jay
Description
Add Azure Notebooks to our SETUP doc.
I tested Google Colab and Azure Notebooks to run reco-repo without needing to create any DSVM or compute myself, and it works really well with simple tweaks to the notebooks (e.g., some libs should be installed manually).
I think it would be good to add at least Azure Notebooks to our SETUP doc, where users can easily test out our repo w/o
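For illustration, the "simple tweaks" are typically one-cell manual installs; the package names below are placeholders, not the repo's actual requirements:

```python
# Notebook cell sketch: install missing libs in a hosted notebook
# (Colab / Azure Notebooks) before running the repo's notebooks.
# Package names are illustrative placeholders.
import sys
!{sys.executable} -m pip install --quiet scikit-surprise papermill
```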