Create curl task to request a response from the internet
Add a README file specifying how to run the curl task
Modify parts of the README file
Create two simple TaskRuns to test curl
Change spec for the URL value
Change spec to re-push
Change the image to a param
Users can now pin the image version as they want
Add a new param
Add a curl image option so users can now use a different curl image version at will
Update README file with the param descriptions
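A TaskRun pinning the curl image via the new param might look roughly like this (the task, param, and image names here are assumptions, not necessarily the actual catalog values):

```yaml
# Hypothetical TaskRun: the "curl-image" param lets users pin the image.
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: curl-example
spec:
  taskRef:
    name: curl
  params:
    - name: url
      value: https://example.com
    - name: curl-image
      value: docker.io/curlimages/curl:7.72.0
```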
- This updates to the new standard for GitHub issue templates.
- Also added a `.config.yml`, which is part of the new version.
Signed-off-by: JJ Asghar <jjasghar@gmail.com>
The requirements discussed in #233:
- action: ['get', 'create', 'apply', 'delete', 'replace', 'patch'] -- the action to perform on the resource
- merge_strategy: ['strategic', 'merge', 'json'] -- the strategy used to merge a patch; defaults to "strategic"
- success_condition: a label selector expression which describes the success condition
- failure_condition: a label selector expression which describes the failure condition
- manifest: the kubernetes manifest
… also talk a little bit about `privileged` and how to be explicit in
the task if it needs to be run as root and/or privileged.
Signed-off-by: Vincent Demeester <vdemeest@redhat.com>
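A TaskRun exercising the params listed above might look roughly like this (the task name and the manifest content are illustrative assumptions):

```yaml
# Hypothetical TaskRun for a task implementing the #233 requirements.
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: apply-configmap
spec:
  taskRef:
    name: kubernetes-actions   # assumed task name
  params:
    - name: action
      value: apply
    - name: merge_strategy
      value: strategic
    - name: manifest
      value: |
        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: example-config
        data:
          key: value
```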
This change makes it possible for users to provide
environment variables to buildpacks via a volume with
appropriately configured directories.
Also includes some fixes for v1beta1 compatibility.
See https://buildpacks.io/docs/reference/buildpack-api/
for more info on how PLATFORM_DIR is used.
Resolves #160
Signed-off-by: Natalie Arellano <narellano@vmware.com>
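As a rough illustration of the convention from the Buildpack API docs linked above: each file under the platform `env/` directory names an environment variable, and its contents become the variable's value (the directory and variable names below are made up):

```shell
# Sketch of the <platform>/env/ layout a user-provided volume would carry.
# Each file becomes one environment variable for the buildpacks.
mkdir -p platform/env
printf 'production' > platform/env/APP_ENV
printf 'https://api.example.com' > platform/env/API_URL

# A buildpack would then see APP_ENV=production and
# API_URL=https://api.example.com at build time.
ls platform/env
```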
This task can be used to create a release on GitLab.
Assets or binaries of the released version can also be uploaded with the release.
Signed-off-by: Divyansh42 <diagrawa@redhat.com>
Tests would fail if a directory exists but contains no YAML template.
In reality this should not happen, but it does sometimes.
Let's make sure we skip the test if there are no YAML files.
Signed-off-by: Chmouel Boudjnah <chmouel@redhat.com>
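The skip condition might be sketched like this (directory names are hypothetical; this is not the actual test-runner code):

```shell
# Hypothetical sketch: a tests directory that exists but holds no YAML
# templates is skipped rather than treated as a failure.
has_yaml() {
  ls "$1"/*.yaml >/dev/null 2>&1
}

mkdir -p empty-tests   # exists, but contains no yaml templates
if ! has_yaml empty-tests; then
  echo "SKIP: no yaml templates in empty-tests"
fi
```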
The current task doesn't support merge requests. Added support
for merge requests in the same task, so now we can add labels
to merge requests as well.
Signed-off-by: Divyansh42 <diagrawa@redhat.com>
The git-clone example only demonstrated the very simplest of behaviour
and didn't provide much in the way of instruction on using its features.
This PR adds examples for cloning a branch, cloning a specific commit,
and cloning tags. Each example includes a clear description of its purpose
and the features it demonstrates.
I've removed the inline git-clone example from the README because it's
really long and the README needs to serve the purpose of documenting
multiple Tasks (git-clone as well as git-batch-merge, and any future
Tasks we add as well). Rather than bloat the README with many examples
which can go stale if we modify git-clone's behaviour I've simply linked
to the new example files.
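For instance, a TaskRun cloning a specific branch might look roughly like this (the repository URL and PVC name are placeholders):

```yaml
# Sketch of a git-clone TaskRun pinned to a branch; "revision" also
# accepts a tag name or a commit SHA.
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: clone-a-branch
spec:
  taskRef:
    name: git-clone
  workspaces:
    - name: output
      persistentVolumeClaim:
        claimName: git-source-pvc
  params:
    - name: url
      value: https://github.com/example/repo.git
    - name: revision
      value: my-feature-branch
```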
Parameterized the UPI image so that anybody who wants to use their own image can do so; also updated the README.
Signed-off-by: vinamra28 <vinjain@redhat.com>
The following task can be used to do static analysis of the source code, taking the SonarQube server URL as input.
Signed-off-by: vinamra28 <vinjain@redhat.com>
Binding `cluster-admin` to the SA default/default without warning the user introduces a potential security issue, so:
1. Make it clearer to users that this is only intended for insecure/non-critical use cases, and should be modified after installation to match the user's use case and needs, and/or
2. Provide a more scoped-down role binding initially, which users can add to as they need.
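For option 2, a more scoped-down binding might look something like this (the Role's rules are illustrative, not a vetted minimal set):

```yaml
# Hypothetical namespaced Role + RoleBinding in place of cluster-admin.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tekton-test-runner
  namespace: default
rules:
  - apiGroups: ["tekton.dev"]
    resources: ["taskruns", "pipelineruns"]
    verbs: ["get", "list", "watch", "create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tekton-test-runner
  namespace: default
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
roleRef:
  kind: Role
  name: tekton-test-runner
  apiGroup: rbac.authorization.k8s.io
```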
By default git-clone now checks out a repo directly into the root
of the provided workspace. The README still states in one spot that
it checks out to the "src" subdirectory but this is no longer true.
This commit updates the content of the README to correctly reflect
the fact that git-clone checks out directly into the workspace's root.
We've been using Pipelines 0.11.0-rc2 for e2e tests but that's
now several months old and there are features in 0.12+ that we
can expect to appear in Catalog Tasks and their Tests.
This commit attempts to update the version of Tekton used for
e2e tests.
This task can be used to create a release on GitHub.
Multiple assets, including binaries of the released version and the release notes, can also be uploaded with the release.
Signed-off-by: Divyansh42 <diagrawa@redhat.com>
I copied this Task (can't wait for OCI registry referencing! :D)
into Pipelines for an example in https://github.com/tektoncd/pipeline/pull/2482
and @ImJasonH asked me to add a comment, so I used
0a8b65343b
to figure out why this was added and now I'm adding the comment
here as well!
The Buildah task does not have a task result. Tasks executing after this task are interested in the resulting image digest. This commit adds the pushed image digest as a task result, very similar to the task result in the Kaniko task.
See discussion in https://github.com/tektoncd/catalog/issues/265. Although using layer caching in the buildah build would be nice, there are two prerequisites:
* A PVC to store the cache between pipeline runs
* A fix for https://github.com/containers/buildah/issues/2215
The buildah issue is that buildah uses file timestamps to invalidate the cache. Locally this works well, but in a CI/CD environment the files will always be freshly downloaded from source control, so they will be newer than the cache. This means the cache would never be used, even if it were available in a PVC.
Given that, creating and storing the layers is an unnecessary expense. I saw a 40% build time improvement without the layer caching.
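The added result might be declared along these lines in the Task, mirroring the Kaniko task (the step script and digest-file path are assumptions):

```yaml
# Rough shape of the addition, not the exact Task definition.
spec:
  results:
    - name: IMAGE_DIGEST
      description: Digest of the image just built and pushed
  steps:
    - name: push
      image: quay.io/buildah/stable
      script: |
        buildah push --digestfile /tmp/image-digest "$(params.IMAGE)" \
          "docker://$(params.IMAGE)"
        cat /tmp/image-digest | tee $(results.IMAGE_DIGEST.path)
```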
I tried to use the kaniko task to update the build + deploy pipelinerun
example in the pipelines repo and ran into some trouble which was
partially fixed in #286. In addition I fixed:
- The jq parsing of the digest from the imagedigestexporter output
  wasn't quite right: the name of the field was incorrect and the value
  was surrounded by quotes (removed with -r)
- Removed the newline from the result so it can be placed directly in
  parameters containing image URLs (removed with -j)
- If the kaniko Task was used more than once with the same workspace,
  even if it was building from different directories, the digests would
  conflict because it was always writing to the same files. Now the
  files it writes to are relative to the context the image is already
  using to build
Also removed example output resource in docs since that isn't actually
used anymore.
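The effect of the two jq flags can be seen on a toy payload (this JSON is made up, not the real imagedigestexporter output):

```shell
payload='{"name":"image-digest","value":"sha256:abc123"}'

# Without -r, jq keeps the JSON string quotes:
echo "$payload" | jq '.value'      # prints "sha256:abc123"

# -r emits the raw string, unquoted, with a trailing newline:
echo "$payload" | jq -r '.value'   # prints sha256:abc123

# -j emits raw output with no trailing newline, so the digest can be
# concatenated directly into an image URL parameter:
digest=$(echo "$payload" | jq -j '.value')
echo "registry.example.com/app@${digest}"
```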
This task can be used to provision an OpenShift cluster on cloud providers such as AWS and GCP using the Installer Provisioned Infrastructure.
Run the openshift-provision task as root.
Signed-off-by: vinamra28 <vinjain@redhat.com>
Immediately after our test runner submits test TaskRuns / PipelineRuns
it checks that those Runs appear in `kubectl get` output. In a recent
PR to add a kythe-go Task (#301) the test runner would fail at this
point. Checking locally I was able to reproduce the issue. It looks
like it can take a small amount of time for submitted resources to
show up in `kubectl get` output.
For other types of test failures the test runner gives a grace period of
up to ten minutes. I did the same for the `kubectl get` check, allowing
the test run loop to run again in ten seconds if nothing appears in a
taskrun / pipelinerun list. Making that change resulted in the kythe-go
test proceeding normally and passing.
This commit updates the test runner to repeatedly check whether
TaskRuns / PipelineRuns have appeared in the `kubectl get` output.
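The grace-period pattern described above might be sketched as follows (an illustration, not the actual test-runner code):

```shell
# Re-run a check on an interval until it succeeds or the attempts are
# exhausted; returns 0 on success, 1 if the deadline passes.
retry_until() {
  attempts=$1
  delay=$2
  shift 2
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if "$@"; then
      return 0
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  return 1
}

# e.g. allow up to ten minutes (60 tries, 10s apart) for the Runs to
# show up in a listing:
#   retry_until 60 10 kubectl get taskrun "$name"
```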
The following task can be used to run OpenShift commands within a Python script.
Also added a Dockerfile for the image build.
Signed-off-by: vinamra28 <vinjain@redhat.com>
This task allows you to generate Kythe annotation metadata for a given
Go project. This follows the workspace model as presented by the `git`
Task. The expectation is that any results placed in the workspace will be
exported by another Task in the pipeline for remote storage (e.g. GCS).
This is part of documenting and providing a task in the catalog that would help users not using a PipelineResource for accessing the target cluster.
This can help in creating a kubeconfig file by providing cluster credentials to the task.
Ref: #95
Signed-off-by: Divyansh42 <diagrawa@redhat.com>
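A kubeconfig assembled from user-supplied cluster credentials might look roughly like this (every name, URL, and credential below is a placeholder):

```yaml
# Hypothetical shape of the generated kubeconfig.
apiVersion: v1
kind: Config
clusters:
  - name: target
    cluster:
      server: https://api.cluster.example.com:6443
      certificate-authority-data: <base64 CA cert>
users:
  - name: tekton-bot
    user:
      token: <service-account token>
contexts:
  - name: target
    context:
      cluster: target
      user: tekton-bot
current-context: target
```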