Welcome to the Ranbook 🎉
- I’ll be sharing my experiences here. 🙂
In this article, we will look at how to run the pylint, bandit, pytest, and coverage utilities to generate code coverage and quality reports and upload them to a SonarQube server.

Required Software
- Docker: to run the SonarQube and SonarQube CLI containers.
- Python3, pip3: to install the required dependencies.
- python3.x-venv: optional but recommended.

SonarQube Setup
Run the following commands to bring up SonarQube in Docker. Let’s create a Docker network first so our Sonar CLI container can interact with the SonarQube server....
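The network-plus-server setup described above can be sketched with two Docker commands; the network name and image tag here are assumptions, not taken from the excerpt:

```shell
# Create a network so the Sonar CLI container can reach the server by name
docker network create sonarnet

# Run the SonarQube server attached to that network
docker run -d --name sonarqube --network sonarnet -p 9000:9000 sonarqube:latest
```

A scanner container started with `--network sonarnet` can then reach the server at http://sonarqube:9000.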
Typically, Helm is used for creating deployment charts for the Kubernetes platform, but I like Helm’s templating features for generating YAML specifications that can be used to deploy to any platform. AWS Elastic Container Service (ECS) is another platform where you can run container workloads in the AWS Cloud. While I won’t go deep into the AWS ECS platform or setting up a cluster in this article, we will quickly look at how to deploy a service to an existing ECS cluster with Helm templates and the AWS CLI....
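One way the two tools can be combined, as a rough sketch (the chart path, cluster, service, and file names are hypothetical):

```shell
# Render an ECS task definition from a Helm chart without installing anything
helm template my-service ./ecs-chart > taskdef.json

# Register the rendered task definition with ECS via the AWS CLI
aws ecs register-task-definition --cli-input-json file://taskdef.json

# Point the existing service at the new task definition revision
aws ecs update-service --cluster my-cluster --service my-service \
    --task-definition my-service
```

`helm template` only renders the templates locally, so no Kubernetes cluster is involved at any point.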
As of Alpine Linux 3.3, there is a new `--no-cache` option for apk. It allows users to install packages with an index that is updated and used on the fly rather than cached locally:

```dockerfile
FROM openjdk:8-jre-alpine
RUN apk --no-cache add curl
```

This avoids the need to use `--update` and remove `/var/cache/apk/*` when done installing packages. Reference - https://github.com/gliderlabs/docker-alpine/blob/master/docs/usage.md
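For contrast, the older pattern the text alludes to required updating the index and then cleaning the cache manually, roughly like this:

```dockerfile
FROM openjdk:8-jre-alpine
# Pre-3.3 style: refresh the index, install, then delete the cached index
RUN apk --update add curl && \
    rm -rf /var/cache/apk/*
```

With `--no-cache`, both the `--update` flag and the cleanup step go away.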
You can create dynamic, configuration-driven beans in Spring Boot in the following way. Note: the full example code below is available at https://github.com/imran9m/spring-boot-dynamic-beans

For this example, let’s create dynamic RestClient beans with customized connection-timeout and user-agent configuration that we can use anywhere we want. Let’s go with the following configuration to manage customized properties for two dynamic RestClient beans.

application.yml

```yaml
restClients:
  clients:
    - clientName: test1
      connectionTimeout: 6000
      responseTimeout: 6000
      userAgent: test1
    - clientName: test2
      connectionTimeout: 5000
      responseTimeout: 5000
      userAgent: test2
```

Now, let’s load this configuration into a custom configuration properties record....
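A minimal sketch of how such a configuration-properties record might look. The record and class names here are hypothetical (not taken from the linked repository); in Spring Boot the outer record would be bound with `@ConfigurationProperties` and each entry registered as a named bean:

```java
import java.util.List;

public class RestClientBeansSketch {
    // Hypothetical records mirroring the application.yml structure above.
    record RestClientProps(String clientName, int connectionTimeout,
                           int responseTimeout, String userAgent) {}
    record RestClients(List<RestClientProps> clients) {}

    public static void main(String[] args) {
        // Values copied from the application.yml example.
        RestClients config = new RestClients(List.of(
                new RestClientProps("test1", 6000, 6000, "test1"),
                new RestClientProps("test2", 5000, 5000, "test2")));
        // Each entry could then drive registration of one RestClient bean.
        config.clients().forEach(c -> System.out.println(c.clientName()));
    }
}
```

Records work well here because property binding only needs a constructor and accessors, both of which a record provides for free.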
You can enable or disable FluentD match blocks with environment variables in the following way. Below is fluent.conf.

```
<source>
  @type dummy
  dummy {"hello":"world"}
  @label @DUMMY
  tag dummy
</source>

<label @DUMMY>
  <match dummy>
    @type copy
    copy_mode deep
    <store>
      @type relabel
      @label @OPENSEARCH
    </store>
    <store>
      @type relabel
      @label @ELASTICSEARCH
    </store>
  </match>
</label>

<label @OPENSEARCH>
  @include "#{ENV['FLUENTD_OPENSEARCH']}"
</label>

<label @ELASTICSEARCH>
  @include "#{ENV['FLUENTD_ELASTICSEARCH']}"
</label>
```

Following is the content of fluent-elasticsearch.conf. For testing purposes, we will send events to stdout....
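One way to drive the `@include` directives above is to point each environment variable at either a real config file or an empty placeholder file to disable that branch. A sketch, with hypothetical file names:

```shell
# Enable the Elasticsearch branch; disable OpenSearch by including an
# empty placeholder file (file names are assumptions for illustration)
touch empty.conf
export FLUENTD_ELASTICSEARCH=fluent-elasticsearch.conf
export FLUENTD_OPENSEARCH=empty.conf
fluentd -c fluent.conf
```

Because `@include` resolves the embedded Ruby expression at startup, changing the variables and restarting FluentD is enough to flip a branch on or off.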
You will need to make multiple AWS API calls to get the public IPv4 address. Here are the steps.

1. Once you perform the runTask operation, keep the task ARN from the output.
2. With the above task ARN and cluster name, make a describeTasks call. Example:

```java
AmazonECS client = AmazonECSClientBuilder.standard().build();
DescribeTasksRequest request = new DescribeTasksRequest()
        .withTasks("c5cba4eb-5dad-405e-96db-71ef8eefe6a8");
DescribeTasksResult response = client.describeTasks(request);
```

The above API will give you a response with network attachment details.

```json
"attachments": [
  {
    "id": "xxxxx-d02c-4a9d-ae79-xxxxxxx",
    "type": "ElasticNetworkInterface",
    "status": "ATTACHED",
    "details": [
      { "name": "subnetId", "value": "subnet-xxxxx" },
      { "name": "networkInterfaceId", "value": "eni-e5aa89a3" },
      { "name": "macAddress", "value": "xxxxx" },
      { "name": "privateIPv4Address", "value": "172....
```
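The excerpt is truncated, but the usual follow-up step is to take the `networkInterfaceId` from the attachment details and look it up with the EC2 API, which carries the public IPv4 association. A hedged sketch in the same AWS SDK for Java v1 style:

```java
// Look up the ENI returned by describeTasks to get its public IPv4 address
AmazonEC2 ec2 = AmazonEC2ClientBuilder.standard().build();
DescribeNetworkInterfacesRequest eniRequest = new DescribeNetworkInterfacesRequest()
        .withNetworkInterfaceIds("eni-e5aa89a3");
DescribeNetworkInterfacesResult eniResponse = ec2.describeNetworkInterfaces(eniRequest);
// The association is only present when the task actually has a public IP
String publicIp = eniResponse.getNetworkInterfaces().get(0)
        .getAssociation().getPublicIp();
```

Note that `getAssociation()` returns null for tasks launched without a public IP, so a null check is advisable in real code.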
You can try using the copy and relabel plugins to achieve this. An example configuration looks like this.

```
# One source
<source>
  @type tail
  tag service
  path /tmp/l.log
  format json
  read_from_head true
</source>

# Copy source events to 2 labels
<match service>
  @type copy
  <store>
    @type relabel
    @label @data
  </store>
  <store>
    @type relabel
    @label @pi2
  </store>
</match>

# In the @data label, perform the desired filtering and file output
<label @data>
  <filter service>
    ...
  </filter>
  <match service>
    @type file
    path /tmp/out/data
  </match>
</label>

# In the @pi2 label, perform the desired filtering and file output
<label @pi2>
  <filter service>
    ....
```