How to Create an Ansible Playbook
In this post, you will learn how to create an Ansible playbook. As an exercise, you will install an Apache Webserver onto two target machines and change the welcome page.

1. Introduction

In the two previous Ansible posts, you learned how to set up an Ansible test environment and how to create an Ansible inventory. This post continues the series, but it is not necessary to read the first two posts. In this post, you will learn how to create an Ansible playbook. A playbook consists of one or more plays which execute tasks, and the tasks call Ansible modules. Do not worry if you do not understand this yet; this is what you will learn. It is also advised to read the introduction to playbooks in the Ansible documentation.

In case you did not read the previous blogs, or just as a reminder: the environment consists of one Controller and two Target machines. The Controller and Target machines run in VirtualBox VMs. Development of the Ansible scripts is done with IntelliJ on the host machine, and the files are synchronized from the host machine to the Controller by means of a script. In this blog, the machines have the following IP addresses:

- Controller: 192.168.2.11
- Target 1: 192.168.2.12
- Target 2: 192.168.2.13

The files used in this blog are available in the corresponding git repository at GitHub.

2. Prerequisites

The following prerequisites apply to this blog:

- You need an Ansible test environment; see a previous blog for how to set one up;
- You need basic knowledge of Ansible inventories and Ansible Vault; see a previous blog if you do not have this knowledge;
- If you use your own environment, you should know that Ubuntu 22.04 LTS is used for the Controller and Target machines, together with Ansible version 2.13.3;
- Basic Linux knowledge.

3. Your First Playbook

As a first playbook, you will create a playbook which pings the Target1 and Target2 machines.
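The names target1 and target2 which the playbook refers to must resolve via the Ansible inventory. The exact inventory file comes from the previous post in this series; as a minimal sketch (the aliases and layout below are an assumption for illustration, not the repository's exact file), it could look like this:

```ini
; sketch of inventory/inventory.ini -- aliases and layout are assumptions
target1 ansible_host=192.168.2.12
target2 ansible_host=192.168.2.13
```

With host aliases like these, the hosts parameter of a play can simply refer to target1 or target2.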
The playbook can be found in the git repository as playbook-ping-targets-success.yml and looks as follows:

```yaml
- name: Ping target1
  hosts: target1
  tasks:
    - name: Ping test
      ansible.builtin.ping:

- name: Ping target2
  hosts: target2
  tasks:
    - name: Ping test
      ansible.builtin.ping:
```

Let's take a closer look at this playbook. A playbook consists of plays. In this playbook, two plays can be found, named Ping target1 and Ping target2. For each play, you indicate where it needs to run by means of the hosts parameter, which refers to a name in the inventory file. A play consists of tasks. In both plays, only one task is defined, named Ping test. A task calls an Ansible module. A list of available modules can be found in the Ansible documentation. It is important to learn which modules exist, how to find them, how to use them, etc. The documentation for the ping module is what you need for this example, so take the time to have a look at it. The last thing to note is that the FQCN (Fully Qualified Collection Name) ansible.builtin.ping is used. This is considered a best practice.

Run the playbook from the Controller machine. If you use the files as-is from the git repository, you will need to enter the vault password, which is itisniceweather.
```shell
$ ansible-playbook playbook-ping-targets-success.yml -i inventory/inventory.ini --ask-vault-pass
Vault password:

PLAY [Ping target1] ************************************************************

TASK [Gathering Facts] *********************************************************
ok: [target1]

TASK [Ping test] ***************************************************************
ok: [target1]

PLAY [Ping target2] ************************************************************

TASK [Gathering Facts] *********************************************************
ok: [target2]

TASK [Ping test] ***************************************************************
ok: [target2]

PLAY RECAP *********************************************************************
target1 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
target2 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```

The logging shows exactly which plays and which tasks are executed and whether they executed successfully.

The ping module also provides an option to make the task crash. In the Target1 play, the parameter data is added with the value crash in order to let the task fail. The playbook can be found in the git repository as playbook-ping-targets-failure.yml.

```yaml
- name: Ping target1
  hosts: target1
  tasks:
    - name: Ping test
      ansible.builtin.ping:
        data: crash
...
```

Executing this playbook will crash the Target1 play and the playbook just ends.
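As an aside (this is not part of the repository's playbooks): if you would want a play to continue past such a failing task, Ansible offers the ignore_errors keyword at task level. A sketch:

```yaml
- name: Ping target1 but tolerate a failing ping
  hosts: target1
  tasks:
    - name: Ping test that is allowed to fail
      ansible.builtin.ping:
        data: crash
      # continue with the next task/play even if this task fails
      ignore_errors: yes
```

Without ignore_errors, the run below aborts after the failing task.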
```shell
$ ansible-playbook playbook-ping-targets-failure.yml -i inventory/inventory.ini --ask-vault-pass
Vault password:

PLAY [Ping target1] ************************************************************

TASK [Gathering Facts] *********************************************************
ok: [target1]

TASK [Ping test] ***************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: Exception: boom
fatal: [target1]: FAILED! => {"changed": false, "module_stderr": "Shared connection to 192.168.2.12 closed.\r\n", "module_stdout": "Traceback (most recent call last):\r\n  File \"/home/osboxes/.ansible/tmp/ansible-tmp-1662800777.2553337-6094-259627128894774/AnsiballZ_ping.py\", line 107, in \r\n    _ansiballz_main()\r\n  File \"/home/osboxes/.ansible/tmp/ansible-tmp-1662800777.2553337-6094-259627128894774/AnsiballZ_ping.py\", line 99, in _ansiballz_main\r\n    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\r\n  File \"/home/osboxes/.ansible/tmp/ansible-tmp-1662800777.2553337-6094-259627128894774/AnsiballZ_ping.py\", line 47, in invoke_module\r\n    runpy.run_module(mod_name='ansible.modules.ping', init_globals=dict(_module_fqn='ansible.modules.ping', _modlib_path=modlib_path),\r\n  File \"/usr/lib/python3.10/runpy.py\", line 209, in run_module\r\n    return _run_module_code(code, init_globals, run_name, mod_spec)\r\n  File \"/usr/lib/python3.10/runpy.py\", line 96, in _run_module_code\r\n    _run_code(code, mod_globals, init_globals,\r\n  File \"/usr/lib/python3.10/runpy.py\", line 86, in _run_code\r\n    exec(code, run_globals)\r\n  File \"/tmp/ansible_ansible.builtin.ping_payload_xnphtwh8/ansible_ansible.builtin.ping_payload.zip/ansible/modules/ping.py\", line 89, in \r\n  File \"/tmp/ansible_ansible.builtin.ping_payload_xnphtwh8/ansible_ansible.builtin.ping_payload.zip/ansible/modules/ping.py\", line 79, in main\r\nException: boom\r\n", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}

PLAY RECAP *********************************************************************
target1 : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```

4. Install Apache Webserver

In this second exercise, you will install Apache Webserver on a target machine and change the welcome page. The final playbook can be found in the git repository as playbook-httpd-target1.yml. In this section, you will learn how to create this final version.

4.1 Install Package

For installing packages, you can use the apt module. It has many parameters, of which you will only use a few:

- name: the name of the package to be installed;
- update_cache: runs apt-get update before the installation;
- state: indicates the desired package state; present is just fine here.

The other items in this playbook should be quite familiar by now.

```yaml
- name: Install Apache webserver
  hosts: target1
  tasks:
    - name: Install apache httpd  (state=present is optional)
      ansible.builtin.apt:
        name: apache2
        update_cache: yes
        state: present
```

Run the playbook.

```shell
$ ansible-playbook playbook-httpd-target1.yml -i inventory/inventory.ini --ask-vault-pass
Vault password:

PLAY [Install Apache webserver] ************************************************

TASK [Gathering Facts] *********************************************************
ok: [target1]

TASK [Install apache httpd (state=present is optional)] ************************
```

This playbook does not end. It hangs, and you can stop it with CTRL+C. So what is happening here? As you probably know, you need sudo privileges in order to install packages. One way or the other, Ansible needs to know that privilege escalation is required, and you will need to provide the sudo password to Ansible.
A detailed description can be found in the Ansible documentation. The short version is that you need to add the become parameter with the value yes. But that is not all: you also need to add the command line parameter --ask-become-pass when running the playbook. This way, Ansible will ask you for the sudo password. The playbook with the added become parameter looks as follows:

```yaml
- name: Install Apache webserver
  hosts: target1
  become: yes
  tasks:
    - name: Install apache httpd  (state=present is optional)
      ansible.builtin.apt:
        name: apache2
        update_cache: yes
        state: present
```

Running this playbook is successful. As you can see, both the become password and the vault password need to be entered.

```shell
$ ansible-playbook playbook-httpd-target1.yml -i inventory/inventory.ini --ask-vault-pass --ask-become-pass
BECOME password:
Vault password:

PLAY [Install Apache webserver] ************************************************

TASK [Gathering Facts] *********************************************************
ok: [target1]

TASK [Install apache httpd (state=present is optional)] ************************
changed: [target1]

PLAY RECAP *********************************************************************
target1 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```

In the output logging, you also notice that Target1 has been changed (changed: [target1]). Remember this; it will be important later on when the playbook is run again.

Navigate via your browser (or by means of the curl command) to the IP address of the Target1 machine: http://192.168.2.12. You can execute this from your host machine if you have a test environment similar to the one used in this blog. As you can see, the Apache Webserver default welcome page is shown.

4.2 Change Welcome Page

In the playbook, you can also change the contents of the welcome page.
You can use the copy module for that. Add the following task to the playbook.

```yaml
    - name: Create index page
      ansible.builtin.copy:
        content: 'Hello world from target 1'
        dest: /var/www/html/index.html
```

Execute the playbook.

```shell
$ ansible-playbook playbook-httpd-target1.yml -i inventory/inventory.ini --ask-vault-pass --ask-become-pass
BECOME password:
Vault password:

PLAY [Install Apache webserver] ************************************************

TASK [Gathering Facts] *********************************************************
ok: [target1]

TASK [Install apache httpd (state=present is optional)] ************************
ok: [target1]

TASK [Create index page] *******************************************************
changed: [target1]

PLAY RECAP *********************************************************************
target1 : ok=3 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```

First, take a closer look at the logging. The task Install apache httpd now just returns ok instead of changed. This means that Ansible did not install Apache Webserver again: Ansible tasks are idempotent, so you can execute them over and over again and the result will be the same. Also note that the welcome page has been changed now. Verify this via the browser or via curl.

```shell
$ curl http://192.168.2.12
Hello world from target 1
```

4.3 Install Target2

As a last exercise, you add a second play which installs Apache Webserver on Target2 and changes the welcome page accordingly, so that it welcomes you from Target2.
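If you want to try this yourself before looking at the solution, the skeleton of the extra play is sketched below; the comments mark the two tasks you need to fill in (they mirror the tasks for Target1):

```yaml
- name: Install Apache webserver for target2
  hosts: target2
  become: yes
  tasks:
    # 1. the same apt task as for target1 (apache2, update_cache, state present)
    # 2. a copy task writing 'Hello world from target 2' to /var/www/html/index.html
```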
The playbook can be found in the git repository as playbook-httpd-target1-and-target2.yml.

```yaml
- name: Install Apache webserver for target 1
  hosts: target1
  become: yes
  tasks:
    - name: Install apache httpd  (state=present is optional)
      ansible.builtin.apt:
        name: apache2
        update_cache: yes
        state: present
    - name: Create index page for target 1
      ansible.builtin.copy:
        content: 'Hello world from target 1'
        dest: /var/www/html/index.html

- name: Install Apache webserver for target2
  hosts: target2
  become: yes
  tasks:
    - name: Install apache httpd  (state=present is optional)
      ansible.builtin.apt:
        name: apache2
        update_cache: yes
        state: present
    - name: Create index page for target 2
      ansible.builtin.copy:
        content: 'Hello world from target 2'
        dest: /var/www/html/index.html
```

Execute the playbook; you are now confident enough to explore the logging yourself.

```shell
$ ansible-playbook playbook-httpd-target1-and-target2.yml -i inventory/inventory.ini --ask-vault-pass --ask-become-pass
BECOME password:
Vault password:

PLAY [Install Apache webserver for target 1] ***********************************

TASK [Gathering Facts] *********************************************************
ok: [target1]

TASK [Install apache httpd (state=present is optional)] ************************
ok: [target1]

TASK [Create index page for target 1] ******************************************
ok: [target1]

PLAY [Install Apache webserver for target2] ************************************

TASK [Gathering Facts] *********************************************************
ok: [target2]

TASK [Install apache httpd (state=present is optional)] ************************
changed: [target2]

TASK [Create index page for target 2] ******************************************
changed: [target2]

PLAY RECAP *********************************************************************
target1 : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
target2 : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```

Verify whether the welcome pages are changed correctly.

```shell
$ curl http://192.168.2.12
Hello world from target 1
$ curl http://192.168.2.13
Hello world from target 2
```

Just as expected!

5. Conclusion

In this post, you continued your journey towards learning Ansible. You learned the basics of Ansible playbooks, and you wrote and executed a playbook which installs Apache Webserver onto the two target machines. You are now able to write your own playbooks and continue learning.
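As a closing sketch: the two plays above are nearly identical. A possible refinement (not part of this post's repository, and assuming you add a group named targets containing both machines to the inventory) is a single play over the group, using Ansible's inventory_hostname magic variable to make the page content differ per host:

```yaml
- name: Install Apache webserver on all targets
  hosts: targets            # assumed inventory group with target1 and target2
  become: yes
  tasks:
    - name: Install apache httpd
      ansible.builtin.apt:
        name: apache2
        update_cache: yes
        state: present
    - name: Create an index page per host
      ansible.builtin.copy:
        # inventory_hostname expands to the inventory name of the current host
        content: 'Hello world from {{ inventory_hostname }}'
        dest: /var/www/html/index.html
```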
November 23, 2022