
Decision Systems/Rule Base + Event-Driven Ansible


We use a rule base to find the new routing path with the lowest cost. RHDM gives the path from J to K. A Kafka event queue is the communication channel for all parties: RHDM, Ansible, and possibly monitoring or network devices.

Figure-01

Workflow

1. Assume the monitoring tool detects a network change and adds a network update event to the event queue.

2. The Ansible Rulebook listens on the event queue and finds the network update event.

3. The Ansible Rulebook calls the related playbook.

4. The Ansible playbook calls RHDM to analyze the impact of the change on routing.

5. The Ansible playbook receives the suggestion from RHDM.

6. The Ansible playbook adds an action event to the event queue.

7. The Ansible Rulebook listens on the event queue and finds the action event.

8. The Ansible Rulebook calls the related playbook to take action.

Note: The above case may not make sense in the real world. Here, I want to demonstrate how a rule base engine, an event queue, and Ansible can work closely together. It provides a new structure for problem-solving.
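
For example, step 1 and step 6 both boil down to publishing a small JSON event to the Kafka topic; a minimal sketch using the same console producer command that appears in the demo later in this article:

# Step 1, simulated: a monitoring tool publishes a 'Subnet Update' event to the queue
echo '{"subnet": "B,C,3", "implement": "Subnet Update"}' | \
  ~/kafka_2.13-3.3.1/bin/kafka-console-producer.sh --broker-list 192.168.122.33:9092 --topic quickstart-events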

Install RHDM

Prepare the VM with the latest version of RHEL 8 for both RHDM and Kafka. The RHEL 8 VM needs internet access.

Refer to “Installing on local machine” for installation, https://github.com/jbossdemocentral/rhdm7-install-demo.

Change ‘127.0.0.1’ to ‘0.0.0.0’ in file ‘./rhdm7-install-demo/target/jboss-eap-7.3/standalone/configuration/standalone.xml’.

# grep -R '0.0.0.0' ./rhdm7-install-demo/target/jboss-eap-7.3/standalone/configuration/standalone.xml
            <wsdl-host>${jboss.bind.address:0.0.0.0}</wsdl-host>
            <inet-address value="${jboss.bind.address.management:0.0.0.0}"/>
            <inet-address value="${jboss.bind.address:0.0.0.0}"/>
            <inet-address value="${jboss.bind.address.unsecure:0.0.0.0}"/>

Start the standalone RHDM

[root@kafka rhdm]# cd rhdm7-install-demo/
[root@kafka rhdm7-install-demo]# ./target/jboss-eap-7.3/bin/standalone.sh
...
22:29:43,732 INFO  [org.kie.workbench.common.screens.datasource.management.backend.DataSourceManagementBootstrap] (pool-24-thread-1) Initialize deployments task finished successfully.

Log in to the RHDM console

In a browser, open http://<IP>:8080/decision-central/kie-wb.jsp

Creating Space

In Business Central, go to Menu → Design. Access the samples by clicking the labs space and selecting the Rout-02 project. If the space or project does not exist, click the “Add Space” or “Add Project” button to create it.

Data objects

If the data objects do not exist yet, create them as follows:

  1. In Business Central, go to Menu → Design → Projects and click the project name.
  2. Click Add Asset → Data Object.
  3. Fill in the name of the data object.
  4. Click Ok.
  5. In the data object designer, click add field to add a field to the object with the attributes Name, Label, and Type. Required attributes are marked with an asterisk (*).
  6. Click Create to add the new field, or click Create and continue to add the new field and continue adding other fields.
  7. Click Validate. Once validation passes, click Save.

Assets Created

Three Data Objects and One DRL

Data Object – Path

It records a link from one node to another node and the ‘travel’ cost between these two nodes.

The field ‘start’ is the start point. The field ‘end’ is the endpoint. The field ‘cost’ is the ‘travel’ cost from the start point to the end point.

The fields ‘child_left’ and ‘child_right’ hold the sub-paths of this path. If the start point and end point are directly connected, both fields are ‘null’. Both fields exist for debugging purposes; the rules work without them.

The field ‘record’ records the point-to-point path from the start point to the endpoint.

Data Object – Query_Path

It records the query about the source point and destination point.

Data Object – Solution

It records the solution for the query.

The ‘cost’ is the total cost from the source to the destination. The ‘path’ records the point-to-point steps from the source to the destination. The ‘start’ is the source and the ‘end’ is the destination.

DRL – new-path – rule: “multi-paths”

It is a set of rules to find the path.

when part

1 fact2 : Query_Path( )
2 fact0 : Path( start == fact2.start )
3 fact1 : Path( start == fact0.end, end != fact0.start )
4 not Path( start == fact0.start, end == fact1.end, cost < (fact0.cost + fact1.cost) )

The first line assigns the ‘Query_Path’ fact to the variable fact2.

The 2nd line finds a path whose start point is the same as the fact2 start point (the source) and assigns it to the variable fact0.

The 3rd line finds a path whose start point is the same as the fact0 endpoint and whose endpoint is not the fact0 start point (to avoid loops), and assigns it to the variable fact1.

The 4th line ensures there is no existing path from the fact0 start point to the fact1 endpoint whose cost is lower than the combined cost of fact0 and fact1 (to avoid loops and keep the path cost lowest).

then part

Path fact3 = new Path();
fact3.setStart( fact0.getStart() );
fact3.setEnd( fact1.getEnd() );
fact3.setCost( fact0.getCost() + fact1.getCost() );
fact3.setChild_left( fact0 );
fact3.setChild_right( fact1 );
fact3.setRecord( fact0.getRecord() + ' ==> ' + fact1.getRecord() );
insertLogical( fact3 );

This part combines two paths into one new path. For example, the paths <Subnet-A,Subnet-B,1> and <Subnet-B,Subnet-C,1> combine into a new path from Subnet-A to Subnet-C with cost 2 and record ‘<Subnet-A,Subnet-B,1> ==> <Subnet-B,Subnet-C,1>’.

DRL – new-path – rule: “find-solution”

rule "find-solution"
	dialect "mvel"
	when
	    fact0 : Query_Path( )
		fact1 : Path( start == fact0.start, end == fact0.end )
		not Path( start == fact0.start, end == fact0.end, cost < fact1.cost )
	then
		Solution fact2 = new Solution();
		fact2.setStart( fact1.getStart() );
		fact2.setEnd( fact1.getEnd() );
		fact2.setCost( fact1.getCost() );
		fact2.setPath( fact1.getRecord() );
		insertLogical( fact2 );
end

This rule finds the lowest-cost path from the source to the destination.

Note:

Please click the ‘Validate’ button and then the ‘Save’ button at each step.

Build & Deploy

Click the ‘build’ and ‘install’ buttons

Click the ‘redeploy’ button
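
To confirm the new container is up after the deploy, you can query the KIE Server REST API (a sketch, using the kieserver credentials and container id that appear later in the playbook):

$ curl -u kieserver:kieserver1! -H "Accept: application/json" \
    http://<IP>:8080/kie-server/services/rest/server/containers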

Install Kafka

Refer to https://kafka.apache.org/quickstart for Kafka installation.

Download the install package from https://www.apache.org/dyn/closer.cgi?path=/kafka/3.4.0/kafka_2.13-3.4.0.tgz
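
The closer.cgi link resolves to a mirror page; a non-interactive alternative is to pull the same tarball from the Apache archive (URL assumed from the standard archive layout):

$ wget https://archive.apache.org/dist/kafka/3.4.0/kafka_2.13-3.4.0.tgz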

STEP 1: GET KAFKA

$ tar -xzf kafka_2.13-3.4.0.tgz
$ cd kafka_2.13-3.4.0

STEP 2: START THE KAFKA ENVIRONMENT

NOTE: Your local environment must have Java 8+ installed.

Apache Kafka can be started with ZooKeeper.

Kafka with ZooKeeper

Run the following commands to start all services in the correct order:

# Start the ZooKeeper service
$ bin/zookeeper-server-start.sh config/zookeeper.properties

Open another terminal session and run:

# Start the Kafka broker service
$ bin/kafka-server-start.sh config/server.properties

STEP 3: CREATE A TOPIC TO STORE YOUR EVENTS

Kafka is a distributed event streaming platform that lets you read, write, store, and process events (also called records or messages in the documentation) across many machines.

These events are organized and stored in topics. Very simplified, a topic is similar to a folder in a filesystem, and the events are the files in that folder.

So before you can write your first events, you must create a topic. Open another terminal session and run:

$ bin/kafka-topics.sh --create --topic quickstart-events --bootstrap-server localhost:9092

All of Kafka’s command line tools have additional options: run the kafka-topics.sh command without any arguments to display usage information. For example, it can also show you details such as the partition count of the new topic:

$ bin/kafka-topics.sh --describe --topic quickstart-events --bootstrap-server localhost:9092
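
Optionally, still following the quickstart, you can write a couple of test events into the topic and read them back before wiring up the rulebook:

# Write events (Ctrl-C to exit the producer)
$ bin/kafka-console-producer.sh --topic quickstart-events --bootstrap-server localhost:9092
This is my first event
This is my second event

# Read them back (Ctrl-C to stop)
$ bin/kafka-console-consumer.sh --topic quickstart-events --from-beginning --bootstrap-server localhost:9092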

Start RHDM & Kafka Event Queue

Both RHDM and Kafka are installed on the same RHEL 8 VM. I use the simple bash script below to start both RHDM and the Kafka event queue:

#!/bin/bash

# Start RHDM
tmux new-session -d -s rhdm      '/root/rhdm/rhdm7-install-demo/target/jboss-eap-7.3/bin/standalone.sh'

# Start zookeeper server
tmux new-session -d -s zookeeper '/root/kafka_2.13-3.3.1/bin/zookeeper-server-start.sh /root/kafka_2.13-3.3.1/config/zookeeper.properties'

# Start kafka server
tmux new-session -d -s kafka     '/root/kafka_2.13-3.3.1/bin/kafka-server-start.sh     /root/kafka_2.13-3.3.1/config/server.properties'

# Optional: monitor the event from topic quickstart-events
tmux new-session -d -s consumer  '/root/kafka_2.13-3.3.1/bin/kafka-console-consumer.sh --topic quickstart-events --from-beginning --bootstrap-server localhost:9092'
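
To check that all four sessions are up and to watch a service log, the usual tmux commands apply (detach again with Ctrl-b d):

# List the sessions started by the script, then attach to one of them
tmux ls
tmux attach -t rhdm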

Install Ansible Rule Book

Refer to https://github.com/ansible/ansible-rulebook/blob/main/docs/installation.rst for the Ansible Rulebook installation.

Prepare a VM with the latest version of RHEL 9.

dnf --assumeyes install gcc java-17-openjdk maven python3-devel python3-pip
export JDK_HOME=/usr/lib/jvm/java-17-openjdk
export JAVA_HOME=$JDK_HOME
pip3 install -U Jinja2
pip3 install ansible ansible-rulebook ansible-runner wheel
# only for kafka event
pip3  install aiokafka    
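
The rulebook below uses the ansible.eda.kafka event source, which ships in the ansible.eda collection; a quick sanity check of the setup (a sketch):

# Install the collection that provides the kafka event source
ansible-galaxy collection install ansible.eda
# Confirm the CLI is available
ansible-rulebook --version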

Ansible Rule Book

- name: Read messages from a kafka topic and act on them
  hosts: localhost
  ## Define our source for events
  sources:   
    - ansible.eda.kafka:
        host: kafka.example.com
        port: 9092
        topic: quickstart-events
        group_id:

  rules:
    - name: receive event for network update
      condition: event.implement == 'Subnet Update'
      action:
        run_playbook:
          name: network-event.yaml
      delegate_to: localhost
        
    - name: receive event for new routing
      condition: event.implement == 'New Routing'
      action:
        run_playbook:
          name: conf-new-routing.yaml
      delegate_to: localhost

Below is one sample simulated ‘Subnet Update’ event.

Initially, there is no direct link between nodes ‘F’ and ‘k’. Now, a new link with ‘travel’ cost 1 is built between ‘F’ and ‘k’.

Event-01:

echo '{"subnet": "F,k,1", "implement": "Subnet Update"}' | sed "s/'/\"/g" | ~/kafka_2.13-3.3.1/bin/kafka-console-producer.sh --broker-list 192.168.122.33:9092 --topic quickstart-events

This event is sent to the topic ‘quickstart-events’. The rulebook listens on the topic ‘quickstart-events’, detects the ‘Subnet Update’ event, and triggers the playbook ‘network-event.yaml’.

Ansible Playbook – network-event.yaml

---
- name: Analyze the network event
  hosts: localhost
  connection: local
  gather_facts: false
  vars:
    ansible_python_interpreter: /usr/bin/python3
    USERNAME: "kieserver"
    PASSWORD: "kieserver1!"
    CONTAINER_ID: "route-02_1.0.0-SNAPSHOT"
    source: 'J'
    dest: 'K'

  tasks:

  - set_fact:
      nodes: "{{ event.subnet | split(',') }}"
  - set_fact:
      node_01: "{{ nodes[0] }}"
      node_02: "{{ nodes[1] }}"
  - set_fact:
      search_01: "{{ node_01 }},{{ node_02 }}"
      search_02: "{{ node_02 }},{{ node_01 }}"

  - name: Update the network map file - forward connection
    ansible.builtin.lineinfile:
      path: /home/jinzha/event-ansi/ansi-rule/user_data_file
      search_string: "{{ search_01 }}"
      line: "{{ event.subnet }}"
    register: result

  - name: Update the network map file - backward connection
    ansible.builtin.lineinfile:
      path: /home/jinzha/event-ansi/ansi-rule/user_data_file
      search_string: "{{ search_02 }}"
      line: "{{ event.subnet }}"
    register: result

 
  - name: Retrieve data from csv file
    set_fact:
      raw_user_data: "{{ lookup('file', '/home/jinzha/event-ansi/ansi-rule/user_data_file').split('\n') }}"
      user_data: []
  
  - name: Convert raw data to j2 input data - list of list
    set_fact:
      user_data: "{{ user_data + [ line.split(',') ] }}"
    loop: "{{ raw_user_data }}"
    loop_control:
      loop_var: line

  - name: Generate body data for KIE call
    ansible.builtin.template:
      src: /home/jinzha/event-ansi/ansi-rule/templates/user_data.j2
      dest: /home/jinzha/event-ansi/ansi-rule/files/input.json

  - name: Call KIE
    ansible.builtin.uri:
      url: http://192.168.122.33:8080/kie-server/services/rest/server/containers/instances/{{CONTAINER_ID}}
      user: "{{USERNAME}}"
      password: "{{PASSWORD}}"
      method: POST
      force_basic_auth: yes
      status_code: [201, 200]
      return_content: true
      headers:
        Content-Type: application/json
      body_format: json
      body: "{{ lookup('ansible.builtin.file','/home/jinzha/event-ansi/ansi-rule/files/input.json') }}"
    register: result


  - name: Facts in working memory
    set_fact:
      kie_facts: "{{ result['json']['result']['execution-results']['results'][1]['value'] }}"


  - name: check if new routing is found
    command: /usr/bin/python3
    args:
      stdin: |
        import re
        find_solution = False
        search_pattern = ".*com.labs.route_02.Solution.*"

        for f in {{ kie_facts }}:
          if re.search(search_pattern, str(f)):
            find_solution = True
            f['com.labs.route_02.Solution']['implement'] = 'New Routing'
            print(f)
        print(find_solution)
    register: results

  - name: Failed to find Solution
    debug:
      msg: "Failed to find Solution"
    when:  not results.stdout_lines[-1] | bool 


  - name: Solution is found and add action to event queue
    block:
    - name: Found Solution
      debug:
        msg: "{{ results.stdout_lines[0] }}"

    - set_fact:
        solution: "{{ results.stdout_lines[0] }}"

    - debug:
        msg: "{{ solution['com.labs.route_02.Solution']['path'] }}"

    - name: Send event to Kafka topic
      ansible.builtin.shell: |
        set timeout 300
        echo "{{ solution['com.labs.route_02.Solution'] }}" | sed "s/'/\"/g" | bin/kafka-console-producer.sh --broker-list 192.168.122.33:9092 --topic quickstart-events
      args:
        chdir: /home/jinzha/kafka_2.13-3.3.1
    when:  results.stdout_lines[-1] | bool
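
For troubleshooting, this playbook can also be run on its own, bypassing the rulebook, by passing a simulated event as an extra variable (a hypothetical test invocation; the event structure matches what the kafka source delivers to the playbook):

# Run the playbook standalone with a fake 'Subnet Update' event
ansible-playbook network-event.yaml \
  -e '{"event": {"subnet": "B,C,3", "implement": "Subnet Update"}}'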

We have one file, ‘user_data_file’, providing the network map info. Each line represents one direct link between the node in the 1st field and the node in the 2nd field. The 3rd field is the cost. The data below represents the map in ‘Figure-01’.

$ cat user_data_file
A,B,1
B,C,1
D,E,1
E,F,1
G,H,1
H,I,1
A,D,1
D,G,1
B,E,1
H,E,1
C,F,1
I,F,1
J,G,1
k,C,1
H,E,1
I,F,1

When ‘Event-01’ happens, the user_data_file looks like the one below. The new line ‘F,k,1’ is added, which represents the new link that was built.

$ cat user_data_file
A,B,1
B,C,1
D,E,1
E,F,1
G,H,1
H,I,1
A,D,1
D,G,1
B,E,1
H,E,1
C,F,1
I,F,1
J,G,1
k,C,1
H,E,1
I,F,1
F,k,1

The next tasks, ‘Retrieve data from csv file’ and ‘Convert raw data to j2 input data - list of list’, assign a value to the variable ‘user_data’. It looks like this:

user_data:
- [A,B,1]
- [B,C,1]
- [D,E,1]
- [E,F,1]
- [G,H,1]
- [H,I,1]
- [A,D,1]
- [D,G,1]
- [B,E,1]
- [H,E,1]
- [C,F,1]
- [I,F,1]
- [J,G,1]
- [k,C,1]
- [H,E,1]
- [I,F,1]
- [F,k,1]

Jinja Template to generate KIE input

$ cat templates/user_data.j2
{
"commands":[
  {
        "insert":{
          "object":{
            "com.labs.route_02.Query_Path":{
              "start":"Subnet-{{ source|upper }}",
              "end":"Subnet-{{ dest|upper }}"
            }
          }
        }
  },
{% for path in user_data %}
  {
        "insert":{
          "object":{
            "com.labs.route_02.Path":{
              "cost":{{ path[2] }},
              "start":"Subnet-{{ path[0]|upper }}",
              "end":"Subnet-{{ path[1]|upper }}",
              "record":"<Subnet-{{ path[0]|upper }},Subnet-{{ path[1]|upper }},{{ path[2] }}>",
              "child_left":null,
              "child_right":null
            }
          }
        }
  },
  {
        "insert":{
          "object":{
            "com.labs.route_02.Path":{
              "cost":{{ path[2] }},
              "start":"Subnet-{{ path[1]|upper }}",
              "end":"Subnet-{{ path[0]|upper }}",
              "record":"<Subnet-{{ path[1]|upper }},Subnet-{{ path[0]|upper }},{{ path[2] }}>",
              "child_left":null,
              "child_right":null
            }
          }
        }
  },
{% endfor %}
  {
	"fire-all-rules":{
  	"out-identifier": "firedRules"
	}
  },
  {
	"get-objects": {
  	"out-identifier": "objects"
	}
  },
  {"dispose":{}}
]
}
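
The ‘Call KIE’ uri task in the playbook is equivalent to posting this rendered payload to the KIE Server REST endpoint directly; a rough curl equivalent using the same host, credentials, and container id:

$ curl -u kieserver:kieserver1! -X POST \
    -H "Content-Type: application/json" \
    -d @/home/jinzha/event-ansi/ansi-rule/files/input.json \
    http://192.168.122.33:8080/kie-server/services/rest/server/containers/instances/route-02_1.0.0-SNAPSHOT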

Demo

Start RHDM and Kafka

[root@kafka start]# cat start-rhdm-kafka.sh
#!/bin/bash

# Start RHDM
tmux new-session -d -s rhdm      '/root/rhdm/rhdm7-install-demo/target/jboss-eap-7.3/bin/standalone.sh'

# Start zookeeper server
tmux new-session -d -s zookeeper '/root/kafka_2.13-3.3.1/bin/zookeeper-server-start.sh /root/kafka_2.13-3.3.1/config/zookeeper.properties'

# Start kafka server
tmux new-session -d -s kafka     '/root/kafka_2.13-3.3.1/bin/kafka-server-start.sh     /root/kafka_2.13-3.3.1/config/server.properties'

# Optional: monitor the event from topic quickstart-events
tmux new-session -d -s consumer  '/root/kafka_2.13-3.3.1/bin/kafka-console-consumer.sh --topic quickstart-events --from-beginning --bootstrap-server localhost:9092'

[root@kafka start]# ./start-rhdm-kafka.sh

[root@kafka start]# tmux ls
consumer: 1 windows (created Sun Feb 19 15:37:29 2023) [80x24]
kafka: 1 windows (created Sun Feb 19 15:37:29 2023) [80x24]
rhdm: 1 windows (created Sun Feb 19 15:37:29 2023) [80x24]
zookeeper: 1 windows (created Sun Feb 19 15:37:29 2023) [80x24]
[root@kafka start]# 

Start Event-Driven Ansible

[jinzha@event-ansible ~]$ cd /home/jinzha/event-ansi/ansi-rule 
[jinzha@event-ansible ansi-rule]$ ansible-rulebook --rulebook network-kafka-rhdm.yml -i inventory.yml 
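
The inventory.yml referenced here can be minimal; a sketch, assuming everything runs on localhost:

[jinzha@event-ansible ansi-rule]$ cat inventory.yml
all:
  hosts:
    localhost: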

The initial network map

The data below in the file ‘user_data_file’ represents ‘Figure-01’. The cost is 1 for each link between directly connected nodes.

[jinzha@event-ansible ~]$ cd /home/jinzha/event-ansi/ansi-rule 
[jinzha@event-ansible ansi-rule]$ cat user_data_file
A,B,1
B,C,1
D,E,1
E,F,1
G,H,1
H,I,1
A,D,1
D,G,1
B,E,1
H,E,1
C,F,1
I,F,1
J,G,1
k,C,1
H,E,1
I,F,1

Event One

We simulate the cost increasing to 3 between nodes ‘B’ and ‘C’.

Send event

$ echo '{"subnet": "B,C,3", "implement": "Subnet Update"}' | sed "s/'/\"/g" | ~/kafka_2.13-3.3.1/bin/kafka-console-producer.sh --broker-list 192.168.122.33:9092 --topic quickstart-events 

The output of Event-Driven Ansible

$ ansible-rulebook --rulebook network-kafka-rhdm.yml -i inventory.yml 

PLAY [Analyze the network event] ***********************************************

TASK [set_fact] ****************************************************************
ok: [localhost]

TASK [set_fact] ****************************************************************
ok: [localhost]

TASK [set_fact] ****************************************************************
ok: [localhost]

TASK [Update the network map file - forward connection] ************************
changed: [localhost]

TASK [Update the network map file - backward connection] ***********************
ok: [localhost]

TASK [Retrieve data from csv file] *********************************************
ok: [localhost]

TASK [Convert raw data to j2 input data - list of list] ************************
ok: [localhost] => (item=A,B,1)
ok: [localhost] => (item=B,C,3)
ok: [localhost] => (item=D,E,1)
ok: [localhost] => (item=E,F,1)
ok: [localhost] => (item=G,H,1)
ok: [localhost] => (item=H,I,1)
ok: [localhost] => (item=A,D,1)
ok: [localhost] => (item=D,G,1)
ok: [localhost] => (item=B,E,1)
ok: [localhost] => (item=H,E,1)
ok: [localhost] => (item=C,F,1)
ok: [localhost] => (item=I,F,1)
ok: [localhost] => (item=J,G,1)
ok: [localhost] => (item=k,C,1)
ok: [localhost] => (item=H,E,1)
ok: [localhost] => (item=I,F,1)

...
TASK [Call KIE] ****************************************************************
ok: [localhost]
...
TASK [Found Solution] **********************************************************
ok: [localhost] => {
    "msg": {
        "com.labs.route_02.Solution": {
            "cost": 6,
            "end": "Subnet-K",
            "implement": "New Routing",
            "path": "<Subnet-J,Subnet-G,1> ==> <Subnet-G,Subnet-H,1> ==> <Subnet-H,Subnet-E,1> ==> <Subnet-E,Subnet-F,1> ==> <Subnet-F,Subnet-C,1> ==> <Subnet-C,Subnet-K,1>",
            "start": "Subnet-J"
        }
    }
}
...
TASK [Send event to Kafka topic] ***********************************************
changed: [localhost]
...
TASK [info about new routing] **************************************************
ok: [localhost] => {
    "msg": {
        "cost": 6,
        "end": "Subnet-K",
        "implement": "New Routing",
        "path": "<Subnet-J,Subnet-G,1> ==> <Subnet-G,Subnet-H,1> ==> <Subnet-H,Subnet-E,1> ==> <Subnet-E,Subnet-F,1> ==> <Subnet-F,Subnet-C,1> ==> <Subnet-C,Subnet-K,1>",
        "start": "Subnet-J"
    }
}

TASK [Configure the new routing] ***********************************************
ok: [localhost] => {
    "msg": "Configure Routing: <Subnet-J,Subnet-G,1> ==> <Subnet-G,Subnet-H,1> ==> <Subnet-H,Subnet-E,1> ==> <Subnet-E,Subnet-F,1> ==> <Subnet-F,Subnet-C,1> ==> <Subnet-C,Subnet-K,1>"
}

Look into the output

From the output, you can see the cost is updated to 3 between ‘B’ and ‘C’.

...
TASK [Convert raw data to j2 input data - list of list] ************************
ok: [localhost] => (item=A,B,1)
ok: [localhost] => (item=B,C,3)
...

Call RHDM with updated data

...
TASK [Call KIE] ****************************************************************
ok: [localhost]
...

Receive the output from RHDM

RHDM detects the path J ==> G ==> H ==> E ==> F ==> C ==> K with cost 6. All the info is sent to the event queue for the next action. The {“implement”: “New Routing”} key-value pair is what the Ansible Rulebook listens for.

...
TASK [Found Solution] **********************************************************
ok: [localhost] => {
    "msg": {
        "com.labs.route_02.Solution": {
            "cost": 6,
            "end": "Subnet-K",
            "implement": "New Routing",
            "path": "<Subnet-J,Subnet-G,1> ==> <Subnet-G,Subnet-H,1> ==> <Subnet-H,Subnet-E,1> ==> <Subnet-E,Subnet-F,1> ==> <Subnet-F,Subnet-C,1> ==> <Subnet-C,Subnet-K,1>",
            "start": "Subnet-J"
        }
    }
}
...

Send the new solution to the event queue

...
TASK [Send event to Kafka topic] ***********************************************
changed: [localhost]
...

The new event captured & related playbook triggered

TASK [info about new routing] **************************************************
ok: [localhost] => {
    "msg": {
        "cost": 6,
        "end": "Subnet-K",
        "implement": "New Routing",
        "path": "<Subnet-J,Subnet-G,1> ==> <Subnet-G,Subnet-H,1> ==> <Subnet-H,Subnet-E,1> ==> <Subnet-E,Subnet-F,1> ==> <Subnet-F,Subnet-C,1> ==> <Subnet-C,Subnet-K,1>",
        "start": "Subnet-J"
    }
}

TASK [Configure the new routing] ***********************************************
ok: [localhost] => {
    "msg": "Configure Routing: <Subnet-J,Subnet-G,1> ==> <Subnet-G,Subnet-H,1> ==> <Subnet-H,Subnet-E,1> ==> <Subnet-E,Subnet-F,1> ==> <Subnet-F,Subnet-C,1> ==> <Subnet-C,Subnet-K,1>"
}

Event Two

Simulate a new link being built between ‘F’ and ‘K’ with cost ‘1’.

$ echo '{"subnet": "F,K,1", "implement": "Subnet Update"}' | sed "s/'/\"/g" | ~/kafka_2.13-3.3.1/bin/kafka-console-producer.sh --broker-list 192.168.122.33:9092 --topic quickstart-events

The new event captured & related playbook triggered

The new path is J ==> G ==> H ==> I ==> F ==> K with cost ‘5’.

TASK [info about new routing] **************************************************
ok: [localhost] => {
    "msg": {
        "cost": 5,
        "end": "Subnet-K",
        "implement": "New Routing",
        "path": "<Subnet-J,Subnet-G,1> ==> <Subnet-G,Subnet-H,1> ==> <Subnet-H,Subnet-I,1> ==> <Subnet-I,Subnet-F,1> ==> <Subnet-F,Subnet-K,1>",
        "start": "Subnet-J"
    }
}

Event-Driven Ansible opens up the possibility of faster resolution and greater automated observation of our environments. It can simplify the lives of many technical and sleep-deprived engineers. The current ansible-rulebook is easy to learn and work with, and the graphical user interface, EDA-Server, will simplify this further.

True IT automation as a strategy

https://www.redhat.com/en/blog/achieving-speed-and-accuracy-through-event-driven-automation

When you combine IT automation with decision systems, rules, and workflow support, automation is elevated and more impactful to your organization. Far more than cost savings, you have the tools to transform IT to focus on innovation and key engineering challenges, eliminating hundreds or thousands of low-level tasks that have to be done, yet distract from key priorities. Imagine the value you can deliver from IT when you are able to tame mundane yet necessary actions using automation.  

Finally, if your automation is centralized with the Ansible Automation Platform and you implement an event-driven automation solution, you can increase the observability of all your systems, gaining an unparalleled level of visibility into what is happening and what is changing at any time.




https://www.linkedin.com/in/alpha-wolf-jin/
I’m Jin, Red Hat ASEAN Senior Platform Consultant. My primary focus is Ansible Automation (Infrastructure as Code), OpenShift, and OpenStack.
