add support for 'rolling' patches #3

Open
vchepkov opened this issue Oct 24, 2019 · 9 comments

Comments

@vchepkov
Contributor

In some workflows it's desirable to patch one server at a time within a particular group.
Consider Splunk syslog forwarders or Splunk indexers: instead of having to manually split them across many, many groups, it would be easier to keep them in one group and patch one node at a time.

@nmaludy
Member

nmaludy commented Oct 24, 2019

I think that's a great idea.

We have a workflow that does this internally, and we're planning on open-sourcing it in the future.

@nmaludy
Member

nmaludy commented Oct 24, 2019

In the meantime, a workaround is to assign a different patching_order value to each node:

targets:
  - uri: splunk01.domain.tld
    vars:
      patching_order: 1
  - uri: splunk02.domain.tld
    vars:
      patching_order: 2
  - uri: splunk03.domain.tld
    vars:
      patching_order: 3

@vchepkov
Contributor Author

That's what I meant by 'many, many groups': many patching_order values :)
It becomes problematic when the list is dynamic and pulled from PuppetDB.
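
For context, a minimal sketch of the kind of dynamic inventory I mean, using Bolt's puppetdb inventory plugin (the PQL query and the role fact here are just placeholders for illustration, not our actual setup):

groups:
  - name: splunk_indexers
    vars:
      # group-level ordering works fine
      patching_order: 10
    targets:
      # targets are resolved from PuppetDB at run time
      - _plugin: puppetdb
        query: "inventory[certname] { facts.role = 'splunk_indexer' }"

With the target list resolved at run time like this, there's no practical way to hand-assign a distinct patching_order var to each node, which is why a per-group "one at a time" option would help.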

@nmaludy
Member

nmaludy commented Oct 24, 2019

So, either you would need some variable in the inventory to signify that certain groups should be patched sequentially (patching_group_strategy: 'sequential' or something)

Or, you could potentially run a different plan for these special patching groups, if you like.

bolt plan run patching::workflow_sequential --nodes splunk_group

Thoughts here?

@vchepkov
Contributor Author

vchepkov commented Oct 24, 2019

I think having an additional attribute on the group is more convenient, since then you can use a single plan for the whole inventory, so the former.

@msurato
Contributor

msurato commented Feb 26, 2020

I think I have a similar request, but I'm not certain. Is the goal of this request to have a certain group, which could be a subset of a larger plan, patch completely in some arbitrary order before moving on to the next group? Or am I missing something?

@voiprodrigo

@msurato I think the idea is that you would be able to order the execution of individual targets within the same group.

@vchepkov
Contributor Author

vchepkov commented Mar 5, 2020

Correct, one at a time, so you don't shut down all the nodes in a cluster at once.

@nmaludy
Member

nmaludy commented Apr 30, 2020

Talked with @vchepkov again. I like the idea of having a var/flag that signifies a group's strategy is 'sequential'; all targets in that group would then be patched one at a time. The default strategy would be 'parallel' or 'concurrent'.

If I were going to mock this up, it would look something like:

groups:
  # option 1: this group is marked as a 'sequential' group
  - name: a
    vars:
      patching_order_strategy: 'sequential'
    targets:
      - x
      - y
      - z

  # option 2: same group, with an additional per-target option to specify the order of each target within the group
  - name: a
    vars:
      patching_order_strategy: 'sequential'
    targets:
      - name: foo
        vars:
          patching_target_order: 1
      - name: bar
        vars:
          patching_target_order: 3
      - name: baz
        vars:
          patching_target_order: 2

I really think I can accomplish this fairly simply by modifying the patching::ordered_groups plan to take these new variables into account and output the proper group orderings. In theory, we could support many sequential and parallel groups all being patched from the same inventory.
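
To illustrate that last point, a rough hypothetical inventory mixing both strategies, using the var names proposed above and assuming 'parallel' is the default when no strategy is set:

groups:
  # patched one target at a time
  - name: splunk_indexers
    vars:
      patching_order: 1
      patching_order_strategy: 'sequential'
    targets:
      - splunk01.domain.tld
      - splunk02.domain.tld

  # no strategy set, so all targets in this group are patched concurrently (the assumed default)
  - name: app_servers
    vars:
      patching_order: 2
    targets:
      - app01.domain.tld
      - app02.domain.tld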
