Enhancing Logging Efficiency in IDN: Part Two

I’m delighted to follow up on my previous article on Optimising Log Retrieval in IDN, which garnered positive feedback. In this installment, we’re taking our approach to the next level.

In the context of our internal cloud system, log lines may not arrive in proper order. Consequently, when these log lines are retrieved and presented, users often face the challenge of manually rearranging them. Moreover, if a logging line is executed multiple times (such as in a loop), managing multiple entries can be cumbersome, making it difficult to discern the chronological sequence.

Let’s revisit one of the examples from the prior article to illustrate how log lines are currently written:
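It went roughly like this – build a prefix from the identity and append it to the log statement by hand (the rule name and the employeeNumber attribute are placeholders):

String logPrefix = "Generate sAMAccountName - [" + identity.getAttribute("employeeNumber") + "] ";
log.error(logPrefix + "Entering rule");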

By adopting a slight modification to this method, we can write multiple log lines with a standardized prefix to easily identify the associated identity.
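For example (the messages are placeholders):

log.error(logPrefix + "Entering rule");
log.error(logPrefix + "Calculated the sAMAccountName value");
log.error(logPrefix + "Exiting rule");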

Taking it a step further, we introduce a logNumber and encapsulate the entire logging structure into a method that is repeatedly executed:
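A sketch of the idea in BeanShell (the prefix, the messages and the “Step” wording are placeholders – adapt them to your own rule):

int logNumber = 1;
String logPrefix = "Generate sAMAccountName - [" + identity.getAttribute("employeeNumber") + "] ";

// Single method that controls the log level, the prefix and the running counter.
// logNumber and logPrefix are picked up from the enclosing rule scope.
void logMessage(String message) {
    log.error(logPrefix + "Step " + logNumber + " - " + message);
    logNumber++;
}

logMessage("Entering rule");
logMessage("Calculated the sAMAccountName value");
logMessage("Exiting rule");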

Key differences from the previous code are the introduction of logNumber as a newly initialized counter and the logMessage method, which is called for every log line, printing the current logNumber and then incrementing it.

The result of this modified code is a more streamlined output, exemplified as follows:
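Using the hypothetical rule above, the output for one identity would look something like this:

Generate sAMAccountName - [EMP001] Step 1 - Entering rule
Generate sAMAccountName - [EMP001] Step 2 - Calculated the sAMAccountName value
Generate sAMAccountName - [EMP001] Step 3 - Exiting rule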

While I won’t provide a full code execution example here, it’s evident how this approach simplifies handling logs in loops or complex rules with multiple log lines, making it easier to decipher the order of execution.

To summarize the main advantages:

  1. A single method streamlines the logging mechanism.
  2. logNumber gives every entry a sequential number, making it easy to discern the execution order.
  3. A single instance of log.error, easily switched to log.info or another log level without editing each log line individually.

I collaborated on this work with my colleague Kenny Li, a Senior Solution Architect at SailPoint; together we turned my individual work into an easily applicable method.

As this will likely be my last blog for the year, I wish you all a Merry Christmas and a Happy New Year!!!

Optimizing Log Retrieval in IDN Cloud Rules

When it comes to extracting logs from cloud rules, our usual route is to reach out to support or Expert Services (ES). However, if these logs lack proper formatting, sifting through them for a specific user’s run can be quite challenging.

Here’s a method I employ to streamline the tracking of logs for individual runs, making it easier for you to obtain them via the support team.

Log Prefixing for Enhanced Clarity

To facilitate this, each rule type has access to some identity data, which we use to build a logPrefix that is included in every log line within the rule.

While there may be alternative approaches for various rule types, I’ve outlined my preferred methods below.

IdentityAttribute / AttributeGenerator / AttributeGeneratorFromTemplate / Generic Rule

For rules with access to the identity object, you can create a logPrefix attribute to append to each log line as follows:
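For example (the rule name and the employeeNumber attribute are placeholders for whatever uniquely identifies your identities):

// identity is available as an input to these rule types
String logPrefix = "Generate sAMAccountName - [" + identity.getAttribute("employeeNumber") + "] ";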

Now, you can use this logPrefix to append to every log statement, like so:
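For example (the messages are placeholders):

log.error(logPrefix + "Entering rule");
log.error(logPrefix + "Calculated the attribute value");
log.error(logPrefix + "Exiting rule");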

BeforeProvisioning Rule

For rules with access to the plan object, use the following approach:
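A sketch, assuming the plan’s native identity is a useful identifier on your source (the prefix text is a placeholder):

// plan is the ProvisioningPlan passed into a BeforeProvisioning rule
String logPrefix = "AD BeforeProvisioning - [" + plan.getNativeIdentity() + "] ";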

Now, you can incorporate the logPrefix in the log lines as mentioned earlier.

Correlation Rule

When dealing with the Correlation Rule and access to the account object, fetch a primary identifier (e.g., STAFF_NUMBER) for enhanced identification:
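For example (STAFF_NUMBER and the prefix text are placeholders):

// account is the ResourceObject for the account being correlated
String logPrefix = "AD Correlation - [" + account.getAttribute("STAFF_NUMBER") + "] ";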

ManagerCorrelation Rule

For the ManagerCorrelation Rule and access to the link object, retrieve a primary key (e.g., Userid) for better association:
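For example (Userid and the prefix text are placeholders):

// link is the account Link used for manager correlation
String logPrefix = "Manager Correlation - [" + link.getAttribute("Userid") + "] ";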

BuildMap Rule

Finally, for the BuildMap Rule and access to cols and record of the accounts, fetch an attribute (e.g., EMP_NO) via a map for logging:
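A sketch of the idea – build a map of column name to value for the current record, then pull the identifier out of it (EMP_NO and the prefix text are placeholders):

import java.util.HashMap;
import java.util.Map;

// cols is the list of column names, record is the list of values for this row
Map map = new HashMap();
for (int i = 0; i < cols.size(); i++) {
    map.put(cols.get(i), record.get(i));
}
String logPrefix = "HR BuildMap - [" + map.get("EMP_NO") + "] ";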

Streamlined Output Request

When requesting logs, provide the formatted logPrefix, the organization name, and the timeframe. For example:

 Generate sAMAccountName - [EMP001]

The logs, once fetched, will be neatly formatted and easily identifiable, even in scenarios where the rule runs for thousands of users but you need information about just one user for troubleshooting:
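Purely as an illustration, every line for that user carries the same prefix, so the run is easy to pick out:

Generate sAMAccountName - [EMP001] Entering rule
Generate sAMAccountName - [EMP001] Calculated the attribute value
Generate sAMAccountName - [EMP001] Exiting rule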

I hope this aids you on your rule journey! If you have any questions, feel free to reach out.

Fix DNS issue for Domains ending with .local and SailPoint VA

So I came across a client who has a domain ending with .local and stumbled across a weird issue with our SailPoint Linux VAs.

Now, I am no DNS / Linux expert, and I am not saying that you will have this issue just because you have a .local domain. So YMMV.

The VA could do an nslookup on the domain but couldn’t run ping, openssl and other such commands. As a result, it couldn’t connect to the server via the domain name, SSL verification broke, and the connector didn’t work.

For example, the AD domain was called “abc.local”. After the VA setup, the VA could perform an nslookup but the openssl command failed, which meant the connector couldn’t connect via the domain name and verify the SSL certificate. The workaround was to connect via the IP address, but the certificate didn’t contain the IP address, so the SSL configuration still didn’t work. This also affected all the other domain-joined servers we needed to connect to whose names end in .local.

After doing some research, I found many articles that pointed to the /etc/nsswitch.conf file and one particular line:

hosts: files usrfiles resolve [!UNAVAIL=return] myhostname dns

This line needs to be changed to the following (i.e., remove [!UNAVAIL=return]):

hosts: files usrfiles resolve myhostname dns

I won’t go into details on why and what it does – there are plenty of articles explaining DNS and Linux interactions – and I am no expert on this.

Previously we couldn’t edit this file directly on our VA due to its locked-down nature. So I worked with our internal team and have finally got a fix out if you are in this situation.

For this to work, the charon version needs to be at least 1624. You can check your charon version by running the following command:

sudo docker images | grep charon

Note: If you don’t have that version yet, don’t worry – it will get rolled out as part of the standard updates in the coming months.

Fix

Run the following commands

To revert the changes

That should re-create the original symlink.

NOTE: A wrong edit to this file can effectively cause a denial of service on the VA. Please be careful: test this in sandbox before any production implementation, and make sure you have direct access to the VA so you can restore the file if needed.

Pro Tip: How to Seamlessly Move Cloud Rules Across Tenants

In any deployment, you will end up writing a couple of cloud rules that need to be sent to SailPoint Expert Services (ES) for uploading to your tenant.

Typically, we deploy to the sandbox tenant, test the rules, and then move them to the production tenant. Traditionally, this involved emailing ES to request the rule transfer across tenants.

However, there is a quick and easy alternative if you don’t want to wait for ES deployment in the next tenant. I will explain the sp-config API available for moving code across environments.

Many may not be aware that this API can also export and import deployed rules. The number of supported objects has increased since I last wrote my article.

This list is continually growing, but the main object we currently care about is RULE.

Here are the simple steps:

  • Get your rule approved and deployed into your sandbox environment.
  • Export the rule from the sandbox environment via the sp-config API (a sketch of the export/import calls follows this list).
  • Once exported, import the rule into the production environment. This lets you move rules as long as you don’t change the object in transit or edit the exported JSON.
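A rough sketch of what those calls look like, assuming the v3 sp-config endpoints and a bearer token for each tenant (the tenant names, job ID and file name are placeholders – check the API documentation for the exact request options):

# Export RULE objects from the sandbox tenant
curl -X POST "https://{sandbox-tenant}.api.identitynow.com/v3/sp-config/export" \
  -H "Authorization: Bearer $SANDBOX_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"description": "Rule export", "includeTypes": ["RULE"]}'

# Once the export job completes, download the result
curl -H "Authorization: Bearer $SANDBOX_TOKEN" \
  "https://{sandbox-tenant}.api.identitynow.com/v3/sp-config/export/{jobId}/download" \
  -o rules.json

# Import the unchanged JSON into the production tenant
curl -X POST "https://{prod-tenant}.api.identitynow.com/v3/sp-config/import" \
  -H "Authorization: Bearer $PROD_TOKEN" \
  -F "data=@rules.json"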

This method is great for moving rules across tenants. However, I still recommend deploying the rules via the normal process for consistency.

Tokenisation for Environments

When moving rules across environments, we often need to make changes because there are different values for variables, such as the AD OU structure that might be referenced. How can we make the process seamless, allowing you to copy and deploy the same rule without any changes in all environments?

Let’s go through each cloud rule type and see how we can solve this issue. Please note that this may not work for very complex rules but should be effective most of the time.

  • Generic Rule: These rules are always invoked via a transform, so the code inside is easy to maintain. Pass the variables in via the transform, which can remain the same across tenants.
  • Identity Attribute Rule: For example, lifecycle state (LCS) rules often use source names to refer to attributes and retrieve their values. Keep the same source name across environments so the rule can refer to it consistently. This also helps when moving transforms, since the source names they refer to stay the same.
  • Manager Correlation / Correlation / Account Profile / Before Provisioning Rule: All of these rule types are attached to a source, which means they receive an input of the Application class. Here’s what I typically do:

Let’s say a Before Provisioning (BP) rule is created for the AD connector in the sandbox and needs to be moved to production. There might be some subtle differences in the rule between the two environments, such as the AD Disabled OU:

AD Dev Disabled OU: OU=Disabled,DC=abc,DC=dev,DC=local

AD Prod Disabled OU: OU=Disabled,DC=abc,DC=local

The rest of your logic may be similar, but you need to change these values between environments. To handle this, I use the following code within the rule:
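A sketch of the pattern (the source ID, source names and OUs below are placeholders – look up your own IDs via the /v3/sources API):

// application is the Application (source) object this rule is attached to
String sourceId = application.getId();

String AD_SOURCE;
String AD_DISABLED_OU;

if ("<sandbox-AD-source-id>".equals(sourceId)) {
    // Sandbox tenant values
    AD_SOURCE = "Active Directory [Sandbox]";
    AD_DISABLED_OU = "OU=Disabled,DC=abc,DC=dev,DC=local";
} else {
    // Production tenant values
    AD_SOURCE = "Active Directory";
    AD_DISABLED_OU = "OU=Disabled,DC=abc,DC=local";
}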

As you can see, I use the application.getId() method to retrieve the ID of the application the rule is attached to. Since we would have created the source in both the sandbox and production environments, you can obtain each ID via the /v3/sources API and set them accordingly. By knowing which environment my rule is attached to, I can then set the variables for AD_SOURCE and AD_DISABLED_OU (and any other logic), allowing me to copy the same rule into both the sandbox and production environments.

This approach minimizes code maintenance and ensures that the same rule can be copied unchanged to both environments. As long as we have set the correct variables for each environment, our logic will work when tested in the sandbox and in production. This also eliminates the need to inject such values into the source JSON (an old method).

I hope this explanation makes sense and helps simplify your rule development, maintenance, and deployment across tenants.

Happy coding!!!

IdentityNow Rule Validator 3.0 + Generic Rules

As you may know, IdentityNow cloud rules have to be submitted to SailPoint for upload to the tenant. We have a Rule Validator tool that checks IdentityNow rules for malformed or incorrect code fragments and helps make sure they conform to the SailPoint IdentityNow Rule Guide before rule submission.

We have had a great release of a brand new IdentityNow Rule Validator v3.0 (currently sitting at 3.0.23 at the time of writing). This is a major jump forward; the release notes mention the following (and there are many more enhancements than they state 🙂):

  • The BeanShell linter will now validate syntax and usage to help discover issues in your code before you deploy.
  • A watch option continually monitors and validates/lints your code while you develop.

Download: https://community.sailpoint.com/t5/Professional-Services/IdentityNow-Rule-Validator/ta-p/166116

Please download and use the latest version when submitting rules for deployment; otherwise your rule will get rejected for using the old version.

What I wanted to point out is that Generic rules may start failing validation, because the linter now performs much stricter checks on variables that come from a transform but are not defined in the rule. You will need to add them to the <Signature> tag for the rule to pass the validator.

Example
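Here is a minimal sketch of the kind of Generic rule I mean (the rule name and logic are made up; what matters is that it uses two inputs it never declares):

<?xml version='1.0' encoding='UTF-8'?>
<!DOCTYPE Rule PUBLIC "sailpoint.dtd" "sailpoint.dtd">
<Rule name="Calculate Leaver End Date">
  <Description>Returns the end date passed in by the calling transform.</Description>
  <Source><![CDATA[
    // identityEndDate comes from the calling transform; identity is the identity context
    log.info("Calculating end date for " + identity.getName());
    return identityEndDate;
  ]]></Source>
</Rule>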

You will see two inputs:

  • identity – the identity context that every cloud rule has access to, but which is not predefined as an input for the Generic rule type.
  • identityEndDate – an input passed in by the transform that calls the rule.

If I run this through the rule validator, it fails – the validator cannot retrieve the definition for either of these attributes.

Solution

You need to define them under the Signature XML tag so that the validator will let the rule through.

The Signature tag declares each input as an Argument with a name and type, which allows the rule validator to understand what they are. With that in place, the rule will now look something like this:
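(The rule below is the same sketch as before – the only change is the added Signature block.)

<?xml version='1.0' encoding='UTF-8'?>
<!DOCTYPE Rule PUBLIC "sailpoint.dtd" "sailpoint.dtd">
<Rule name="Calculate Leaver End Date">
  <Description>Returns the end date passed in by the calling transform.</Description>
  <Signature>
    <Inputs>
      <Argument name="identity" type="Identity"/>
      <Argument name="identityEndDate" type="String"/>
    </Inputs>
  </Signature>
  <Source><![CDATA[
    log.info("Calculating end date for " + identity.getName());
    return identityEndDate;
  ]]></Source>
</Rule>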

Now the rule will pass validation.

You are good to submit your rule now… 

Happy coding!!!