FlexNet Code Insight Workspace Export Import Scripts - Add-On

Summary

These Groovy scripts export and import group definitions and associated file information (e.g., tags applied to files), along with all attendant custom data.

Synopsis

These FlexNet Code Insight Groovy scripts export and import group definitions and associated file information (e.g., tags applied to files), along with all attendant custom data.

NOTE: The URL syntax for the core server changed in Palamida versions later than 6.6.x. In these versions, ScriptRunner runs through Palamida's REST interface, so ScriptRunner and the import/export scripts expect a full HTTP URL with a port and webapp, rather than a simple hostname. For instance, localhost would be specified as http://localhost:8888/palamida/ including the port number and the final forward slash. The syntax for specifying the scan server includes the port number, like this: some.server.name:8888.


Discussion

Installation

ZIP installation

  • If you received a ZIP file, then unzip it in the scriptRunner directory

  • This should place the files in the locations indicated in the "Manual Installation" section (immediately below)

Manual Installation

  • Place the driving scripts (exportWorkspaceData.groovy and importWorkspaceData.groovy) within the scriptRunner/scripts directory

    • These two files will be executed from the ScriptRunner framework

    • These are Groovy scripts (not classes) and cannot function alone; they require the Groovy classes in the groovy_classes directory
  • Place the groovy_classes directory within the scriptRunner/lib directory

    • The files in the groovy_classes directory are Groovy classes (not scripts) which are called from the two driving scripts above

    • These files should never be called directly from ScriptRunner as they cannot function alone

Usage Overview

These scripts have been designed for two distinct scenarios:

  1. In-place backup and restoration of workspace audit information.

  2. Migrating workspace audit information from one Palamida instance to another. This includes transitioning to newer versions of the Palamida application.

Usage with ScriptRunner

These scripts will only function properly when executed through the ScriptRunner framework.

The flags passed to the ScriptRunner script itself are not passed through to the import or export scripts. For instance, in a stand-alone environment (both the core server and scan server running on the same machine) with IP address 111.122.133.144, suppose you invoke ScriptRunner in this manner:

scriptRunner.sh -u username_foo -p password_foo -c 111.122.133.144

You must still pass the export script the name of the server from which you would like to export a workspace:

exportWorkspaceData.groovy -server 111.122.133.144

Otherwise the export script will use its hard-coded server name default value (i.e., localhost).

In the above case, the entire command would be:

scriptRunner.sh -u username_foo -p password_foo -c 111.122.133.144 exportWorkspaceData.groovy -server 111.122.133.144

Export Script

The exportWorkspaceData.groovy script uses the Palamida ScriptRunner framework to export one or more Workspaces from one or more Palamida servers.

Export Options

For the most current list of these options, refer to the output from the script's -h flag.

  • File and Path Options

    -output <file>

    Name of the file to export the workspace(s) to, without extension (Default: workspaceData)

    • If a single workspace is being exported, then the result will be an XML file (with this name) containing that one workspace.

    • If multiple workspaces are being exported, then the result will be a ZIP file (with this name) containing all of the exported workspace XML files, the custom data XML file (if necessary), and a log file indicating the significant events of the export.

    -output_path <directory>

    Directory into which the export file(s) will be written (Defaults to the directory from which the script is executed)

    -custom_data_file <file>

    Exported custom data (if any) will be written to this file, without extension (default: customData)

  • Server Options

    -server <hostnameOrIP>

    Export from the Palamida Core Server with this name (default: localhost)

    -scan_server <hostnameOrIP>

    Export from the Palamida Scan Engine with this name (default: value of -server flag).

    -server_all <hostnameOrIP>

    Export all workspaces from the Palamida Scan Engine with this name. This option overrides the -scan_server, -team, -project, and -workspace options.

    -server_all_from_config <hostnameOrIP>

    Read the Palamida Scan Engine machines from the configuration on the indicated server and export all workspaces from all of those Scan Engine machines. This option overrides the -server_all, -server, -scan_server, -team, -project, and -workspace options.

  • What to Export

    -team <teamName>

    Export all workspaces in all projects belonging to this team. This option overrides -workspace. If -team and -project are used together, this will export all workspaces for that project and should be used when different projects have the same name, but were created by different teams.

    -project <projectName>

    Export all workspaces in this project. This option overrides -workspace. If -team and -project are used together, this will export all workspaces for that project and should be used when different projects have the same name, but were created by different teams. This option will also cause the -pa (process active only) flag to be ignored.

    -workspace <workspaceName>

    Export only the workspace with this name. This option will cause the -pa (process active only) flag to be ignored.

    -exclude_file_extensions <extensions>

    When exporting files, skip all files that have the suffixes listed here. Comma-separated list, no spaces.

  • Boolean Options

    -pa

    If present, then process only active ('In Progress') projects

    -pe

    If present, then do not process empty groups

    -pm

    If present, then export metadata for the groups being exported

    -ps

    If present, then do not process system groups

    -export_all_custom_data

    If present, export all custom data, not just custom data referenced by exported groups. If this is not set, then only export custom data which is referenced by the workspace(s) being exported.

    -export_tags

    If present, also export the tags for the group files in the exported workspace(s)

  • Other Options

    -retries <number>

    Number of times to re-try the export of a failed workspace (integer). Most often, an export failure occurs because the workspace is currently being scanned.

    -wait <seconds>

    Number of seconds to wait before re-trying export of a failed workspace (integer).
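The interaction of -retries and -wait can be pictured with a small sketch. This is an illustration of the documented behavior only; the function name and error handling are hypothetical, not the script's actual internals.

```python
import time

def export_with_retries(export_fn, retries=3, wait=30):
    """Attempt an export, re-trying on failure.

    export_fn is a stand-in callable for the real export logic
    (internal to exportWorkspaceData.groovy); it raises on failure.
    """
    for attempt in range(retries + 1):
        try:
            return export_fn()
        except RuntimeError:
            if attempt == retries:
                raise  # out of retries; surface the failure
            # Most often the workspace is still being scanned, so wait.
            time.sleep(wait)
```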

Combining Export Flags

  • Option -server_all_from_config will cause the script to ignore the flags: -server_all, -server, -scan_server, -team, -project, and -workspace

  • Option -server_all will cause the script to ignore the flags: -scan_server, -team, -project, and -workspace

  • Option -team will cause the script to ignore the -workspace flag

  • Option -project will cause the script to ignore the -workspace flag

  • Options -team and -project together will export all workspaces for that project and should be used when different projects have the same name, but were created by different teams

  • Options -workspace or -project will cause the -pa (process active only) flag to be ignored

  • If Option -export_all_custom_data is not set, then only export custom data which is referenced by the workspace(s) being exported
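The precedence rules above can be summarized as a small selection function. The flag names follow the script, but this resolution logic is a sketch of the documented behavior, not the script's actual code.

```python
def resolve_export_scope(flags):
    """Return which selection flag 'wins' under the documented precedence.

    flags: dict mapping flag name -> value for the flags the user supplied.
    """
    # Server-wide flags beat everything else.
    for name in ("server_all_from_config", "server_all"):
        if name in flags:
            return name, flags[name]
    # -team and -project together disambiguate same-named projects.
    if "team" in flags and "project" in flags:
        return "team+project", (flags["team"], flags["project"])
    # Otherwise: team beats project beats workspace.
    for name in ("team", "project", "workspace"):
        if name in flags:
            return name, flags[name]
    return None, None
```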

Export Usage

In a stand-alone environment, the core server is the scan server, so there is no need to use the -scan_server flag.

In a clustered environment, export can be run from any machine which can execute Palamida's ScriptRunner (Core Server, Scan Server, Detector, Client). Use one or more of the Server Options flags (above) to indicate the server(s) from which workspace(s) should be exported.

Workspaces can be designated for export using a variety of flags (e.g., -workspace to export one workspace, -server_all to export all workspaces on that server). However, if the export script cannot determine which workspace(s) to export, it will prompt the user to choose one workspace from the indicated scan server.

Export Output

The export script prints status messages to the screen as it runs, starting with a list of the flags (and their values) it will use for this execution.

All output (XML files, ZIP file, log file, etc) will be written to either the location specified by the -output_path flag (if present) or to the directory from which the script was executed (if the -output_path flag was not specified).

If one workspace is exported, then the output is: one XML file containing the workspace's information, one log file (export.log), and (if there is custom data for that workspace) one XML file containing that custom data.

If multiple workspaces are exported, then the output is one ZIP file containing: one XML file for each exported workspace, one log file (export.log), and (if there is custom data for any exported workspace) one XML file containing all custom data for the exported workspaces.

In general, only data not generated by a scan are exported. Since an exported XML file must be imported into a workspace of scanned files, exporting scan-generated information would be redundant. For instance, only files which are in groups are exported to the XML file. The other files are skipped because importing them would not add any information which had not already been acquired from the scan itself.
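The packaging rule described above (one workspace yields an XML file; several are bundled into a ZIP) might be sketched like this. The helper name and return shape are hypothetical; only the file-naming behavior follows the documentation.

```python
def output_artifacts(workspaces, output="workspaceData", has_custom_data=False):
    """Return the files the export would produce, per the documented rule."""
    if len(workspaces) == 1:
        # Single workspace: a bare XML file, a log, and optional custom data.
        files = [f"{output}.xml", "export.log"]
        if has_custom_data:
            files.append("customData.xml")
        return files
    # Multiple workspaces: everything is bundled into one ZIP.
    return [f"{output}.zip"]
```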

Export Usage Examples

As delineated above, the new export script has a bewildering array of options. Some of the more common usages are presented here.

  1. Export the Workspace named foo from stand-alone server some_server_name to the file named C:\Users\bert\Desktop\ernie\exportedWorkspace.xml:

    exportWorkspaceData.groovy -server some_server_name -workspace foo -output exportedWorkspace -output_path C:\Users\bert\Desktop\ernie
  2. Export the Workspace named foo from core server some_core_server_name and scan server some_scan_server_name to the file named C:\Users\bert\Desktop\ernie\exportedWorkspace.xml:

    exportWorkspaceData.groovy -server some_core_server_name -scan_server some_scan_server_name -workspace foo -output exportedWorkspace -output_path C:\Users\bert\Desktop\ernie
  3. Export all workspaces from the scan server some_scan_server_name to a set of XML files which will be compressed into a file named C:\Users\bert\Desktop\ernie\workspaceDataFromLocalhost.zip. Note how the core server, localhost, still has to be defined:

    exportWorkspaceData.groovy -server localhost -server_all some_scan_server_name -output workspaceDataFromLocalhost -output_path C:\Users\bert\Desktop\ernie
  4. Export all of the custom data on the stand-alone server localhost to an XML file named foo.xml, but do not export any workspaces:

    exportWorkspaceData.groovy -custom_data_file foo -export_all_custom_data

Import Script

Import can be run from any machine which can execute Palamida's ScriptRunner (Core Server, Scan Server, Detector, Client). Use the --server flag to set the value of the Palamida Core Server. In a clustered environment, also set --scan_server to the Palamida Scan Server containing the workspace you just scanned.

Custom data (user-created licenses, components, and/or component versions) can be imported along with a workspace which uses that custom data, or the custom data can be imported by itself.

The general steps are:

  1. Create a new workspace on the Palamida Core Server onto which you want to perform the import.

    This will be the 'target workspace' which will accept the audit data. This workspace must have all the files which were present in the exported workspace.

    In a clustered environment, both the core server and the target scan server must be designated (--server and --scan_server flags).

  2. Execute a Scan of the workspace files.

    An import only copies over data which is not created by a scan.

  3. Open up the XML file containing the workspace data to be imported.

    If the file paths in the XML file do not match the file paths in the workspace you just scanned, then the file information from the XML file will not be imported. So, either:

    • Note which values you will need for the --adv_file_comparison and --adv_file_comparison_depth import flags, OR

    • Edit the export XML file to change the file paths to exactly match the file locations in the 'target workspace'
  4. Run the import script.

    See the descriptions of available flags and the examples below, importantly:

    • Required options for importing a workspace:

    --input (-f) to indicate the XML file which contains the workspace information to be imported

    --workspace (-w) to import data into the workspace with this name

    • If the XML file containing the workspace information references custom data (i.e., user-created licenses, components, and/or component versions), then also indicate which XML file contains that custom data (--custom_data_file (-x) flag).
  5. Open up Detector for the 'target workspace' and verify that the audit information was imported correctly.

Import Options

For the most current list of these options, see the output from the script's -h flag.

  • Import Options Which Accept Parameters

    --input <file> (-f)

    File containing the workspace(s) to be imported

    --custom_data_file <file> (-x)

    Import custom data from this file

    --server <hostnameOrIP> (-c)

    Import the workspace onto this core server (default: localhost)

    --scan_server <hostnameOrIP> (-s)

    Import the workspace onto this scan server (default: value of -server flag)

    --workspace <workspaceName> (-w)

    Import data into the workspace with this name

    --reformat_xml_file <file> (-z)

    Reformat this XML file and do nothing else. If this option is set, then that XML file will be reformatted and the script will exit without importing anything.

    --adv_file_comparison <option> (-a)

    Whether to match files using only a portion of each file's path (default: never). Accepts the values: never, if_no_absolute_match, always. If this flag is not set, then a file element (from the imported XML file) must match the full file path and name in the workspace.

    --adv_file_comparison_depth (-d)

    This option is ignored if option --adv_file_comparison is not set. How much of a file's path/name/MD5 to compare (default: md5_file_name). Accepts the values:

  • Require an MD5 Match:

    • md5_only: To find matches between file elements (from the imported XML file) and files in the workspace, only compare the MD5 hashes.

    • md5_file_name: For each file element and each file in the workspace, compare the MD5 hash and the file's name.

    • md5_file_name_dir_depth_1: Compare the MD5 hash, the file's name, and each file's parent directory.

      • For example: /a/b/c/foo.gif will match /d/e/c/foo.gif, but not /d/e/f/foo.gif (assuming the MD5 hashes also match).
    • md5_file_name_dir_depth_2: Compare the MD5 hash, the file's name, each file's parent directory, and each file's parent directory's parent directory.

    • md5_file_name_dir_depth_#: And so on, to an arbitrary depth.
  • Do not require an MD5 match:

    • no_md5_file_name: For each file element and each file in the workspace, compare only the file's name

    • no_md5_file_name_dir_depth_#: Compare the file's name and each file's parent directory to the indicated depth, as shown above
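The depth options above amount to comparing a file's MD5 hash (when required) plus its name and the last N path components. A sketch of that comparison, illustrative only; the script's real matching code is internal:

```python
def matches(xml_path, ws_path, xml_md5, ws_md5, depth=0, require_md5=True):
    """Match a file element from the XML file against a workspace file.

    depth=0 compares only the file name (md5_file_name);
    depth=1 also compares the parent directory (md5_file_name_dir_depth_1),
    and so on. Set require_md5=False for the no_md5_* variants.
    """
    if require_md5 and xml_md5 != ws_md5:
        return False
    # Normalize separators, then take the name plus `depth` parent dirs.
    split = lambda p: p.replace("\\", "/").rstrip("/").split("/")
    n = depth + 1
    return split(xml_path)[-n:] == split(ws_path)[-n:]
```

Using the documentation's own example, /a/b/c/foo.gif matches /d/e/c/foo.gif at depth 1, but not /d/e/f/foo.gif.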

    --include_tags (-i)

    Import only these tags. Ignore all other tags. Incompatible with --exclude_tags (-e)

    • Accepts a comma-delimited list of tags

    • Converts the % character into a space, allowing for use with tags which have spaces in their names

    --exclude_tags (-e)

    Do not import these tags. Import all other tags. Incompatible with --include_tags (-i)

    • Accepts a comma-delimited list of tags

    • Converts the % character into a space, allowing for use with tags which have spaces in their names

    --path_search_replace_csv (-p)

    Location of the CSV file with the path fragments to find and replace (see below for additional information)

  • Boolean Options

    All of these options default to false. Including any of these flags sets that flag to true.

    --dryrun (-y)

    If present, simulate an import without actually saving anything. Generates a log file containing what would be the significant events encountered during an actual import.

    --check_md5_hash (-m)

    If present, then in addition to checking the full file path, also check each file's MD5 hash. If there is a match, then do not import the file. This option is ignored if advanced file matching (see above flags) is used.

    --update_existing_group_data (-u)

    If present, update existing group data with the content from the XML workspace file.

    --annotate_adv_search_results (-r)

    If present, create a new metadata tag for each file matched with advanced file matching (see above flags).

  • CSV File

The --path_search_replace_csv (-p) flag takes one value: the location of a CSV file which maps the file paths from the workspace XML (indicated by --input (-f)) to the paths on the scan server (indicated by --scan_server (-s)).

The CSV file has the format:

/Path/To/Find/In/XML/File/One,Path/To/Replace/It/With/On/Scan/Server/1

/Path/To/Find/In/XML/File/Two,Path/To/Replace/It/With/On/Scan/Server/2

/Path/To/Find/In/XML/File/Three,Path/To/Replace/It/With/On/Scan/Server/3
  • Paths will be found and replaced exactly as shown in the CSV file; no regular expressions are supported

  • The "path to be found" will be matched only to the beginning of each path in the XML file being imported

    • Limits the possibility of accidentally changing parts of other paths
  • The portion of a path which matches is the only portion which is replaced

  • Every "path to be found" will be tried on every path in the XML file being imported

    • Matches are attempted in the order listed in the CSV file
  • If a match is found, the matching portion of the path is replaced before the next match is attempted

    • For each path, this allows for repeated changes as the search/replace map is iterated over
  • Paths may contain any combination of forward slashes and/or backslashes

    • Backslashes must be escaped with other backslashes

    • Any path which contains at least one backslash must be wrapped in double-quotes

    • For example, if you export from a Windows machine and import onto a Linux machine, your CSV file might look like this:

      "C:\\Path\\To\\Find\\In\\XML\\File\\One",Path/To/Replace/It/With/On/Scan/Server/1
      
      "C:\\Path\\To\\Find\\In\\XML\\File\\Two",Path/To/Replace/It/With/On/Scan/Server/2
      
      "C:\\Path\\To\\Find\\In\\XML\\File\\Three",Path/To/Replace/It/With/On/Scan/Server/3
  • In addition, two operators are available: CONVERT_TO_FORWARD_SLASH and CONVERT_TO_BACKSLASH

    • Place either in the first field of a line. The second value in that line is ignored

    • The conversion is performed on each forward slash or backslash in the entire file path
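The find-and-replace rules above (prefix-only matching, ordered application, repeated replacement, backslash escaping, and the slash-conversion operators) can be sketched as follows. This is an illustration of the documented behavior, with hypothetical helper names, not the script's actual code.

```python
import csv
import io

def load_rules(csv_text):
    """Parse the two-column CSV; doubled backslashes are unescaped
    per the documented convention."""
    unesc = lambda s: s.replace("\\\\", "\\")
    return [(unesc(row[0]), unesc(row[1]) if len(row) > 1 else "")
            for row in csv.reader(io.StringIO(csv_text)) if row]

def rewrite_path(path, rules):
    """Apply each (find, replace) rule, in CSV order, to one file path.

    A rule matches only at the beginning of the (possibly already
    rewritten) path; the operators convert every slash in the path.
    """
    for find, replace in rules:
        if find == "CONVERT_TO_FORWARD_SLASH":
            path = path.replace("\\", "/")
        elif find == "CONVERT_TO_BACKSLASH":
            path = path.replace("/", "\\")
        elif path.startswith(find):
            path = replace + path[len(find):]
    return path
```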

Import Results

The import script prints status messages to the screen as it runs, starting with a list of the flags (and their values) it will use for this execution.

  • Log file (import.log)

    • Located in the directory from which the workspace or custom data files were imported

    • Contains:

      • All significant actions taken during the import

      • All issues encountered during the import

      • All files that did not get associated to any group
  • Metadata tags

    • When a problem is encountered during the import of a group, a metadata tag (display name: Import Notes, field name: import-notes) is created for that group detailing the issue. If such a metadata tag already exists for that group, then the new issue is added to the existing tag.

    • If the --annotate_adv_search_results flag has been included on the command line and a file is matched using advanced file matching (see above), then a metadata tag (display name: File Path Matched by Advanced Logic, field name: file-path-matched-by-advanced-logic, tag value: Yes) is created for that file.

Import Usage Examples

  1. Import the workspace data contained in workspaceData.xml into the workspace foo. This will not overwrite any existing audit group data:

    importWorkspaceData.groovy --input C:\Users\bert\Desktop\workspaceData.xml --workspace foo
  2. Import the workspace data contained in workspaceData.xml into the workspace foo in a clustered environment with core server some_core_server and scan server some_scan_server, where the scan of workspace foo was run on some_scan_server. This will not overwrite any existing audit group data:

    importWorkspaceData.groovy --server some_core_server --scan_server some_scan_server --input C:\Users\bert\Desktop\workspaceData.xml --workspace foo
  3. Import the workspace data contained in workspaceData.xml into the workspace foo and overwrite any existing audit group data with audit group data from that XML file:

    importWorkspaceData.groovy --input C:\Users\bert\Desktop\workspaceData.xml --workspace foo --update_existing_group_data
  4. Import the workspace data contained in workspaceData.xml into the workspace foo along with the custom data from custom.xml. If workspaceData.xml references any custom data from custom.xml, then it is necessary to import custom.xml before (or at the same time as) workspaceData.xml:

    importWorkspaceData.groovy --custom_data_file C:\Users\bert\Desktop\custom.xml --input C:\Users\bert\Desktop\workspaceData.xml --workspace foo
  5. Import only the custom data from file custom.xml:

    importWorkspaceData.groovy --custom_data_file C:\Users\bert\Desktop\custom.xml
  6. Perform a 'dry run' of the import of custom data from the file custom.xml. Since this is a dryrun, the import script will walk through all the import steps, but will not write any data to the database:

    importWorkspaceData.groovy --custom_data_file C:\Users\bert\Desktop\custom.xml --dryrun

Custom Data

The term 'custom data' refers to user-created licenses, components, and component versions. When custom data is exported, it is written to a custom data XML file which is separate from an exported workspace XML file.

  • A piece of custom data is exported any time you export a workspace which references that piece of custom data.

  • If multiple workspaces are exported all at once, then only one custom data XML file is generated. This file contains all of the pieces of custom data referenced by all of the exported workspaces.

  • It is possible to export all of the custom data on an entire Core Server using the -export_all_custom_data flag.

  • Custom data can be imported along with a workspace which uses that custom data, or the custom data can be imported by itself.

  • If you attempt to import a workspace into a database which does not already contain custom data utilized by that workspace, then the workspace import will fail.
Version history
Last update: Oct 22, 2018 04:58 PM