Data Source Configuration

The following sections describe how to configure a federated data source using some of our most popular Smart Connectors.

QuickBooks

QuickBooks is available in several editions, in both on-prem (Desktop) and cloud (Online/SaaS) configurations. These notes specifically target 2019/2020 on-prem Enterprise installations. Contact AppSynergy support for other configurations.

To connect AppSynergy to an on-prem QuickBooks installation, build a connection string as described below.

QuickBooks Example Connect String

To obtain the value for the SSLServerCert property used in the connection string, you will need to extract it from the binary .pfx file generated by the Remote Connector. You can use the following openssl command to extract the public certificate:

openssl pkcs12 -in sample.pfx -clcerts -nokeys -out sample_public.crt

The generated sample_public.crt file can be opened in any text editor (e.g. Notepad); copy its contents, from -----BEGIN CERTIFICATE----- through -----END CERTIFICATE-----, into the connection string shown below:

jdbc:quickbooks:User=test;Password=test;URL=https://YOUR_IP_ADDRESS:8166; SSLServerCert=-----BEGIN CERTIFICATE----- MIIChTCCAe4CAQAwDQYJKoZ...Qw -----END CERTIFICATE-----;

Note that you can also specify the User and Password properties in the User and Password fields on the Data Source panel rather than in the connection URL.
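The extract-and-paste steps above can be scripted. The following is a minimal Python sketch, not part of the product: it flattens a multi-line PEM certificate into the single-line form used by the SSLServerCert property and assembles the connection string (all credential and host values are placeholders):

```python
def flatten_pem(pem_text: str) -> str:
    """Collapse a multi-line PEM certificate into the single-line,
    space-separated form expected by the SSLServerCert property."""
    lines = [ln.strip() for ln in pem_text.strip().splitlines() if ln.strip()]
    return " ".join(lines)


def build_quickbooks_url(user: str, password: str, host: str, cert: str) -> str:
    """Assemble a jdbc:quickbooks connection string like the example above."""
    return (
        f"jdbc:quickbooks:User={user};Password={password};"
        f"URL=https://{host}:8166;SSLServerCert={cert};"
    )


# Example: flatten the contents of sample_public.crt, then build the URL.
pem = "-----BEGIN CERTIFICATE-----\nMIIChTCC\nAe4CAQAw\n-----END CERTIFICATE-----"
url = build_quickbooks_url("test", "test", "YOUR_IP_ADDRESS", flatten_pem(pem))
```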

Enabling QuickBooks AutoCache

To enable QuickBooks auto data caching to improve performance, add the following to the end of the connection URL:

AutoCache=true; CacheLocation=/var/opt/parasql/cache/qb/mycache.db; Other="DefaultColumnSize=255;CacheOptions=AutoTruncateStrings;CacheTolerance=1800;"

QuickBooks Schema Documentation

Additional information about the specific tables exposed by QuickBooks is available here.

Salesforce

There are several authentication methods available for connecting to Salesforce: login credentials, SSO, and OAuth. Login credentials are the recommended option. In the example connect string below you will need to set at least the User, Password, and SecurityToken parameters. (To obtain a security token, open the personal information page on Salesforce.com and click the link to reset your security token; the token will be emailed to you.)

By default the SecurityToken is required but you can make it optional by specifying a trusted IP address. (You can find the IP address of your federated server by creating an API key and pinging the listed Database Server Host.) To disable the security token, sign in to Salesforce and enter Network Access in the Quick Find box in the setup section, then add the IP address to the list of trusted IP addresses.

Example connect string:

jdbc:salesforce:User=myUser;Password=myPassword;SecurityToken=myToken;

By default, the driver connects to production environments. Add UseSandbox=true; to the connection string to use a Salesforce sandbox account. Ensure that your user/password/security token are valid for the sandbox.

To enable caching add the following parameters:

AutoCache=true; CacheLocation=/var/opt/parasql/cache/salesforce/db12345_sf.db; Other="CacheOptions=AutoTruncateStrings;"
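Assembling the Salesforce connect string described above can be sketched in a few lines of Python (an illustrative helper, not part of the product; all credential values are placeholders):

```python
def build_salesforce_url(user: str, password: str, token: str,
                         use_sandbox: bool = False) -> str:
    """Assemble a jdbc:salesforce connection string as described above.

    use_sandbox=True appends UseSandbox=true; to target a Salesforce
    sandbox account instead of production (the default).
    """
    url = f"jdbc:salesforce:User={user};Password={password};SecurityToken={token};"
    if use_sandbox:
        url += "UseSandbox=true;"
    return url


# Production (default) vs. sandbox connection strings:
prod_url = build_salesforce_url("myUser", "myPassword", "myToken")
sandbox_url = build_salesforce_url("myUser", "myPassword", "myToken", use_sandbox=True)
```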

Shopify

First sign in to Shopify and create a Custom Application, then use the credentials it generates to set a basic connection string as follows:

jdbc:shopify: AppId=8cf036b3xxxxxxxxxxxx032b07f5; Password=pass; ShopUrl=https://MY_STORE_NAME.myshopify.com;

To enable caching of results add the following parameters:

AutoCache=true; CacheLocation=/var/opt/parasql/cache/shopify/db12345_shopify.db; Other="DefaultColumnSize=5000;CacheOptions=AutoTruncateStrings;"

Note the large DefaultColumnSize and AutoTruncateStrings parameters; these are needed to handle strings of unknown or very long length, which are common with the Shopify API. Certain columns (e.g. the Image column in the Products table) can easily exceed 10,000 characters yet must still be accommodated by the cache.
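The effect of AutoTruncateStrings can be pictured with a short Python sketch. This is only an illustration of the caching behavior described above, not the connector's actual implementation:

```python
DEFAULT_COLUMN_SIZE = 5000  # mirrors the DefaultColumnSize=5000 parameter above


def cache_value(value: str, max_len: int = DEFAULT_COLUMN_SIZE) -> str:
    """With AutoTruncateStrings enabled, oversized values are silently
    truncated to the column size rather than failing the cache insert."""
    return value[:max_len]


# A 12,000-character Image value is stored as its first 5,000 characters;
# values already within the limit pass through unchanged.
stored = cache_value("x" * 12000)
```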

Atlassian JIRA

This section describes how to configure the Atlassian JIRA Smart Connector.

JIRA Cloud - Basic Authentication

You can establish a connection to any JIRA Cloud account. To connect, provide the User and APIToken. To generate an API Token, log in to your Atlassian account and navigate to API tokens > Create API token. Note: API tokens are available in JIRA Cloud only. If you're using JIRA Server, you must provide User and Password for basic authentication as described below.

jdbc:jira: User=admin; APIToken=YOUR_API_TOKEN; Url=https://yoursitename.atlassian.net; Other="DefaultColumnSize=255";

NOTE: The Test Connection button may return a false positive (i.e. indicate the connection was successful when it was not) due to how this Smart Connector caches schema info. To actually test the connection you will need to run a query.

JIRA Server - Basic Authentication

You can establish a connection to any JIRA Server instance. To connect, provide the User and Password for basic authentication.

jdbc:jira: User=admin; Password=123abc; Url=https://yoursitename.atlassian.net; Other="DefaultColumnSize=255";

Other Options

NOTE: The DefaultColumnSize=255 parameter may be changed to another value. The JIRA API has string columns with an unspecified maximum length; this parameter sets the default max length used when a federated table is created. You can change it to a shorter or longer value, and you can manually edit the sizes of individual columns when creating the federated table. Note that the total row size of a table cannot exceed 64K. When adding a federated table to your database you may need to remove some of the columns with the word "Aggregate" in their name to get under the 64K row size limit (e.g. the Issues table will require removing, or editing down the size of, certain Aggregate columns).
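A quick way to sanity-check a proposed column layout against the 64K row-size limit is sketched below. This is an illustrative approximation (it simply sums declared column sizes; the federated engine's exact per-type overhead may differ), and the column names are hypothetical:

```python
ROW_SIZE_LIMIT = 64 * 1024  # total row size limit (64K)


def row_size_ok(column_sizes: dict[str, int]) -> bool:
    """Return True if the summed declared column sizes fit under 64K."""
    return sum(column_sizes.values()) <= ROW_SIZE_LIMIT


def oversized_aggregates(column_sizes: dict[str, int]) -> list[str]:
    """Candidate columns to drop or shrink first: the 'Aggregate'
    columns called out above, largest declared size first."""
    return sorted(
        (c for c in column_sizes if "Aggregate" in c),
        key=lambda c: column_sizes[c],
        reverse=True,
    )


# Hypothetical Issues-table layout that blows the 64K budget:
cols = {"Summary": 255, "AggregateProgress": 40000, "AggregateTimeSpent": 40000}
over_budget = not row_size_ok(cols)          # True: 80,255 > 65,536
trim_first = oversized_aggregates(cols)      # the Aggregate columns, by size
```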

Add the following parameters to enable caching:

AutoCache=true; CacheLocation=/var/opt/parasql/cache/jira/db12345_jira.db;

Add the following parameters to enable logging:

Logfile=/var/opt/parasql/cache/jira/db12345_jira.log; Verbosity=4;

By default the connector only shows system fields. To access custom fields for Issues add the following parameter:

IncludeCustomFields=true;

Microsoft SQL Server & Azure Databases

This section describes how to configure the Microsoft SQL Server & Azure Smart Connector.

You are able to connect to both on-prem Microsoft SQL Server and cloud-hosted Microsoft Azure SQL databases.

A typical connection string:

jdbc:sqlserver://YOUR_SERVER_NAME.database.windows.net:1433;database=YOUR_DATABASE;encrypt=true;trustServerCertificate=false;hostNameInCertificate=*.database.windows.net;loginTimeout=10;
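Semicolon-delimited JDBC strings like the one above are straightforward to inspect programmatically, which can help when debugging a misconfigured property. A small Python sketch (not part of the product):

```python
def parse_jdbc_properties(url: str) -> dict[str, str]:
    """Split the semicolon-delimited properties of a SQL Server JDBC URL
    (everything after the first ';') into a dict of name -> value."""
    _head, _, props = url.partition(";")
    out = {}
    for pair in props.split(";"):
        if "=" in pair:
            key, _, value = pair.partition("=")
            out[key] = value
    return out


# Inspect the example connection string from above:
props = parse_jdbc_properties(
    "jdbc:sqlserver://YOUR_SERVER_NAME.database.windows.net:1433;"
    "database=YOUR_DATABASE;encrypt=true;trustServerCertificate=false;"
    "loginTimeout=10;"
)
```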

MySQL & MariaDB Databases

You are able to connect to both on-prem and cloud-hosted MySQL, MariaDB, Amazon RDS and other "MySQL compatible" databases.

Typical Secure Connection

The following assumes the server has an SSL certificate signed by a global certificate authority:

jdbc:mariadb://server123.domain.com/dbname?useSSL=true&trustServerCertificate=false

Secure Connection with Self-Signed Certificate

The following assumes the server has an SSL certificate but that it is NOT signed by a global certificate authority:

jdbc:mariadb://173.194.84.17/dbname?useSSL=true&trustServerCertificate=false&serverSslCert=-----BEGIN CERTIFICATE----- characters -----END CERTIFICATE-----

The serverSslCert parameter should be an exact copy and paste of the contents of the server-ca.pem file on your remote database server. This allows the certificate to be verified so as to prevent MITM attacks.

PostgreSQL Databases

This section describes how to configure the PostgreSQL Smart Connector.

You are able to connect to both on-prem and cloud-hosted PostgreSQL databases.

Typical Secure Connection

The following assumes the server has an SSL certificate signed by a global certificate authority:

jdbc:postgresql://somehost.mydomain.com/dbname?sslmode=verify-full

The above example will validate the server's identity to prevent MITM attacks. This is the most secure option and is strongly recommended for production environments.

AWS Secure Connection

The following is for connecting to PostgreSQL on AWS RDS. In the case of AWS you must reference the AWS RDS CA bundle.

jdbc:postgresql://somehost.mydomain.com/dbname?sslmode=verify-full&sslrootcert=/etc/opt/aws_rds/rds-combined-ca-bundle.pem

Unsecured Connection

The following example is NOT secure: although it uses SSL, sslmode=require does not verify the server's certificate and therefore cannot prevent a MITM attack. We recommend that you NEVER use this type of connection for any production environment.

jdbc:postgresql://somehost.mydomain.com/dbname?sslmode=require

Other Options

PostgreSQL supports multiple schemas within a single database. Although AppSynergy can work with multi-schema databases, it is usually best to specify a default schema search path either as part of the database's native definition or in the connection URL. This eliminates the need to fully qualify names (e.g. schema_name.table_name) and makes the database more compatible with third party tools.

To specify a default schema at connection time simply add a parameter like &currentSchema=myschema to any of the above connection strings.
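One small gotcha when appending currentSchema (or any other parameter): use ? if the URL has no parameters yet, and & otherwise. A minimal Python helper sketch (not part of the product):

```python
def with_param(url: str, name: str, value: str) -> str:
    """Append a query parameter to a JDBC URL, choosing '?' for the
    first parameter and '&' for subsequent ones."""
    sep = "&" if "?" in url else "?"
    return f"{url}{sep}{name}={value}"


# First parameter on a bare URL uses '?':
bare = with_param("jdbc:postgresql://somehost.mydomain.com/dbname",
                  "currentSchema", "myschema")
# Additional parameter on a URL that already has sslmode uses '&':
chained = with_param("jdbc:postgresql://somehost.mydomain.com/dbname?sslmode=verify-full",
                     "currentSchema", "myschema")
```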

Alternatively, to change the database's default search path to include a specific schema, issue the following command in PostgreSQL:

ALTER DATABASE mydb SET search_path=myschema;

NetSuite

This section describes how to configure the NetSuite Smart Connector.

The AccountId must be set to the Id of a company account that can be used by the specified User, and the user account specified must be granted Web Services access in NetSuite.

A basic connection string is as follows:

jdbc:netsuite:AccountId=XABC123456;User=user;Password=password;RoleId=3;

The RoleId property can be optionally specified to log the user in with limited permissions (if not specified the user's default role is used). There are a large number of optional parameters that may be important in your particular configuration (including caching to improve performance); contact support for assistance with your configuration.

Universal Connectors for JSON, XML and ODATA APIs

Detailed configuration instructions for each of our Universal Smart Connectors are available separately. The information below is intended to provide an overview of the process.

Step 1: Generate Schema Files

Create a data source specifically for generating schema configuration files for a particular API. You will use a URL like the following to have the system auto-discover the basic structure of an API end point:

jdbc:json: Location=/var/opt/parasql/account/db12345/schema/API_NAME; DataModel=RELATIONAL; GenerateSchemaFiles=OnStart; URI="https://api.fbifreight.com/app/api/5/upload_documents/?key=somekey";

The Location parameter determines where the generated schema files are stored. The URI parameter specifies the API endpoint to auto-discover.

Step 2: Edit Your Generated Schema Files

Request the generated schema files from support. The set will include a file containing the schema definition of the root API endpoint, plus additional files that define any nested structures. Nested structures are treated as relational entities, and each schema configuration file maps to a federated table within your database. Rename the configuration files as needed, verify datatypes, check column lengths, etc.

If you need to support INSERT, UPDATE or DELETE commands against the API, you will need to complete the sections of the schema configuration files that specify how those commands map (e.g. to POST or PUT), URL rewrite parameters, etc. You can skip this step if you do not intend to write back to the API.

Step 3: Deploy Your Schema Files

Once you are done editing your schema configuration files you can then specify a connection string to deploy them. A connection string to use your custom schema will look something like the following:

jdbc:json: Location=/var/opt/parasql/account/db12345/schema/API_NAME; Logfile=/var/opt/parasql/account/db12345/log/mylog.log; Verbosity=4; CustomUrlParams="key=xyz";