postgresql-42.7.6-jdbc-src/LICENSE

Copyright (c) 1997, PostgreSQL Global Development Group
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
postgresql-42.7.6-jdbc-src/README.md

# PostgreSQL JDBC Driver
PostgreSQL JDBC Driver (PgJDBC for short) allows Java programs to connect to a PostgreSQL database using standard, database-independent Java code. It is an open source JDBC driver written in pure Java (Type 4) that communicates over the PostgreSQL native network protocol.
### Status
[Build Status](https://github.com/pgjdbc/pgjdbc/actions/workflows/main.yml)
[AppVeyor Build Status](https://ci.appveyor.com/project/davecramer/pgjdbc/branch/master)
[Coverage Status](http://codecov.io/github/pgjdbc/pgjdbc?branch=master)
[License: BSD-2-Clause](https://opensource.org/licenses/BSD-2-Clause)
[Join the chat at Gitter](https://gitter.im/pgjdbc/pgjdbc?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
[Maven Central](https://maven-badges.herokuapp.com/maven-central/org.postgresql/postgresql)
[Javadoc](http://javadoc.io/doc/org.postgresql/postgresql)
## Supported PostgreSQL and Java versions
The current version of the driver should be compatible with **PostgreSQL 8.4 and higher** using version 3.0 of the protocol, and **Java 8** (JDBC 4.2) or above. Unless you have unusual requirements (running old applications or JVMs), this is the driver you should be using.
PgJDBC regression tests are run against all PostgreSQL versions since 9.1, including a build of PostgreSQL from the git master branch. There are other derived forks of PostgreSQL, but they have not been certified to run with PgJDBC. If you find a bug or regression on supported versions, please file an [Issue](https://github.com/pgjdbc/pgjdbc/issues).
> **Note:** PgJDBC versions since 42.8.0 are not guaranteed to work with PostgreSQL older than 9.1.
## Get the Driver
Most people do not need to compile PgJDBC. You can download the precompiled driver (jar) from the [PostgreSQL JDBC site](https://jdbc.postgresql.org/download/) or pull it in with your chosen dependency management tool:
### Maven Central
You can search on The Central Repository with GroupId and ArtifactId [org.postgresql:postgresql][mvn-search].
[Maven Central](https://maven-badges.herokuapp.com/maven-central/org.postgresql/postgresql)
```xml
<dependency>
  <groupId>org.postgresql</groupId>
  <artifactId>postgresql</artifactId>
  <version>LATEST</version>
</dependency>
```
[mvn-search]: https://search.maven.org/artifact/org.postgresql/postgresql "Search on Maven Central"
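Once the artifact is on the classpath, the driver registers itself with `DriverManager` automatically via the JDBC `ServiceLoader` mechanism. As a minimal sketch, the optional check below merely confirms the jar is visible to your application; it is not required to connect:
```java
public class DriverPresenceCheck {
  public static void main(String[] args) throws ClassNotFoundException {
    // Not required with JDBC 4+ auto-registration; this merely confirms the
    // PgJDBC jar is on the classpath by loading the driver class.
    Class.forName("org.postgresql.Driver");
    System.out.println("PostgreSQL JDBC driver is on the classpath");
  }
}
```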
#### Development snapshots
Snapshot builds (builds from the `master` branch) are also deployed to the OSS Sonatype Snapshot Repository, so you can test the current development version (for example, to verify a bug fix) by enabling the repository and using the latest [SNAPSHOT](https://oss.sonatype.org/content/repositories/snapshots/org/postgresql/postgresql/) version.
Snapshot binary RPMs are also available in [Fedora's Copr repository](https://copr.fedorainfracloud.org/coprs/g/pgjdbc/pgjdbc-travis/).
----------------------------------------------------
## Documentation
For more information you can read [the PgJDBC driver documentation](https://jdbc.postgresql.org/documentation/); for general JDBC documentation, please refer to [The Java™ Tutorials](http://docs.oracle.com/javase/tutorial/jdbc/).
### Driver and DataSource class
| Implements | Class |
| ----------------------------------- | ---------------------------------------------- |
| java.sql.Driver | **org.postgresql.Driver** |
| javax.sql.DataSource | org.postgresql.ds.PGSimpleDataSource |
| javax.sql.ConnectionPoolDataSource | org.postgresql.ds.PGConnectionPoolDataSource |
| javax.sql.XADataSource | org.postgresql.xa.PGXADataSource |
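As a minimal sketch (the database name and credentials are placeholders, and host/port are left at their `localhost:5432` defaults), a standalone `javax.sql.DataSource` can be configured programmatically with `org.postgresql.ds.PGSimpleDataSource`:
```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

import org.postgresql.ds.PGSimpleDataSource;

public class DataSourceExample {
  public static void main(String[] args) throws SQLException {
    PGSimpleDataSource ds = new PGSimpleDataSource();
    // Placeholder connection details; host and port default to localhost:5432.
    ds.setDatabaseName("test");
    ds.setUser("test");
    ds.setPassword("test");

    try (Connection con = ds.getConnection();
         Statement st = con.createStatement();
         ResultSet rs = st.executeQuery("SELECT version()")) {
      while (rs.next()) {
        System.out.println(rs.getString(1));
      }
    }
  }
}
```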
### Building the Connection URL
The driver recognises JDBC URLs of the form:
```
jdbc:postgresql:database
jdbc:postgresql:
jdbc:postgresql://host/database
jdbc:postgresql://host/
jdbc:postgresql://host:port/database
jdbc:postgresql://host:port/
jdbc:postgresql://?service=myservice
```
The general format for a JDBC URL for connecting to a PostgreSQL server is as follows, with items in square brackets ([ ]) being optional:
```
jdbc:postgresql:[//host[:port]/][database][?property1=value1[&property2=value2]...]
```
where:
* **jdbc:postgresql:** (Required) is known as the sub-protocol and is constant.
* **host** (Optional) is the server address to connect to. This can be a DNS name or IP address, or *localhost* / *127.0.0.1* for the local computer. To specify an IPv6 address you must enclose the host parameter in square brackets (jdbc:postgresql://[::1]:5740/accounting). Defaults to `localhost`.
* **port** (Optional) is the port number the server is listening on. Defaults to `5432`.
* **database** (Optional) is the database name. Defaults to the same name as the *user name* used in the connection.
* **propertyX** (Optional) is one or more optional connection properties. For more information see *Connection properties*; a connection example using this URL format follows this list.
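As a minimal sketch of this URL format (the host, port, database, credentials, and the `connectTimeout` property are placeholder values), a connection can be opened through `java.sql.DriverManager`:
```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class UrlConnectExample {
  public static void main(String[] args) throws SQLException {
    // Placeholder URL following jdbc:postgresql:[//host[:port]/][database][?property=value...]
    String url = "jdbc:postgresql://localhost:5432/test?connectTimeout=10";

    try (Connection con = DriverManager.getConnection(url, "test", "test");
         PreparedStatement ps = con.prepareStatement("SELECT current_database()");
         ResultSet rs = ps.executeQuery()) {
      if (rs.next()) {
        System.out.println("Connected to " + rs.getString(1));
      }
    }
  }
}
```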
### Logging
PgJDBC uses java.util.logging for logging.
To configure log levels and the log output destination (e.g. a file or the console), adjust your java.util.logging configuration for the org.postgresql logger.
Note that the most detailed log level, "`FINEST`", may include sensitive information such as connection details, query SQL, or command parameters.
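As a minimal sketch (the chosen level and handler are illustrative only), the `org.postgresql` logger can also be configured programmatically:
```java
import java.util.logging.ConsoleHandler;
import java.util.logging.Level;
import java.util.logging.Logger;

public class DriverLoggingSetup {
  // Keep a strong reference so the configuration is not lost to garbage collection.
  private static final Logger PGJDBC_LOGGER = Logger.getLogger("org.postgresql");

  public static void enableDriverLogging() {
    // FINE gives useful detail; FINEST logs even more and may expose sensitive data.
    PGJDBC_LOGGER.setLevel(Level.FINE);

    ConsoleHandler handler = new ConsoleHandler();
    handler.setLevel(Level.FINE);
    PGJDBC_LOGGER.addHandler(handler);
  }
}
```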
#### Connection Properties
In addition to the standard connection parameters, the driver supports a number of additional properties which can be used to specify driver behaviour specific to PostgreSQL™. These properties may be specified in either the connection URL or an additional `Properties` object parameter to `DriverManager.getConnection`; an example using a `Properties` object follows the table below.
| Property | Type | Default | Description |
|-------------------------------| -- |:-----------------------:|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| user | String | null | The database user on whose behalf the connection is being made. |
| password | String | null | The database user's password. |
| options | String | null | Specify 'options' connection initialization parameter. |
| service | String | null | Specify 'service' name described in pg_service.conf file. References: [The Connection Service File](https://www.postgresql.org/docs/current/libpq-pgservice.html) and [The Password File](https://www.postgresql.org/docs/current/libpq-pgpass.html). 'service' file can provide all properties including 'hostname=', 'port=' and 'dbname='. |
| ssl | Boolean | false | Control use of SSL (true value causes SSL to be required) |
| sslfactory | String | org.postgresql.ssl.LibPQFactory | Provide an SSLSocketFactory class when using SSL. |
| sslfactoryarg (deprecated) | String | null | Argument forwarded to constructor of SSLSocketFactory class. |
| sslmode | String | prefer | Controls the preference for opening using an SSL encrypted connection. |
| sslcert | String | null | The location of the client's SSL certificate |
| sslkey | String | null | The location of the client's PKCS#8 or PKCS#12 SSL key; for PKCS#12 keys the extension must be .p12 or .pfx and the alias must be `user` |
| sslrootcert | String | null | The location of the root certificate for authenticating the server. |
| sslhostnameverifier | String | null | The name of a class (for use in [Class.forName(String)](https://docs.oracle.com/javase/6/docs/api/java/lang/Class.html#forName%28java.lang.String%29)) that implements javax.net.ssl.HostnameVerifier and can verify the server hostname. |
| sslpasswordcallback | String | null | The name of a class (for use in [Class.forName(String)](https://docs.oracle.com/javase/6/docs/api/java/lang/Class.html#forName%28java.lang.String%29)) that implements javax.security.auth.callback.CallbackHandler and can handle PasswordCallback for the ssl password. |
| sslpassword | String | null | The password for the client's ssl key (ignored if sslpasswordcallback is set) |
| sslnegotiation | String | postgres | Determines if ALPN ssl negotiation will be used or not. Set to `direct` to choose ALPN. |
| sendBufferSize | Integer | -1 | Socket write buffer size |
| maxSendBufferSize | Integer | 65536 | Maximum amount of bytes buffered before sending to the backend. pgjdbc uses `least(maxSendBufferSize, greatest(8192, SO_SNDBUF))` to determine the buffer size. |
| receiveBufferSize | Integer | -1 | Socket read buffer size |
| logServerErrorDetail | Boolean | true | Allows server error detail (such as sql statements and values) to be logged and passed on in exceptions. Setting this to false will mask these errors so they are not exposed to users or logs. |
| allowEncodingChanges | Boolean | false | Allow for changes in client_encoding |
| logUnclosedConnections | Boolean | false | When connections that are not explicitly closed are garbage collected, log the stacktrace from the opening of the connection to trace the leak source |
| binaryTransfer | Boolean | true | Enable binary transfer for supported built-in types if possible. Setting this to false disables any binary transfer unless it's individually activated for each type with `binaryTransferEnable`. Whether it is possible to use binary transfer at all depends on server side prepared statements (see `prepareThreshold` ). |
| binaryTransferEnable | String | "" | Comma separated list of types to enable binary transfer. Either OID numbers or names. |
| binaryTransferDisable | String | "" | Comma separated list of types to disable binary transfer. Either OID numbers or names. Overrides values in the driver default set and values set with binaryTransferEnable. |
| prepareThreshold | Integer | 5 | Determine the number of `PreparedStatement` executions required before switching over to use server side prepared statements. The default is five, meaning start using server side prepared statements on the fifth execution of the same `PreparedStatement` object. A value of -1 activates server side prepared statements and forces binary transfer for enabled types (see `binaryTransfer` ). |
| preparedStatementCacheQueries | Integer | 256 | Specifies the maximum number of entries in per-connection cache of prepared statements. A value of 0 disables the cache. |
| preparedStatementCacheSizeMiB | Integer | 5 | Specifies the maximum size (in megabytes) of a per-connection prepared statement cache. A value of 0 disables the cache. |
| defaultRowFetchSize | Integer | 0 | Positive number of rows that should be fetched from the database by each fetch iteration when more rows are needed for a ResultSet |
| loginTimeout | Integer | 0 | Specify how long to wait for establishment of a database connection, in seconds (maximum 2147484). |
| connectTimeout | Integer | 10 | The timeout value used for socket connect operations, in seconds (maximum 2147484). |
| socketTimeout | Integer | 0 | The timeout value used for socket read operations, in seconds (maximum 2147484). |
| cancelSignalTimeout | Integer | 10 | The timeout, in seconds, used for sending the cancel command. |
| sslResponseTimeout | Integer | 5000 | Socket timeout in milliseconds waiting for a response from a request for SSL upgrade from the server. |
| tcpKeepAlive | Boolean | false | Enable or disable TCP keep-alive. |
| tcpNoDelay | Boolean | true | Enable or disable TCP no delay. |
| ApplicationName | String | PostgreSQL JDBC Driver | The application name (requires server version >= 9.0). If assumeMinServerVersion is set to >= 9.0 this will be sent in the startup packets, otherwise after the connection is made |
| readOnly | Boolean | false | Puts this connection in read-only mode |
| readOnlyMode | String | transaction | Specifies the behavior when a connection is set to be read only, possible values: ignore, transaction, always |
| disableColumnSanitiser | Boolean | false | Enable optimization that disables column name sanitiser |
| assumeMinServerVersion | String | null | Assume the server is at least that version |
| currentSchema | String | null | Specify the schema (or several schemas separated by commas) to be set in the search-path |
| targetServerType | String | any | Specifies what kind of server to connect to, possible values: any, master, slave (deprecated), secondary, preferSlave (deprecated), preferSecondary, preferPrimary |
| hostRecheckSeconds | Integer | 10 | Specifies period (seconds) after which the host status is checked again in case it has changed |
| loadBalanceHosts | Boolean | false | If disabled, hosts are connected to in the given order. If enabled, hosts are chosen randomly from the set of suitable candidates |
| socketFactory | String | null | Specify a socket factory for socket creation |
| socketFactoryArg (deprecated) | String | null | Argument forwarded to constructor of SocketFactory class. |
| autosave | String | never | Specifies what the driver should do if a query fails, possible values: always, never, conservative |
| cleanupSavepoints | Boolean | false | In Autosave mode the driver sets a SAVEPOINT for every query. It is possible to exhaust the server shared buffers. Setting this to true will release each SAVEPOINT at the cost of an additional round trip. |
| preferQueryMode | String | extended | Specifies which mode is used to execute queries to database, possible values: extended, extendedForPrepared, extendedCacheEverything, simple |
| reWriteBatchedInserts | Boolean | false | Enable optimization to rewrite and collapse compatible INSERT statements that are batched. |
| escapeSyntaxCallMode | String | select | Specifies how JDBC escape call syntax is transformed into underlying SQL (CALL/SELECT), for invoking procedures or functions (requires server version >= 11), possible values: select, callIfNoReturn, call |
| maxResultBuffer | String | null | Specifies the size of the result buffer in bytes, which can't be exceeded while reading a result set. Can be specified as an absolute size (e.g. "100", "200M", "2G") or as a percentage of max heap memory (e.g. "10p", "20pct", "50percent") |
| gssLib | String | auto | Permissible values are auto (default, see below), sspi (force SSPI) or gssapi (force GSSAPI-JSSE). |
| gssResponseTimeout | Integer | 5000 | Socket timeout in milliseconds waiting for a response from a request for GSS encrypted connection from the server. |
| gssEncMode | String | allow | Controls the preference for using GSSAPI encryption for the connection, values are disable, allow, prefer, and require |
| useSpnego | String | false | Use SPNEGO in SSPI authentication requests |
| adaptiveFetch | Boolean | false | Specifies whether the number of rows fetched in ResultSet by each fetch iteration should be dynamic. The number of rows is calculated by dividing the maxResultBuffer size by the largest row size observed so far. Requires declaring maxResultBuffer and defaultRowFetchSize for the first iteration. |
| adaptiveFetchMinimum | Integer | 0 | Specifies the minimum number of rows that can be calculated by adaptiveFetch. The number of rows used by adaptiveFetch cannot go below this value. |
| adaptiveFetchMaximum | Integer | -1 | Specifies the maximum number of rows that can be calculated by adaptiveFetch. The number of rows used by adaptiveFetch cannot go above this value. Any negative value is treated by adaptiveFetch as an unlimited number of rows. |
| localSocketAddress | String | null | Hostname or IP address given to explicitly configure the interface that the driver will bind the client side of the TCP/IP connection to when connecting. |
| quoteReturningIdentifiers | Boolean | true | By default the driver double quotes returning identifiers. Some ORMs already quote them; this switch allows that quoting to be turned off |
| authenticationPluginClassName | String | null | Fully qualified class name of the class implementing the AuthenticationPlugin interface. If this is null, the password value in the connection properties will be used. |
| unknownLength | Integer | Integer.MAX_VALUE | Specifies the length to return for types of unknown length |
| stringtype | String | null | Specify the type to use when binding `PreparedStatement` parameters set via `setString()` |
| channelBinding | String | prefer | This option controls the client's use of channel binding. `require` means that the connection must employ channel binding, `prefer` means that the client will choose channel binding if available, and `disable` prevents the use of channel binding. |
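As the minimal sketch referenced above (URL, credentials, and the particular properties are placeholders picked from the table for illustration), properties can be supplied through a `java.util.Properties` object instead of being appended to the URL:
```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Properties;

public class PropertiesConnectExample {
  public static void main(String[] args) throws SQLException {
    Properties props = new Properties();
    // Placeholder credentials and illustrative property values from the table above.
    props.setProperty("user", "test");
    props.setProperty("password", "test");
    props.setProperty("ApplicationName", "ReportingBatch");
    props.setProperty("prepareThreshold", "5");
    props.setProperty("tcpKeepAlive", "true");

    String url = "jdbc:postgresql://localhost:5432/test";
    try (Connection con = DriverManager.getConnection(url, props)) {
      System.out.println("Autocommit: " + con.getAutoCommit());
    }
  }
}
```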
#### System Properties
| Property | Type | Default | Description |
|-------------------------------| -- |:-----------------------:|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| pgjdbc.config.cleanup.thread.ttl | long | 30000 | The driver has an internal cleanup thread which monitors and cleans up unclosed connections. This property sets the duration (in milliseconds) the cleanup thread will keep running if there is nothing to clean up. |
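As a minimal sketch, this system property can be passed on the JVM command line (`-Dpgjdbc.config.cleanup.thread.ttl=60000`) or set programmatically before the driver is first used; the 60000 ms value is only an example:
```java
public class CleanupThreadTtlExample {
  public static void main(String[] args) {
    // Example value only: keep the cleanup thread alive for 60 seconds when idle.
    // Set this before the driver is first used so the value is picked up.
    System.setProperty("pgjdbc.config.cleanup.thread.ttl", "60000");
  }
}
```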
## Contributing
For information on how to contribute to the project, see the [Contributing Guidelines](CONTRIBUTING.md).
----------------------------------------------------
### Sponsors
* [PostgreSQL International](http://www.postgresintl.com)
postgresql-42.7.6-jdbc-src/build.properties

# Default build parameters. These may be overridden by local configuration
# settings in build.local.properties.
#
test.url.PGHOST=localhost
test.url.PGPORT=5432
secondaryServer1=localhost
secondaryPort1=5433
secondaryServer2=localhost
secondaryPort2=5434
test.url.PGDBNAME=test
user=test
password=test
privilegedUser=postgres
privilegedPassword=
sspiusername=testsspi
preparethreshold=5
sslpassword=sslpwd
postgresql-42.7.6-jdbc-src/ssltest.properties

certdir=certdir
#enable_ssl_tests=true
postgresql-42.7.6-jdbc-src/pom.xml

4.0.0
org.postgresql
postgresql
42.7.6
jar
PostgreSQL JDBC Driver - JDBC 4.2
Java JDBC 4.2 (JRE 8+) driver for PostgreSQL database
https://github.com/pgjdbc/pgjdbc
PostgreSQL Global Development Group
https://jdbc.postgresql.org/
BSD-2-Clause
https://jdbc.postgresql.org/about/license.html
1.8
8
UTF-8
${encoding}
${encoding}
${encoding}
3.12.1
2.22.2
3.3.0
3.0.1
true
com.ongres.scram
scram-client
3.1
se.jiderhamn
classloader-leak-test-framework
1.1.2
test
junit
junit
4.13.2
test
org.junit.jupiter
junit-jupiter-api
5.12.2
test
uk.org.webcompere
system-stubs-jupiter
2.1.7
test
org.junit.jupiter
junit-jupiter-params
5.12.2
test
org.junit.jupiter
junit-jupiter-engine
5.12.2
test
org.junit.vintage
junit-vintage-engine
5.12.2
test
org.apache.maven.plugins
maven-compiler-plugin
${maven-compiler-plugin.version}
maven-surefire-plugin
${maven-surefire-plugin.version}
-Xmx1536m
.
junit.jupiter.extensions.autodetection.enabled=true
junit.jupiter.execution.timeout.default=5 m
org.apache.maven.plugins
maven-jar-plugin
${maven-jar-plugin.version}
src/main/resources/META-INF/MANIFEST.MF
jdk8
1.8
org.apache.maven.plugins
maven-compiler-plugin
${javac.target}
${javac.target}
org/postgresql/test/jdbc2/DriverTest.java
org/postgresql/util/OSUtilTest.java
org/postgresql/util/StubEnvironmentAndProperties.java
org/postgresql/jdbcurlresolver/PgPassParserTest.java
org/postgresql/jdbcurlresolver/PgServiceConfParserTest.java
jdkge11
[11,)
org.apache.maven.plugins
maven-compiler-plugin
${java.target.release}
javadoc
org.apache.maven.plugins
maven-javadoc-plugin
${maven-javadoc-plugin.version}
8
false
attach-javadocs
jar
shade-dependencies
!skipShadeDependencies
org.apache.maven.plugins
maven-shade-plugin
3.6.0
true
com.github.waffle:waffle-jna
org.slf4j:jcl-over-slf4j
com.ongres.scram:*
META-INF/LICENSE
META-INF/MANIFEST.MF
META-INF/maven/**
META-INF/versions/**
com.ongres.stringprep:*
META-INF/LICENSE
META-INF/MANIFEST.MF
META-INF/maven/**
META-INF/versions/**
META-INF/services/**
*:*
com/sun/jna/**
LICENSE
META-INF/maven/**
package
shade
com.ongres
org.postgresql.shaded.com.ongres
postgresql-42.7.6-jdbc-src/src/main/resources/META-INF/MANIFEST.MF

Manifest-Version: 1.0
Bundle-License: BSD-2-Clause
Implementation-Title: PostgreSQL JDBC Driver
Implementation-Version: 42.7.6
Specification-Vendor: Oracle Corporation
Specification-Version: 4.2
Specification-Title: JDBC
Implementation-Vendor: PostgreSQL Global Development Group
Implementation-Vendor-Id: org.postgresql
Main-Class: org.postgresql.util.PGJDBCMain
Automatic-Module-Name: org.postgresql.jdbc
postgresql-42.7.6-jdbc-src/src/main/resources/META-INF/LICENSE

Copyright (c) 1997, PostgreSQL Global Development Group
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
Additional License files can be found in the 'licenses' folder located in the same directory as the LICENSE file (i.e. this file)
- Software produced outside the ASF which is available under other licenses (not Apache-2.0)
BSD-2-Clause
* com.ongres.scram:scram-client:3.1
* com.ongres.scram:scram-common:3.1
* com.ongres.stringprep:saslprep:2.2
* com.ongres.stringprep:stringprep:2.2
postgresql-42.7.6-jdbc-src/src/main/resources/META-INF/licenses/com.ongres.scram/scram-client-3.1/META-INF/LICENSE

Copyright (c) 2017 OnGres, Inc.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation and/or
other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
postgresql-42.7.6-jdbc-src/src/main/resources/META-INF/licenses/com.ongres.scram/scram-common-3.1/META-INF/LICENSE

Copyright (c) 2017 OnGres, Inc.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation and/or
other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
postgresql-42.7.6-jdbc-src/src/main/resources/META-INF/licenses/com.ongres.stringprep/saslprep-2.2/META-INF/LICENSE

Copyright (c) 2019 OnGres, Inc.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation and/or
other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
postgresql-42.7.6-jdbc-src/src/main/resources/META-INF/licenses/com.ongres.stringprep/stringprep-2.2/META-INF/LICENSE

Copyright (c) 2019 OnGres, Inc.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation and/or
other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/util/DriverInfo.java

/*
* Copyright (c) 2017, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.util;
/**
* Utility class with constants of Driver information.
*/
public final class DriverInfo {
private DriverInfo() {
}
// Driver name
public static final String DRIVER_NAME = "PostgreSQL JDBC Driver";
public static final String DRIVER_SHORT_NAME = "PgJDBC";
public static final String DRIVER_VERSION = "42.7.6";
public static final String DRIVER_FULL_NAME = DRIVER_NAME + " " + DRIVER_VERSION;
// Driver version
public static final int MAJOR_VERSION = 42;
public static final int MINOR_VERSION = 7;
public static final int PATCH_VERSION = 6;
// JDBC specification
public static final String JDBC_VERSION = "4.2";
public static final int JDBC_MAJOR_VERSION = JDBC_VERSION.charAt(0) - '0';
public static final int JDBC_MINOR_VERSION = JDBC_VERSION.charAt(2) - '0';
}
postgresql-42.7.6-jdbc-src/src/main/feature/feature.xml

transaction-api
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/Driver.java

/*
* Copyright (c) 2003, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql;
import static org.postgresql.util.internal.Nullness.castNonNull;
import org.postgresql.jdbc.PgConnection;
import org.postgresql.jdbc.ResourceLock;
import org.postgresql.jdbcurlresolver.PgPassParser;
import org.postgresql.jdbcurlresolver.PgServiceConfParser;
import org.postgresql.util.DriverInfo;
import org.postgresql.util.GT;
import org.postgresql.util.HostSpec;
import org.postgresql.util.PGPropertyUtil;
import org.postgresql.util.PSQLException;
import org.postgresql.util.PSQLState;
import org.postgresql.util.SharedTimer;
import org.postgresql.util.URLCoder;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.io.IOException;
import java.io.InputStream;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.net.URL;
import java.security.PrivilegedActionException;
import java.security.PrivilegedExceptionAction;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.DriverPropertyInfo;
import java.sql.SQLException;
import java.sql.SQLFeatureNotSupportedException;
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.Properties;
import java.util.Set;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.logging.Level;
import java.util.logging.Logger;
/**
* The Java SQL framework allows for multiple database drivers. Each driver should supply a class
* that implements the Driver interface
*
*
* The DriverManager will try to load as many drivers as it can find and then for any given
* connection request, it will ask each driver in turn to try to connect to the target URL.
*
* It is strongly recommended that each Driver class should be small and standalone so that the
* Driver class can be loaded and queried without bringing in vast quantities of supporting code.
*
* When a Driver class is loaded, it should create an instance of itself and register it with the
* DriverManager. This means that a user can load and register a driver by doing
* Class.forName("foo.bah.Driver")
*
* @see org.postgresql.PGConnection
* @see java.sql.Driver
*/
public class Driver implements java.sql.Driver {
private static /* @Nullable */ Driver registeredDriver;
private static final Logger PARENT_LOGGER = Logger.getLogger("org.postgresql");
private static final Logger LOGGER = Logger.getLogger("org.postgresql.Driver");
private static final SharedTimer SHARED_TIMER = new SharedTimer();
static {
try {
// moved the registerDriver from the constructor to here
// because some clients call the driver themselves (I know, as
// my early jdbc work did - and that was based on other examples).
// Placing it here, means that the driver is registered once only.
register();
} catch (SQLException e) {
throw new ExceptionInInitializerError(e);
}
}
// Helper to retrieve default properties from classloader resource
// properties files.
private /* @Nullable */ Properties defaultProperties;
private final ResourceLock lock = new ResourceLock();
private Properties getDefaultProperties() throws IOException {
try (ResourceLock ignore = lock.obtain()) {
if (defaultProperties != null) {
return defaultProperties;
}
// Make sure we load properties with the maximum possible privileges.
try {
defaultProperties =
doPrivileged(new PrivilegedExceptionAction<Properties>() {
@Override
public Properties run() throws IOException {
return loadDefaultProperties();
}
});
} catch (PrivilegedActionException e) {
Exception ex = e.getException();
if (ex instanceof IOException) {
throw (IOException) ex;
}
throw new RuntimeException(e);
} catch (Throwable e) {
if (e instanceof IOException) {
throw (IOException) e;
}
if (e instanceof RuntimeException) {
throw (RuntimeException) e;
}
if (e instanceof Error) {
throw (Error) e;
}
throw new RuntimeException(e);
}
return defaultProperties;
}
}
private static <T> T doPrivileged(PrivilegedExceptionAction<T> action) throws Throwable {
try {
Class<?> accessControllerClass = Class.forName("java.security.AccessController");
Method doPrivileged = accessControllerClass.getMethod("doPrivileged",
PrivilegedExceptionAction.class);
//noinspection unchecked
return (T) doPrivileged.invoke(null, action);
} catch (ClassNotFoundException e) {
return action.run();
} catch (InvocationTargetException e) {
throw castNonNull(e.getCause());
}
}
private Properties loadDefaultProperties() throws IOException {
Properties merged = new Properties();
try {
PGProperty.USER.set(merged, System.getProperty("user.name"));
} catch (SecurityException se) {
// We're just trying to set a default, so if we can't
// it's not a big deal.
}
// If we are loaded by the bootstrap classloader, getClassLoader()
// may return null. In that case, try to fall back to the system
// classloader.
//
// We should not need to catch SecurityException here as we are
// accessing either our own classloader, or the system classloader
// when our classloader is null. The ClassLoader javadoc claims
// neither case can throw SecurityException.
ClassLoader cl = getClass().getClassLoader();
if (cl == null) {
LOGGER.log(Level.FINE, "Can't find our classloader for the Driver; "
+ "attempt to use the system class loader");
cl = ClassLoader.getSystemClassLoader();
}
if (cl == null) {
LOGGER.log(Level.WARNING, "Can't find a classloader for the Driver; not loading driver "
+ "configuration from org/postgresql/driverconfig.properties");
return merged; // Give up on finding defaults.
}
LOGGER.log(Level.FINE, "Loading driver configuration via classloader {0}", cl);
// When loading the driver config files we don't want settings found
// in later files in the classpath to override settings specified in
// earlier files. To do this we've got to read the returned
// Enumeration into temporary storage.
ArrayList<URL> urls = new ArrayList<>();
Enumeration<URL> urlEnum = cl.getResources("org/postgresql/driverconfig.properties");
while (urlEnum.hasMoreElements()) {
urls.add(urlEnum.nextElement());
}
for (int i = urls.size() - 1; i >= 0; i--) {
URL url = urls.get(i);
LOGGER.log(Level.FINE, "Loading driver configuration from: {0}", url);
InputStream is = url.openStream();
merged.load(is);
is.close();
}
return merged;
}
/**
* Try to make a database connection to the given URL. The driver should return "null" if it
* realizes it is the wrong kind of driver to connect to the given URL. This will be common, as
* when the JDBC driverManager is asked to connect to a given URL, it passes the URL to each
* loaded driver in turn.
*
* The driver should raise an SQLException if it is the right driver to connect to the given URL,
* but has trouble connecting to the database.
*
* The java.util.Properties argument can be used to pass arbitrary string tag/value pairs as
* connection arguments.
*
*
* user - (required) The user to connect as
* password - (optional) The password for the user
* ssl -(optional) Use SSL when connecting to the server
* readOnly - (optional) Set connection to read-only by default
* charSet - (optional) The character set to be used for converting to/from
* the database to unicode. If multibyte is enabled on the server then the character set of the
* database is used as the default, otherwise the jvm character encoding is used as the default.
* This value is only used when connecting to a 7.2 or older server.
* loglevel - (optional) Enable logging of messages from the driver. The value is an integer
* from 0 to 2 where: OFF = 0, INFO =1, DEBUG = 2 The output is sent to
* DriverManager.getPrintWriter() if set, otherwise it is sent to System.out.
* compatible - (optional) This is used to toggle between different functionality
* as it changes across different releases of the jdbc driver code. The values here are versions
* of the jdbc client and not server versions. For example in 7.1 get/setBytes worked on
* LargeObject values, in 7.2 these methods were changed to work on bytea values. This change in
* functionality could be disabled by setting the compatible level to be "7.1", in which case the
* driver will revert to the 7.1 functionality.
*
*
* Normally, at least "user" and "password" properties should be included in the properties. For a
* list of supported character encodings, see
* http://java.sun.com/products/jdk/1.2/docs/guide/internat/encoding.doc.html Note that you will
* probably want to have set up the Postgres database itself to use the same encoding, with the
* {@code -E } argument to createdb.
*
* Our protocol takes the forms:
*
*
* jdbc:postgresql://host:port/database?param1=val1&...
*
*
* @param url the URL of the database to connect to
* @param info a list of arbitrary tag/value pairs as connection arguments
* @return a connection to the URL or null if it isn't us
* @exception SQLException if a database access error occurs or the url is
* {@code null}
* @see java.sql.Driver#connect
*/
@Override
public /* @Nullable */ Connection connect(String url, /* @Nullable */ Properties info) throws SQLException {
if (url == null) {
throw new SQLException("url is null");
}
// get defaults
Properties defaults;
if (!url.startsWith("jdbc:postgresql:")) {
return null;
}
try {
defaults = getDefaultProperties();
} catch (IOException ioe) {
throw new PSQLException(GT.tr("Error loading default settings from driverconfig.properties"),
PSQLState.UNEXPECTED_ERROR, ioe);
}
// override defaults with provided properties
Properties props = new Properties(defaults);
if (info != null) {
Set<String> e = info.stringPropertyNames();
for (String propName : e) {
String propValue = info.getProperty(propName);
if (propValue == null) {
throw new PSQLException(
GT.tr("Properties for the driver contains a non-string value for the key ")
+ propName,
PSQLState.UNEXPECTED_ERROR);
}
props.setProperty(propName, propValue);
}
}
// parse URL and add more properties
if ((props = parseURL(url, props)) == null) {
throw new PSQLException(
GT.tr("Unable to parse URL {0}", url),
PSQLState.UNEXPECTED_ERROR);
}
try {
LOGGER.log(Level.FINE, "Connecting with URL: {0}", url);
// Enforce login timeout, if specified, by running the connection
// attempt in a separate thread. If we hit the timeout without the
// connection completing, we abandon the connection attempt in
// the calling thread, but the separate thread will keep trying.
// Eventually, the separate thread will either fail or complete
// the connection; at that point we clean up the connection if
// we managed to establish one after all. See ConnectThread for
// more details.
long timeout = timeout(props);
if (timeout <= 0) {
return makeConnection(url, props);
}
ConnectThread ct = new ConnectThread(url, props);
Thread thread = new Thread(ct, "PostgreSQL JDBC driver connection thread");
thread.setDaemon(true); // Don't prevent the VM from shutting down
thread.start();
return ct.getResult(timeout);
} catch (PSQLException ex1) {
LOGGER.log(Level.FINE, "Connection error: ", ex1);
// re-throw the exception, otherwise it will be caught next, and a
// org.postgresql.unusual error will be returned instead.
throw ex1;
} catch (Exception ex2) {
if ("java.security.AccessControlException".equals(ex2.getClass().getName())) {
// java.security.AccessControlException has been deprecated for removal, so compare the class name
throw new PSQLException(
GT.tr(
"Your security policy has prevented the connection from being attempted. You probably need to grant the connect java.net.SocketPermission to the database server host and port that you wish to connect to."),
PSQLState.UNEXPECTED_ERROR, ex2);
}
LOGGER.log(Level.FINE, "Unexpected connection error: ", ex2);
throw new PSQLException(
GT.tr(
"Something unusual has occurred to cause the driver to fail. Please report this exception."),
PSQLState.UNEXPECTED_ERROR, ex2);
}
}
/**
* this is an empty method left here for graalvm
* we removed the ability to setup the logger from properties
* due to a security issue
* @param props Connection Properties
*/
@SuppressWarnings({"MethodCanBeStatic", "UnusedMethod"})
private void setupLoggerFromProperties(@SuppressWarnings("UnusedVariable") Properties props) {
}
/**
* Perform a connect in a separate thread; supports getting the results from the original thread
* while enforcing a login timeout.
*/
private static class ConnectThread implements Runnable {
private final ResourceLock lock = new ResourceLock();
private final Condition lockCondition = lock.newCondition();
ConnectThread(String url, Properties props) {
this.url = url;
this.props = props;
}
@Override
public void run() {
Connection conn;
Throwable error;
try {
conn = makeConnection(url, props);
error = null;
} catch (Throwable t) {
conn = null;
error = t;
}
try (ResourceLock ignore = lock.obtain()) {
if (abandoned) {
if (conn != null) {
try {
conn.close();
} catch (SQLException ignored) {
// TODO: should we rethrow it?
}
}
} else {
result = conn;
resultException = error;
lockCondition.signal();
}
}
}
/**
* Get the connection result from this (assumed running) thread. If the timeout is reached
* without a result being available, a SQLException is thrown.
*
* @param timeout timeout in milliseconds
* @return the new connection, if successful
* @throws SQLException if a connection error occurs or the timeout is reached
*/
public Connection getResult(long timeout) throws SQLException {
long expiry = TimeUnit.NANOSECONDS.toMillis(System.nanoTime()) + timeout;
try (ResourceLock ignore = lock.obtain()) {
while (true) {
if (result != null) {
return result;
}
Throwable resultException = this.resultException;
if (resultException != null) {
if (resultException instanceof SQLException) {
resultException.fillInStackTrace();
throw (SQLException) resultException;
} else {
throw new PSQLException(
GT.tr(
"Something unusual has occurred to cause the driver to fail. Please report this exception."),
PSQLState.UNEXPECTED_ERROR, resultException);
}
}
long delay = expiry - TimeUnit.NANOSECONDS.toMillis(System.nanoTime());
if (delay <= 0) {
abandoned = true;
throw new PSQLException(GT.tr("Connection attempt timed out."),
PSQLState.CONNECTION_UNABLE_TO_CONNECT);
}
try {
lockCondition.await(delay, TimeUnit.MILLISECONDS);
} catch (InterruptedException ie) {
// reset the interrupt flag
Thread.currentThread().interrupt();
abandoned = true;
// throw an unchecked exception which will hopefully not be ignored by the calling code
throw new RuntimeException(GT.tr("Interrupted while attempting to connect."));
}
}
}
}
private final String url;
private final Properties props;
private /* @Nullable */ Connection result;
private /* @Nullable */ Throwable resultException;
private boolean abandoned;
}
/**
* Create a connection from URL and properties. Always does the connection work in the current
* thread without enforcing a timeout, regardless of any timeout specified in the properties.
*
* @param url the original URL
* @param props the parsed/defaulted connection properties
* @return a new connection
* @throws SQLException if the connection could not be made
*/
private static Connection makeConnection(String url, Properties props) throws SQLException {
return new PgConnection(hostSpecs(props), props, url);
}
/**
* Returns true if the driver thinks it can open a connection to the given URL. Typically, drivers
* will return true if they understand the subprotocol specified in the URL and false if they
* don't. Our protocols start with jdbc:postgresql:
*
* @param url the URL of the driver
* @return true if this driver accepts the given URL
* @see java.sql.Driver#acceptsURL
*/
@Override
public boolean acceptsURL(String url) {
return parseURL(url, null) != null;
}
/**
* The getPropertyInfo method is intended to allow a generic GUI tool to discover what properties
* it should prompt a human for in order to get enough information to connect to a database.
*
* Note that depending on the values the human has supplied so far, additional values may become
* necessary, so it may be necessary to iterate through several calls to getPropertyInfo
*
* @param url the Url of the database to connect to
* @param info a proposed list of tag/value pairs that will be sent on connect open.
* @return An array of DriverPropertyInfo objects describing possible properties. This array may
* be an empty array if no properties are required
* @see java.sql.Driver#getPropertyInfo
*/
@Override
public DriverPropertyInfo[] getPropertyInfo(String url, Properties info) {
Properties copy = new Properties(info);
Properties parse = parseURL(url, copy);
if (parse != null) {
copy = parse;
}
PGProperty[] knownProperties = PGProperty.values();
DriverPropertyInfo[] props = new DriverPropertyInfo[knownProperties.length];
for (int i = 0; i < props.length; i++) {
props[i] = knownProperties[i].toDriverPropertyInfo(copy);
}
return props;
}
@Override
public int getMajorVersion() {
return DriverInfo.MAJOR_VERSION;
}
@Override
public int getMinorVersion() {
return DriverInfo.MINOR_VERSION;
}
/**
* Returns the server version series of this driver and the specific build number.
*
* @return JDBC driver version
* @deprecated use {@link #getMajorVersion()} and {@link #getMinorVersion()} instead
*/
@Deprecated
public static String getVersion() {
return DriverInfo.DRIVER_FULL_NAME;
}
/**
* Report whether the driver is a genuine JDBC compliant driver. A driver may only report "true"
* here if it passes the JDBC compliance tests, otherwise it is required to return false. JDBC
* compliance requires full support for the JDBC API and full support for SQL 92 Entry Level.
*
* For PostgreSQL, this is not yet possible, as we are not SQL92 compliant (yet).
*/
@Override
public boolean jdbcCompliant() {
return false;
}
/**
* Constructs a new DriverURL, splitting the specified URL into its component parts.
*
* @param url JDBC URL to parse
* @param defaults Default properties
* @return Properties with elements added from the url
*/
public static /* @Nullable */ Properties parseURL(String url, /* @Nullable */ Properties defaults) {
// priority 1 - URL values
Properties priority1Url = new Properties();
// priority 2 - Properties given as argument to DriverManager.getConnection()
// argument "defaults" EXCLUDING defaults
// priority 3 - Values retrieved by "service"
Properties priority3Service = new Properties();
// priority 4 - Properties loaded by Driver.loadDefaultProperties() (user, org/postgresql/driverconfig.properties)
// argument "defaults" INCLUDING defaults
// priority 5 - PGProperty defaults for PGHOST, PGPORT, PGDBNAME
String urlServer = url;
String urlArgs = "";
int qPos = url.indexOf('?');
if (qPos != -1) {
urlServer = url.substring(0, qPos);
urlArgs = url.substring(qPos + 1);
}
if (!urlServer.startsWith("jdbc:postgresql:")) {
LOGGER.log(Level.FINE, "JDBC URL must start with \"jdbc:postgresql:\" but was: {0}", url);
return null;
}
urlServer = urlServer.substring("jdbc:postgresql:".length());
if ("//".equals(urlServer) || "///".equals(urlServer)) {
urlServer = "";
} else if (urlServer.startsWith("//")) {
urlServer = urlServer.substring(2);
long slashCount = urlServer.chars().filter(ch -> ch == '/').count();
if (slashCount > 1) {
LOGGER.log(Level.WARNING, "JDBC URL contains too many / characters: {0}", url);
return null;
}
int slash = urlServer.indexOf('/');
if (slash == -1) {
LOGGER.log(Level.WARNING, "JDBC URL must contain a / at the end of the host or port: {0}", url);
return null;
}
if (!urlServer.endsWith("/")) {
String value = urlDecode(urlServer.substring(slash + 1));
if (value == null) {
return null;
}
PGProperty.PG_DBNAME.set(priority1Url, value);
}
urlServer = urlServer.substring(0, slash);
String[] addresses = urlServer.split(",");
StringBuilder hosts = new StringBuilder();
StringBuilder ports = new StringBuilder();
for (String address : addresses) {
int portIdx = address.lastIndexOf(':');
if (portIdx != -1 && address.lastIndexOf(']') < portIdx) {
String portStr = address.substring(portIdx + 1);
ports.append(portStr);
CharSequence hostStr = address.subSequence(0, portIdx);
if (hostStr.length() == 0) {
hosts.append(PGProperty.PG_HOST.getDefaultValue());
} else {
hosts.append(hostStr);
}
} else {
ports.append(PGProperty.PG_PORT.getDefaultValue());
hosts.append(address);
}
ports.append(',');
hosts.append(',');
}
ports.setLength(ports.length() - 1);
hosts.setLength(hosts.length() - 1);
PGProperty.PG_HOST.set(priority1Url, hosts.toString());
PGProperty.PG_PORT.set(priority1Url, ports.toString());
} else if (urlServer.startsWith("/")) {
return null;
} else {
String value = urlDecode(urlServer);
if (value == null) {
return null;
}
priority1Url.setProperty(PGProperty.PG_DBNAME.getName(), value);
}
// parse the args part of the url
String[] args = urlArgs.split("&");
String serviceName = null;
for (String token : args) {
if (token.isEmpty()) {
continue;
}
int pos = token.indexOf('=');
if (pos == -1) {
priority1Url.setProperty(token, "");
} else {
String pName = PGPropertyUtil.translatePGServiceToPGProperty(token.substring(0, pos));
String pValue = urlDecode(token.substring(pos + 1));
if (pValue == null) {
return null;
}
if (PGProperty.SERVICE.getName().equals(pName)) {
serviceName = pValue;
} else {
priority1Url.setProperty(pName, pValue);
}
}
}
// load pg_service.conf
if (serviceName != null) {
LOGGER.log(Level.FINE, "Processing option [?service={0}]", serviceName);
Properties result = PgServiceConfParser.getServiceProperties(serviceName);
if (result == null) {
LOGGER.log(Level.WARNING, "Definition of service [{0}] not found", serviceName);
return null;
}
priority3Service.putAll(result);
}
// combine result based on order of priority
Properties result = new Properties();
result.putAll(priority1Url);
if (defaults != null) {
// priority 2 - forEach() returns all entries EXCEPT defaults
defaults.forEach(result::putIfAbsent);
}
priority3Service.forEach(result::putIfAbsent);
if (defaults != null) {
// priority 4 - stringPropertyNames() returns all entries INCLUDING defaults
defaults.stringPropertyNames().forEach(s -> result.putIfAbsent(s, castNonNull(defaults.getProperty(s))));
}
// priority 5 - PGProperty defaults for PGHOST, PGPORT, PGDBNAME
result.putIfAbsent(PGProperty.PG_PORT.getName(), castNonNull(PGProperty.PG_PORT.getDefaultValue()));
result.putIfAbsent(PGProperty.PG_HOST.getName(), castNonNull(PGProperty.PG_HOST.getDefaultValue()));
if (PGProperty.USER.getOrDefault(result) != null) {
result.putIfAbsent(PGProperty.PG_DBNAME.getName(), castNonNull(PGProperty.USER.getOrDefault(result)));
}
// consistency check
if (!PGPropertyUtil.propertiesConsistencyCheck(result)) {
return null;
}
// try to load .pgpass if password is missing
if (PGProperty.PASSWORD.getOrDefault(result) == null) {
String password = PgPassParser.getPassword(
PGProperty.PG_HOST.getOrDefault(result), PGProperty.PG_PORT.getOrDefault(result), PGProperty.PG_DBNAME.getOrDefault(result), PGProperty.USER.getOrDefault(result)
);
if (password != null && !password.isEmpty()) {
PGProperty.PASSWORD.set(result, password);
}
}
//
return result;
}
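// Usage sketch for parseURL (illustrative only; the host, port and database below are
// placeholder values chosen for the example):
//
//   Properties parsed = Driver.parseURL(
//       "jdbc:postgresql://localhost:5432/test?connectTimeout=5", null);
//   if (parsed != null) {
//     String host = PGProperty.PG_HOST.getOrDefault(parsed);   // "localhost"
//     String port = PGProperty.PG_PORT.getOrDefault(parsed);   // "5432"
//     String db = PGProperty.PG_DBNAME.getOrDefault(parsed);   // "test"
//   }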
// decode url, on failure log and return null
private static /* @Nullable */ String urlDecode(String url) {
try {
return URLCoder.decode(url);
} catch (IllegalArgumentException e) {
LOGGER.log(Level.FINE, "Url [{0}] parsing failed with error [{1}]", new Object[]{url, e.getMessage()});
}
return null;
}
/**
* @return the address portion of the URL
*/
private static HostSpec[] hostSpecs(Properties props) {
String[] hosts = castNonNull(PGProperty.PG_HOST.getOrDefault(props)).split(",");
String[] ports = castNonNull(PGProperty.PG_PORT.getOrDefault(props)).split(",");
String localSocketAddress = PGProperty.LOCAL_SOCKET_ADDRESS.getOrDefault(props);
HostSpec[] hostSpecs = new HostSpec[hosts.length];
for (int i = 0; i < hostSpecs.length; i++) {
hostSpecs[i] = new HostSpec(hosts[i], Integer.parseInt(ports[i]), localSocketAddress);
}
return hostSpecs;
}
/**
* @return the timeout from the URL, in milliseconds
*/
private static long timeout(Properties props) {
String timeout = PGProperty.LOGIN_TIMEOUT.getOrDefault(props);
if (timeout != null) {
try {
return (long) (Float.parseFloat(timeout) * 1000);
} catch (NumberFormatException e) {
LOGGER.log(Level.WARNING, "Couldn't parse loginTimeout value: {0}", timeout);
}
}
return (long) DriverManager.getLoginTimeout() * 1000;
}
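// Sketch of how the loginTimeout connection property feeds this method (illustrative;
// "2.5" is an arbitrary example value). timeout(props) parses the value as a float number
// of seconds and converts it to milliseconds:
//
//   Properties props = new Properties();
//   PGProperty.LOGIN_TIMEOUT.set(props, "2.5"); // timeout(props) would yield 2500 ms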
/**
* This method was added in v6.5, and simply returns an SQLException for an unimplemented method. I
* decided to do it this way while implementing the JDBC2 extensions to JDBC, as it should help
* keep the overall driver size down. It now requires the calling Class and the function name to
* help when the driver is used with closed software that doesn't report the stack trace.
*
* @param callClass the call Class
* @param functionName the name of the unimplemented function with the type of its arguments
* @return PSQLException with a localized message giving the complete description of the
* unimplemented function
*/
public static SQLFeatureNotSupportedException notImplemented(Class<?> callClass,
String functionName) {
return new SQLFeatureNotSupportedException(
GT.tr("Method {0} is not yet implemented.", callClass.getName() + "." + functionName),
PSQLState.NOT_IMPLEMENTED.getState());
}
@Override
public Logger getParentLogger() {
return PARENT_LOGGER;
}
public static SharedTimer getSharedTimer() {
return SHARED_TIMER;
}
/**
* Register the driver against {@link DriverManager}. This is done automatically when the class is
* loaded. Dropping the driver from DriverManager's list is possible using {@link #deregister()}
* method.
*
* @throws IllegalStateException if the driver is already registered
* @throws SQLException if registering the driver fails
*/
public static void register() throws SQLException {
if (isRegistered()) {
throw new IllegalStateException(
"Driver is already registered. It can only be registered once.");
}
Driver registeredDriver = new Driver();
DriverManager.registerDriver(registeredDriver);
Driver.registeredDriver = registeredDriver;
}
/**
* According to JDBC specification, this driver is registered against {@link DriverManager} when
* the class is loaded. To avoid leaks, this method allows unregistering the driver so that the
* class can be gc'ed if necessary.
*
* @throws IllegalStateException if the driver is not registered
* @throws SQLException if deregistering the driver fails
*/
public static void deregister() throws SQLException {
if (registeredDriver == null) {
throw new IllegalStateException(
"Driver is not registered (or it has not been registered using Driver.register() method)");
}
DriverManager.deregisterDriver(registeredDriver);
registeredDriver = null;
}
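// Explicit registration lifecycle sketch (illustrative; normally the driver registers itself
// when the class is loaded, so manual calls are only needed in special cases such as
// undeploying a web application from a container):
//
//   if (!Driver.isRegistered()) {
//     Driver.register();
//   }
//   // ... DriverManager.getConnection("jdbc:postgresql://localhost/test", props) ...
//   Driver.deregister(); // allow the driver classes to be garbage collected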
/**
* @return {@code true} if the driver is registered against {@link DriverManager}
*/
public static boolean isRegistered() {
return registeredDriver != null;
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/PGConnection.java 0100664 0000000 0000000 00000034216 00000250600 025036 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2003, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql;
import org.postgresql.copy.CopyManager;
import org.postgresql.fastpath.Fastpath;
import org.postgresql.jdbc.AutoSave;
import org.postgresql.jdbc.PreferQueryMode;
import org.postgresql.largeobject.LargeObjectManager;
import org.postgresql.replication.PGReplicationConnection;
import org.postgresql.util.GT;
import org.postgresql.util.PGobject;
import org.postgresql.util.PSQLException;
import org.postgresql.util.PSQLState;
import org.postgresql.util.PasswordUtil;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.sql.Array;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.Arrays;
import java.util.Map;
/**
* This interface defines the public PostgreSQL extensions to java.sql.Connection. All Connections
* returned by the PostgreSQL driver implement PGConnection.
*/
public interface PGConnection {
/**
* Creates an {@link Array} wrapping elements. This is similar to
* {@link java.sql.Connection#createArrayOf(String, Object[])}, but also
* provides support for primitive arrays.
*
* @param typeName
* The SQL name of the type to map the elements to.
* Must not be {@code null}.
* @param elements
* The array of objects to map. A {@code null} value will result in
* an {@link Array} representing {@code null}.
* @return An {@link Array} wrapping elements.
* @throws SQLException
* If for some reason the array cannot be created.
* @see java.sql.Connection#createArrayOf(String, Object[])
*/
Array createArrayOf(String typeName, /* @Nullable */ Object elements) throws SQLException;
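// Illustrative sketch (assumes "conn" is a java.sql.Connection obtained from this driver
// and that the target type is int4[]):
//
//   PGConnection pgConn = conn.unwrap(PGConnection.class);
//   Array ints = pgConn.createArrayOf("int4", new int[]{1, 2, 3});
//   try (PreparedStatement ps = conn.prepareStatement("SELECT ?::int4[]")) {
//     ps.setArray(1, ints);
//     try (ResultSet rs = ps.executeQuery()) { /* ... */ }
//   }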
/**
* This method returns any notifications that have been received since the last call to this
* method. Returns null if there have been no notifications.
*
* @return notifications that have been received
* @throws SQLException if something wrong happens
* @since 7.3
*/
PGNotification[] getNotifications() throws SQLException;
/**
* This method returns any notifications that have been received since the last call to this
* method. Returns null if there have been no notifications. A timeout can be specified so the
* driver waits for notifications.
*
* @param timeoutMillis when 0, blocks forever. when > 0, blocks up to the specified number of millis
* or until at least one notification has been received. If more than one notification is
* about to be received, these will be returned in one batch.
* @return notifications that have been received
* @throws SQLException if something wrong happens
* @since 43
*/
PGNotification[] getNotifications(int timeoutMillis) throws SQLException;
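// LISTEN/NOTIFY polling sketch (illustrative; "my_channel" is a placeholder channel name
// and "conn" a connection obtained from this driver):
//
//   try (Statement stmt = conn.createStatement()) {
//     stmt.execute("LISTEN my_channel");
//   }
//   PGConnection pgConn = conn.unwrap(PGConnection.class);
//   PGNotification[] notifications = pgConn.getNotifications(500); // wait up to 500 ms
//   if (notifications != null) {
//     for (PGNotification n : notifications) {
//       // n.getName(), n.getParameter(), n.getPID()
//     }
//   }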
/**
* This returns the COPY API for the current connection.
*
* @return COPY API for the current connection
* @throws SQLException if something wrong happens
* @since 8.4
*/
CopyManager getCopyAPI() throws SQLException;
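// COPY sketch (illustrative; assumes a table "my_table(id int, name text)" exists and
// java.io.StringReader is available):
//
//   CopyManager copy = conn.unwrap(PGConnection.class).getCopyAPI();
//   long rows = copy.copyIn("COPY my_table FROM STDIN WITH (FORMAT csv)",
//       new StringReader("1,foo\n2,bar\n"));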
/**
* This returns the LargeObject API for the current connection.
*
* @return LargeObject API for the current connection
* @throws SQLException if something wrong happens
* @since 7.3
*/
LargeObjectManager getLargeObjectAPI() throws SQLException;
/**
* This returns the Fastpath API for the current connection.
*
*
* Note: This API is somewhat obsolete, as one may achieve similar performance
* and greater functionality by setting up a prepared statement to define
* the function call. Then, executing the statement with binary transmission of parameters
* and results substitutes for a fast-path function call.
* @return Fastpath API for the current connection
* @throws SQLException if something wrong happens
* @since 7.3
*/
Fastpath getFastpathAPI() throws SQLException;
/**
* This allows client code to add a handler for one of org.postgresql's more unique data types. It
* is approximately equivalent to addDataType(type, Class.forName(name)).
*
* @param type JDBC type name
* @param className class name
* @throws RuntimeException if the type cannot be registered (class not found, etc).
* @deprecated As of 8.0, replaced by {@link #addDataType(String, Class)}. This deprecated method
* does not work correctly for registering classes that cannot be directly loaded by
* the JDBC driver's classloader.
*/
@Deprecated
void addDataType(String type, String className);
/**
* This allows client code to add a handler for one of org.postgresql's more unique data types.
*
* NOTE: This is not part of JDBC, but an extension.
*
* The best way to use this is as follows:
*
*
* ...
* ((org.postgresql.PGConnection)myconn).addDataType("mytype", my.class.name.class);
* ...
*
*
* where myconn is an open Connection to org.postgresql.
*
* The handling class must extend org.postgresql.util.PGobject
*
* @param type the PostgreSQL type to register
* @param klass the class implementing the Java representation of the type; this class must
* implement {@link org.postgresql.util.PGobject}.
* @throws SQLException if klass does not implement
* {@link org.postgresql.util.PGobject}.
* @see org.postgresql.util.PGobject
* @since 8.0
*/
void addDataType(String type, Class<? extends PGobject> klass) throws SQLException;
/**
* Set the default statement reuse threshold before enabling server-side prepare. See
* {@link org.postgresql.PGStatement#setPrepareThreshold(int)} for details.
*
* @param threshold the new threshold
* @since build 302
*/
void setPrepareThreshold(int threshold);
/**
* Get the default server-side prepare reuse threshold for statements created from this
* connection.
*
* @return the current threshold
* @since build 302
*/
int getPrepareThreshold();
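// Sketch (illustrative): switch to server-side prepare from the very first execution.
//
//   PGConnection pgConn = conn.unwrap(PGConnection.class);
//   pgConn.setPrepareThreshold(1);
//   int threshold = pgConn.getPrepareThreshold(); // 1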
/**
* Set the default fetch size for statements created from this connection.
*
* @param fetchSize new default fetch size
* @throws SQLException if a negative fetchSize parameter is specified
* @see Statement#setFetchSize(int)
*/
void setDefaultFetchSize(int fetchSize) throws SQLException;
/**
* Get the default fetch size for statements created from this connection.
*
* @return current state for default fetch size
* @see PGProperty#DEFAULT_ROW_FETCH_SIZE
* @see Statement#getFetchSize()
*/
int getDefaultFetchSize();
/**
* Return the process ID (PID) of the backend server process handling this connection.
*
* @return PID of backend server process.
*/
int getBackendPID();
/**
* Sends a query cancellation for this connection.
* @throws SQLException if there are problems cancelling the query
*/
void cancelQuery() throws SQLException;
/**
* Return the given string suitably quoted to be used as an identifier in an SQL statement string.
* Quotes are added only if necessary (i.e., if the string contains non-identifier characters or
* would be case-folded). Embedded quotes are properly doubled.
*
* @param identifier input identifier
* @return the escaped identifier
* @throws SQLException if something goes wrong
*/
String escapeIdentifier(String identifier) throws SQLException;
/**
* Return the given string suitably quoted to be used as a string literal in an SQL statement
* string. Embedded single-quotes and backslashes are properly doubled. Note that quote_literal
* returns null on null input.
*
* @param literal input literal
* @return the quoted literal
* @throws SQLException if something goes wrong
*/
String escapeLiteral(String literal) throws SQLException;
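// Quoting sketch (illustrative; the inputs are arbitrary examples):
//
//   PGConnection pgConn = conn.unwrap(PGConnection.class);
//   String ident = pgConn.escapeIdentifier("My Table");  // safe to embed as an identifier
//   String lit = pgConn.escapeLiteral("O'Reilly");       // embedded quotes are doubled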
/**
* Returns the query mode for this connection.
*
* When running in simple query mode, certain features are not available: callable statements,
* partial result set fetch, bytea type, etc.
*
* The list of supported features is subject to change.
*
* @return the preferred query mode
* @see PreferQueryMode
*/
PreferQueryMode getPreferQueryMode();
/**
* Connection configuration regarding automatic per-query savepoints.
*
* @return connection configuration regarding automatic per-query savepoints
* @see PGProperty#AUTOSAVE
*/
AutoSave getAutosave();
/**
* Configures if connection should use automatic savepoints.
* @param autoSave connection configuration regarding automatic per-query savepoints
* @see PGProperty#AUTOSAVE
*/
void setAutosave(AutoSave autoSave);
/**
* @return replication API for the current connection
*/
PGReplicationConnection getReplicationAPI();
/**
* Change a user's password to the specified new password.
*
*
* If the specific encryption type is not specified, this method defaults to querying the database server for the server's default password_encryption.
* This method does not send the new password in plain text to the server.
* Instead, it encrypts the password locally and sends the encoded hash so that the plain text password is never sent on the wire.
*
*
*
* Acceptable values for encryptionType are null, "md5", or "scram-sha-256".
* Users should avoid "md5" unless they are explicitly targeting an older server that does not support the more secure SCRAM.
*
*
* @param user The username of the database user
* @param newPassword The new password for the database user. The implementation will zero
* out the array after use
* @param encryptionType The type of password encryption to use or null if the database server default should be used.
* @throws SQLException If the password could not be altered
*/
default void alterUserPassword(String user, char[] newPassword, /* @Nullable */ String encryptionType) throws SQLException {
try (Statement stmt = ((Connection) this).createStatement()) {
if (encryptionType == null) {
try (ResultSet rs = stmt.executeQuery("SHOW password_encryption")) {
if (!rs.next()) {
throw new PSQLException(GT.tr("Expected a row when reading password_encryption but none was found"),
PSQLState.NO_DATA);
}
encryptionType = rs.getString(1);
if (encryptionType == null) {
throw new PSQLException(GT.tr("SHOW password_encryption returned null value"),
PSQLState.NO_DATA);
}
}
}
String sql = PasswordUtil.genAlterUserPasswordSQL(user, newPassword, encryptionType);
stmt.execute(sql);
} finally {
Arrays.fill(newPassword, (char) 0);
}
}
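// Password change sketch (illustrative; "app_user" and the password are placeholders):
//
//   PGConnection pgConn = conn.unwrap(PGConnection.class);
//   char[] newPassword = "s3cret".toCharArray();
//   pgConn.alterUserPassword("app_user", newPassword, "scram-sha-256");
//   // newPassword has been zeroed out by the call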
/**
* Returns the current values of all parameters reported by the server.
*
* PostgreSQL reports values for a subset of parameters (GUCs) to the client
* at connect-time, then sends update messages whenever the values change
* during a session. PgJDBC records the latest values and exposes them to client
* applications via getParameterStatuses().
*
* PgJDBC exposes individual accessors for some of these parameters as
* listed below. They are more backwards-compatible and should be preferred
* where possible.
*
* Not all parameters are reported, only those marked GUC_REPORT in the source
* code. The pg_settings view does not expose information about which parameters
* are reportable. PgJDBC's map will only contain the parameters the server
* reports values for, so you cannot use this method as a substitute for running
* a SHOW paramname; or SELECT current_setting('paramname'); query for arbitrary
* parameters.
*
* Parameter names are case-insensitive and case-preserving in this map, like in
* PostgreSQL itself. So DateStyle and datestyle are the same key.
*
* As of PostgreSQL 11 the reportable parameter list, and related PgJDBC
* interfaces or accessors, are:
*
* - application_name: {@link java.sql.Connection#getClientInfo()},
*   {@link java.sql.Connection#setClientInfo(java.util.Properties)}
*   and the ApplicationName connection property
* - client_encoding: PgJDBC always sets this to UTF8;
*   see the allowEncodingChanges connection property
* - DateStyle: PgJDBC requires this to always be set to ISO
* - standard_conforming_strings: indirectly via {@link #escapeLiteral(String)}
* - TimeZone: set from the JDK timezone, see {@link java.util.TimeZone#getDefault()}
*   and {@link java.util.TimeZone#setDefault(TimeZone)}
* - integer_datetimes
* - IntervalStyle
* - server_encoding
* - server_version
* - is_superuser
* - session_authorization
*
* Note that some PgJDBC operations will change server parameters
* automatically.
*
* @return unmodifiable map of case-insensitive parameter names to parameter values
* @since 42.2.6
*/
Map<String, String> getParameterStatuses();
/**
* Shorthand for getParameterStatuses().get(...).
*
* @param parameterName case-insensitive parameter name
* @return parameter value if defined, or null if no parameter known
* @see #getParameterStatuses
* @since 42.2.6
*/
/* @Nullable */ String getParameterStatus(String parameterName);
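// Parameter status sketch (illustrative; "conn" is a connection obtained from this driver):
//
//   PGConnection pgConn = conn.unwrap(PGConnection.class);
//   String serverVersion = pgConn.getParameterStatus("server_version");
//   String timeZone = pgConn.getParameterStatus("TimeZone"); // lookup is case-insensitive
//   Map<String, String> all = pgConn.getParameterStatuses();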
/**
* Turn on/off adaptive fetch for connection. Existing statements and resultSets won't be affected
* by change here.
*
* @param adaptiveFetch desired state of adaptive fetch.
*/
void setAdaptiveFetch(boolean adaptiveFetch);
/**
* Get state of adaptive fetch for connection.
*
* @return state of adaptive fetch (turned on or off)
*/
boolean getAdaptiveFetch();
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/PGEnvironment.java 0100664 0000000 0000000 00000005333 00000250600 025241 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2021, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.util.HashMap;
import java.util.Map;
/**
* Some environment variables are intended to have same meaning as libpq describes here:
* https://www.postgresql.org/docs/current/libpq-envars.html
*/
public enum PGEnvironment {
/**
* Specified location of password file.
*/
ORG_POSTGRESQL_PGPASSFILE(
"org.postgresql.pgpassfile",
null,
"Specified location of password file."),
/**
* Specified location of password file.
*/
PGPASSFILE(
"PGPASSFILE",
"pgpass",
"Specified location of password file."),
/**
* The connection service resource (file, url) allows connection parameters to be associated
* with a single service name.
*/
ORG_POSTGRESQL_PGSERVICEFILE(
"org.postgresql.pgservicefile",
null,
"Specifies the service resource to resolve connection properties."),
/**
* The connection service resource (file, url) allows connection parameters to be associated
* with a single service name.
*/
PGSERVICEFILE(
"PGSERVICEFILE",
"pg_service.conf",
"Specifies the service resource to resolve connection properties."),
/**
* sets the directory containing the PGSERVICEFILE file and possibly other system-wide
* configuration files.
*/
PGSYSCONFDIR(
"PGSYSCONFDIR",
null,
"Specifies the directory containing the PGSERVICEFILE file"),
;
private final String name;
private final /* @Nullable */ String defaultValue;
private final String description;
PGEnvironment(String name, /* @Nullable */ String defaultValue, String description) {
this.name = name;
this.defaultValue = defaultValue;
this.description = description;
}
private static final Map<String, PGEnvironment> PROPS_BY_NAME = new HashMap<>();
static {
for (PGEnvironment prop : PGEnvironment.values()) {
if (PROPS_BY_NAME.put(prop.getName(), prop) != null) {
throw new IllegalStateException("Duplicate PGProperty name: " + prop.getName());
}
}
}
/**
* Returns the name of the parameter.
*
* @return the name of the parameter
*/
public String getName() {
return name;
}
/**
* Returns the default value for this parameter.
*
* @return the default value for this parameter or null
*/
public /* @Nullable */ String getDefaultValue() {
return defaultValue;
}
/**
* Returns the description for this parameter.
*
* @return the description for this parameter
*/
public String getDescription() {
return description;
}
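// Sketch of one way a caller might resolve an effective value for one of these entries
// (illustrative; not necessarily how the driver resolves them internally):
//
//   String location = System.getenv(PGEnvironment.PGSERVICEFILE.getName());
//   if (location == null) {
//     location = PGEnvironment.PGSERVICEFILE.getDefaultValue(); // "pg_service.conf"
//   }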
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/PGNotification.java 0100664 0000000 0000000 00000001650 00000250600 025361 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2003, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql;
/**
* This interface defines the public PostgreSQL extension for Notifications.
*/
public interface PGNotification {
/**
* Returns name of this notification.
*
* @return name of this notification
* @since 7.3
*/
String getName();
/**
* Returns the process id of the backend process making this notification.
*
* @return process id of the backend process making this notification
* @since 7.3
*/
int getPID();
/**
* Returns additional information from the notifying process. This feature has only been
* implemented in server versions 9.0 and later, so previous versions will always return an empty
* String.
*
* @return additional information from the notifying process
* @since 8.0
*/
String getParameter();
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/PGProperty.java 0100664 0000000 0000000 00000115545 00000250600 024570 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2004, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql;
import org.postgresql.util.DriverInfo;
import org.postgresql.util.GT;
import org.postgresql.util.PSQLException;
import org.postgresql.util.PSQLState;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.sql.Connection;
import java.sql.DriverPropertyInfo;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
/**
* All connection parameters that can be either set in JDBC URL, in Driver properties or in
* datasource setters.
*/
public enum PGProperty {
/**
* Specifies if number of rows, used during fetching rows of a result set, should be computed
* dynamically. Number of rows will be calculated by dividing maxResultBuffer size by max row size
* observed so far, rounded down. First fetch will have number of rows declared in
* defaultRowFetchSize. Number of rows can be limited by adaptiveFetchMinimum and
* adaptiveFetchMaximum. Requires declaring of maxResultBuffer and defaultRowFetchSize to work.
* Default value is false.
*/
ADAPTIVE_FETCH(
"adaptiveFetch",
"false",
"Specifies if number of rows fetched in ResultSet should be adaptive to maxResultBuffer and max row size."),
/**
* Specifies the highest number of rows which can be calculated by adaptiveFetch. Requires
* adaptiveFetch set to true to work. Default value is -1 (used as infinity).
*/
ADAPTIVE_FETCH_MAXIMUM(
"adaptiveFetchMaximum",
"-1",
"Specifies maximum number of rows used by adaptive fetch."),
/**
* Specifies the lowest number of rows which can be calculated by adaptiveFetch. Requires
* adaptiveFetch set to true to work. Default value is 0.
*/
ADAPTIVE_FETCH_MINIMUM(
"adaptiveFetchMinimum",
"0",
"Specifies minimum number of rows used by adaptive fetch."),
/**
* When using the V3 protocol the driver monitors changes in certain server configuration
* parameters that should not be touched by end users. The {@code client_encoding} setting is set
* by the driver and should not be altered. If the driver detects a change it will abort the
* connection.
*/
ALLOW_ENCODING_CHANGES(
"allowEncodingChanges",
"false",
"Allow for changes in client_encoding"),
/**
* The application name (requires server version >= 9.0).
*/
APPLICATION_NAME(
"ApplicationName",
DriverInfo.DRIVER_NAME,
"Name of the Application (backend >= 9.0)"),
/**
* Assume the server is at least that version.
*/
ASSUME_MIN_SERVER_VERSION(
"assumeMinServerVersion",
null,
"Assume the server is at least that version"),
/**
* AuthenticationPluginClass
*/
AUTHENTICATION_PLUGIN_CLASS_NAME(
"authenticationPluginClassName",
null,
"Name of class which implements AuthenticationPlugin"
),
/**
* Specifies what the driver should do if a query fails. In {@code autosave=always} mode, JDBC driver sets a savepoint before each query,
* and rolls back to that savepoint in case of failure. In {@code autosave=never} mode (default), no savepoint dance is made ever.
* In {@code autosave=conservative} mode, savepoint is set for each query, however the rollback is done only for rare cases
* like 'cached statement cannot change return type' or 'statement XXX is not valid', so the JDBC driver rolls back and retries
*/
AUTOSAVE(
"autosave",
"never",
"Specifies what the driver should do if a query fails. In autosave=always mode, JDBC driver sets a savepoint before each query, "
+ "and rolls back to that savepoint in case of failure. In autosave=never mode (default), no savepoint dance is made ever. "
+ "In autosave=conservative mode, safepoint is set for each query, however the rollback is done only for rare cases"
+ " like 'cached statement cannot change return type' or 'statement XXX is not valid' so JDBC driver rollsback and retries",
false,
new String[]{"always", "never", "conservative"}),
/**
* Use binary format for sending and receiving data if possible.
*/
BINARY_TRANSFER(
"binaryTransfer",
"true",
"Enable binary transfer for supported built-in types if possible. "
+ "Setting this to false disables any binary transfer unless it's individually activated "
+ "for each type with `binaryTransferEnable`."),
/**
* Comma separated list of types to disable binary transfer. Either OID numbers or names.
* Overrides values in the driver default set and values set with binaryTransferEnable.
*/
BINARY_TRANSFER_DISABLE(
"binaryTransferDisable",
"",
"Comma separated list of types to disable binary transfer. Either OID numbers or names. "
+ "Overrides values in the driver default set and values set with binaryTransferEnable."),
/**
* Comma separated list of types to enable binary transfer. Either OID numbers or names
*/
BINARY_TRANSFER_ENABLE(
"binaryTransferEnable",
"",
"Comma separated list of types to enable binary transfer. Either OID numbers or names."),
/**
* Cancel command is sent out of band over its own connection, so cancel message can itself get
* stuck.
* This property controls "connect timeout" and "socket timeout" used for cancel commands.
* The timeout is specified in seconds. Default value is 10 seconds.
*/
CANCEL_SIGNAL_TIMEOUT(
"cancelSignalTimeout",
"10",
"The timeout that is used for sending cancel command."),
/**
* Channel binding is a method for the server to authenticate itself to the
* client. It is only supported over SSL connections with PostgreSQL 11 or later
* servers using the SCRAM authentication method.
*/
CHANNEL_BINDING(
"channelBinding",
"prefer",
"This option controls the client's use of channel binding.",
false,
new String[] {"disable", "prefer", "require"}),
/**
* Determine whether SAVEPOINTS used in AUTOSAVE will be released per query or not
*/
CLEANUP_SAVEPOINTS(
"cleanupSavepoints",
"false",
"Determine whether SAVEPOINTS used in AUTOSAVE will be released per query or not",
false,
new String[]{"true", "false"}),
/**
* The timeout value used for socket connect operations. If connecting to the server takes longer
* than this value, the connection is broken.
*
* The timeout is specified in seconds and a value of zero means that it is disabled.
*/
CONNECT_TIMEOUT(
"connectTimeout",
"10",
"The timeout value in seconds used for socket connect operations."),
/**
* Specify the schema (or several schema separated by commas) to be set in the search-path. This schema will be used to resolve
* unqualified object names used in statements over this connection.
*/
CURRENT_SCHEMA(
"currentSchema",
null,
"Specify the schema (or several schema separated by commas) to be set in the search-path"),
/**
* Specifies the maximum number of fields to be cached per connection. A value of {@code 0} disables the cache.
*/
DATABASE_METADATA_CACHE_FIELDS(
"databaseMetadataCacheFields",
"65536",
"Specifies the maximum number of fields to be cached per connection. A value of {@code 0} disables the cache."),
/**
* Specifies the maximum size (in megabytes) of fields to be cached per connection. A value of {@code 0} disables the cache.
*/
DATABASE_METADATA_CACHE_FIELDS_MIB(
"databaseMetadataCacheFieldsMiB",
"5",
"Specifies the maximum size (in megabytes) of fields to be cached per connection. A value of {@code 0} disables the cache."),
/**
* Default parameter for {@link java.sql.Statement#getFetchSize()}. A value of {@code 0} means
* that all rows are fetched at once.
*/
DEFAULT_ROW_FETCH_SIZE(
"defaultRowFetchSize",
"0",
"Positive number of rows that should be fetched from the database when more rows are needed for ResultSet by each fetch iteration"),
/**
* Enable optimization that disables column name sanitiser.
*/
DISABLE_COLUMN_SANITISER(
"disableColumnSanitiser",
"false",
"Enable optimization that disables column name sanitiser"),
/**
* Specifies how the driver transforms JDBC escape call syntax into underlying SQL, for invoking procedures or functions. (backend >= 11)
* In {@code escapeSyntaxCallMode=select} mode (the default), the driver always uses a SELECT statement (allowing function invocation only).
* In {@code escapeSyntaxCallMode=callIfNoReturn} mode, the driver uses a CALL statement (allowing procedure invocation) if there is no return parameter specified, otherwise the driver uses a SELECT statement.
* In {@code escapeSyntaxCallMode=call} mode, the driver always uses a CALL statement (allowing procedure invocation only).
*/
ESCAPE_SYNTAX_CALL_MODE(
"escapeSyntaxCallMode",
"select",
"Specifies how the driver transforms JDBC escape call syntax into underlying SQL, for invoking procedures or functions. (backend >= 11)"
+ "In escapeSyntaxCallMode=select mode (the default), the driver always uses a SELECT statement (allowing function invocation only)."
+ "In escapeSyntaxCallMode=callIfNoReturn mode, the driver uses a CALL statement (allowing procedure invocation) if there is no return parameter specified, otherwise the driver uses a SELECT statement."
+ "In escapeSyntaxCallMode=call mode, the driver always uses a CALL statement (allowing procedure invocation only).",
false,
new String[]{"select", "callIfNoReturn", "call"}),
/**
* Group startup parameters in a transaction.
* This is important in pool-by-transaction scenarios in order to make sure that all the statements
* reach the same connection that is being initialized. All of the startup parameters will be wrapped
* in a transaction.
* Note this is off by default, as pgbouncer in statement mode does not allow multi-statement transactions.
* @deprecated since we can send the startup parameters as a multistatement transaction
*/
@Deprecated
GROUP_STARTUP_PARAMETERS(
"groupStartupParameters",
"false",
"This is important in pool-by-transaction scenarios in order to make sure that all "
+ "the statements reaches the same connection that is being initialized."
),
GSS_ENC_MODE(
"gssEncMode",
"allow",
"Force Encoded GSS Mode",
false,
new String[]{"disable", "allow", "prefer", "require"}
),
/**
* Force one of
* - SSPI (Windows transparent single-sign-on)
* - GSSAPI (Kerberos, via JSSE)
* to be used when the server requests Kerberos or SSPI authentication.
*/
GSS_LIB(
"gsslib",
"auto",
"Force SSSPI or GSSAPI",
false,
new String[]{"auto", "sspi", "gssapi"}),
/**
* After requesting an upgrade to SSL from the server, there are reports of the server not responding due to a failover;
* without a timeout here, the client can wait forever. The pattern for requesting a GSS encrypted connection is the same, so we provide the same
* timeout mechanism. This timeout will be set before the request and reset after.
*/
GSS_RESPONSE_TIMEOUT(
"gssResponseTimeout",
"5000",
"Time in milliseconds we wait for a response from the server after requesting a GSS upgrade"),
/**
* Flag to enable/disable the obtaining the default GSS credentials from a pre-existing ccache,
* rather than using JAAS. This also allows GSS to work in environments where the default
* kerberos principal a user has is not user@DEFAULT_REALM, but some other user (this is valid,
* and often the case in more advanced Kerberos setups). Finally, this also means that if
* the "native" GSS implementation is used (i.e. the local system GSS libraries), all means of
* fetching the default credential are supported. Currently, JAAS is pure java on Linux, and
* does not support the use of KCM (and only supports file-based ccaches and keytabs).
*/
GSS_USE_DEFAULT_CREDS(
"gssUseDefaultCreds",
"false",
"Use the default GSS credentials the process already has, rather than a JAAS login"),
/**
* Enable mode to filter out the names of database objects for which the current user has no privileges
* granted from appearing in the DatabaseMetaData returned by the driver.
*/
HIDE_UNPRIVILEGED_OBJECTS(
"hideUnprivilegedObjects",
"false",
"Enable hiding of database objects for which the current user has no privileges granted from the DatabaseMetaData"),
HOST_RECHECK_SECONDS(
"hostRecheckSeconds",
"10",
"Specifies period (seconds) after which the host status is checked again in case it has changed"),
/**
* Specifies the name of the JAAS system or application login configuration.
*/
JAAS_APPLICATION_NAME(
"jaasApplicationName",
"pgjdbc",
"Specifies the name of the JAAS system or application login configuration."),
/**
* Flag to enable/disable obtaining a GSS credential via JAAS login before authenticating.
* Useful if setting system property javax.security.auth.useSubjectCredsOnly=false
* or using native GSS with system property sun.security.jgss.native=true
*/
JAAS_LOGIN(
"jaasLogin",
"true",
"Login with JAAS before doing GSSAPI authentication"),
/**
* The Kerberos service name to use when authenticating with GSSAPI. This is equivalent to libpq's
* PGKRBSRVNAME environment variable.
*/
KERBEROS_SERVER_NAME(
"kerberosServerName",
null,
"The Kerberos service name to use when authenticating with GSSAPI."),
LOAD_BALANCE_HOSTS(
"loadBalanceHosts",
"false",
"If disabled hosts are connected in the given order. If enabled hosts are chosen randomly from the set of suitable candidates"),
/**
* If this is set then the client side will bind to this address. This is useful if you need
* to choose which interface to connect to.
*/
LOCAL_SOCKET_ADDRESS(
"localSocketAddress",
null,
"Local Socket address, if set bind the client side of the socket to this address"),
/**
* This property is no longer used by the driver and will be ignored.
* @deprecated Logging is configured via java.util.logging.
*/
@Deprecated
LOGGER_FILE(
"loggerFile",
null,
"File name output of the Logger"),
/**
* This property is no longer used by the driver and will be ignored.
* @deprecated Logging is configured via java.util.logging.
*/
@Deprecated
LOGGER_LEVEL(
"loggerLevel",
null,
"Logger level of the driver",
false,
new String[]{"OFF", "DEBUG", "TRACE"}),
/**
* Specify how long to wait for establishment of a database connection. The timeout is specified
* in seconds.
*/
LOGIN_TIMEOUT(
"loginTimeout",
"0",
"Specify how long in seconds to wait for establishment of a database connection."),
/**
* Whether to include full server error detail in exception messages.
*/
LOG_SERVER_ERROR_DETAIL(
"logServerErrorDetail",
"true",
"Include full server error detail in exception messages. If disabled then only the error itself will be included."),
/**
* When connections that are not explicitly closed are garbage collected, log the stacktrace from
* the opening of the connection to trace the leak source.
*/
LOG_UNCLOSED_CONNECTIONS(
"logUnclosedConnections",
"false",
"When connections that are not explicitly closed are garbage collected, log the stacktrace from the opening of the connection to trace the leak source"),
/**
* Specifies size of buffer during fetching result set. Can be specified as specified size or
* percent of heap memory.
*/
MAX_RESULT_BUFFER(
"maxResultBuffer",
null,
"Specifies size of buffer during fetching result set. Can be specified as specified size or percent of heap memory."),
/**
* Maximum number of bytes buffered before sending to the backend, default is 8192.
*/
MAX_SEND_BUFFER_SIZE(
"maxSendBufferSize",
"8192",
"Maximum amount of bytes buffered before sending to the backend"),
/**
* Specify 'options' connection initialization parameter.
* The value of this parameter may contain spaces and other special characters or their URL representation.
*/
OPTIONS(
"options",
null,
"Specify 'options' connection initialization parameter."),
/**
* Password to use when authenticating.
*/
PASSWORD(
"password",
null,
"Password to use when authenticating.",
false),
/**
* Database name to connect to (may be specified directly in the JDBC URL).
*/
PG_DBNAME(
"PGDBNAME",
null,
"Database name to connect to (may be specified directly in the JDBC URL)",
true),
/**
* Hostname of the PostgreSQL server (may be specified directly in the JDBC URL).
*/
PG_HOST(
"PGHOST",
"localhost",
"Hostname of the PostgreSQL server (may be specified directly in the JDBC URL)",
false),
/**
* Port of the PostgreSQL server (may be specified directly in the JDBC URL).
*/
PG_PORT(
"PGPORT",
"5432",
"Port of the PostgreSQL server (may be specified directly in the JDBC URL)"),
/**
* Specifies which mode is used to execute queries to database: simple means ('Q' execute, no parse, no bind, text mode only),
* extended means always use bind/execute messages, extendedForPrepared means extended for prepared statements only,
* extendedCacheEverything means use extended protocol and try cache every statement (including Statement.execute(String sql)) in a query cache.
*
* This mode is meant for debugging purposes and/or for cases when extended protocol cannot be used (e.g. logical replication protocol)
*/
PREFER_QUERY_MODE(
"preferQueryMode",
"extended",
"Specifies which mode is used to execute queries to database: simple means ('Q' execute, no parse, no bind, text mode only), "
+ "extended means always use bind/execute messages, extendedForPrepared means extended for prepared statements only, "
+ "extendedCacheEverything means use extended protocol and try cache every statement (including Statement.execute(String sql)) in a query cache.", false,
new String[]{"extended", "extendedForPrepared", "extendedCacheEverything", "simple"}),
/**
* Specifies the maximum number of entries in cache of prepared statements. A value of {@code 0}
* disables the cache.
*/
PREPARED_STATEMENT_CACHE_QUERIES(
"preparedStatementCacheQueries",
"256",
"Specifies the maximum number of entries in per-connection cache of prepared statements. A value of {@code 0} disables the cache."),
/**
* Specifies the maximum size (in megabytes) of the prepared statement cache. A value of {@code 0}
* disables the cache.
*/
PREPARED_STATEMENT_CACHE_SIZE_MIB(
"preparedStatementCacheSizeMiB",
"5",
"Specifies the maximum size (in megabytes) of a per-connection prepared statement cache. A value of {@code 0} disables the cache."),
/**
* Sets the default threshold for enabling server-side prepare. A value of {@code -1} stands for
* forceBinary
*/
PREPARE_THRESHOLD(
"prepareThreshold",
"5",
"Statement prepare threshold. A value of {@code -1} stands for forceBinary"),
/**
* Force use of a particular protocol version when connecting, if set, disables protocol version
* fallback.
*/
PROTOCOL_VERSION(
"protocolVersion",
"3",
"Force use of a particular protocol version when connecting, currently only version 3 is supported.",
false,
new String[]{"3"}),
/**
* Quote returning columns.
* There are some ORMs that quote everything, including returning columns.
* If we quote them, then we end up sending ""colname"" to the backend
* which will not be found
*/
QUOTE_RETURNING_IDENTIFIERS(
"quoteReturningIdentifiers",
"true",
"Quote identifiers provided in returning array",
false),
/**
* Puts this connection in read-only mode.
*/
READ_ONLY(
"readOnly",
"false",
"Puts this connection in read-only mode"),
/**
* Connection parameter to control behavior when
* {@link Connection#setReadOnly(boolean)} is set to {@code true}.
*/
READ_ONLY_MODE(
"readOnlyMode",
"transaction",
"Controls the behavior when a connection is set to be read only, one of 'ignore', 'transaction', or 'always' "
+ "When 'ignore', setting readOnly has no effect. "
+ "When 'transaction' setting readOnly to 'true' will cause transactions to BEGIN READ ONLY if autocommit is 'false'. "
+ "When 'always' setting readOnly to 'true' will set the session to READ ONLY if autoCommit is 'true' "
+ "and the transaction to BEGIN READ ONLY if autocommit is 'false'.",
false,
new String[]{"ignore", "transaction", "always"}),
/**
* Socket read buffer size (SO_RECVBUF). A value of {@code -1}, which is the default, means system
* default.
*/
RECEIVE_BUFFER_SIZE(
"receiveBufferSize",
"-1",
"Socket read buffer size"),
/**
* Connection parameter passed in the startup message. This parameter accepts two values; "true"
* and "database". Passing "true" tells the backend to go into walsender mode, wherein a small set
* of replication commands can be issued instead of SQL statements. Only the simple query protocol
* can be used in walsender mode. Passing "database" as the value instructs walsender to connect
* to the database specified in the dbname parameter, which will allow the connection to be used
* for logical replication from that database.
*
* Parameter should be used together with {@link PGProperty#ASSUME_MIN_SERVER_VERSION} with
* parameter >= 9.4 (backend >= 9.4)
*/
REPLICATION(
"replication",
null,
"Connection parameter passed in startup message, one of 'true' or 'database' "
+ "Passing 'true' tells the backend to go into walsender mode, "
+ "wherein a small set of replication commands can be issued instead of SQL statements. "
+ "Only the simple query protocol can be used in walsender mode. "
+ "Passing 'database' as the value instructs walsender to connect "
+ "to the database specified in the dbname parameter, "
+ "which will allow the connection to be used for logical replication "
+ "from that database. "
+ "(backend >= 9.4)"),
/**
* Configure optimization to enable batch insert re-writing.
*/
REWRITE_BATCHED_INSERTS(
"reWriteBatchedInserts",
"false",
"Enable optimization to rewrite and collapse compatible INSERT statements that are batched."),
/**
* Socket write buffer size (SO_SNDBUF). A value of {@code -1}, which is the default, means system
* default.
*/
SEND_BUFFER_SIZE(
"sendBufferSize",
"-1",
"Socket write buffer size"),
/**
* Service name to use for additional parameters. It specifies a service name in
* "pg_service.conf" that holds additional connection parameters. This allows applications to specify only
* a service name so connection parameters can be centrally maintained.
*/
SERVICE(
"service",
null,
"Service name to be searched in pg_service.conf resource"),
/**
* Socket factory used to create socket. A null value, which is the default, means system default.
*/
SOCKET_FACTORY(
"socketFactory",
null,
"Specify a socket factory for socket creation"),
/**
* The String argument to give to the constructor of the Socket Factory.
*/
SOCKET_FACTORY_ARG(
"socketFactoryArg",
null,
"Argument forwarded to constructor of SocketFactory class."),
/**
* The timeout value used for socket read operations. If reading from the server takes longer than
* this value, the connection is closed. This can be used as both a brute force global query
* timeout and a method of detecting network problems. The timeout is specified in seconds and a
* value of zero means that it is disabled.
*/
SOCKET_TIMEOUT(
"socketTimeout",
"0",
"The timeout value in seconds max(2147484) used for socket read operations."),
/**
* Control use of SSL: empty or {@code true} values imply {@code sslmode==verify-full}
*/
SSL(
"ssl",
null,
"Control use of SSL (any non-null value causes SSL to be required)"),
/**
* File containing the SSL Certificate. Default will be the file {@code postgresql.crt} in {@code
* $HOME/.postgresql} (*nix) or {@code %APPDATA%\postgresql} (windows).
*/
SSL_CERT(
"sslcert",
null,
"The location of the client's SSL certificate"),
/**
* Classname of the SSL Factory to use (instance of {@link javax.net.ssl.SSLSocketFactory}).
*/
SSL_FACTORY(
"sslfactory",
"org.postgresql.ssl.LibPQFactory",
"Provide a SSLSocketFactory class when using SSL."),
/**
* The String argument to give to the constructor of the SSL Factory.
*/
SSL_FACTORY_ARG(
"sslfactoryarg",
null,
"Argument forwarded to constructor of SSLSocketFactory class."),
/**
* Classname of the SSL HostnameVerifier to use (instance of {@link javax.net.ssl.HostnameVerifier}).
*/
SSL_HOSTNAME_VERIFIER(
"sslhostnameverifier",
null,
"A class, implementing javax.net.ssl.HostnameVerifier that can verify the server"),
/**
* File containing the SSL Key. Default will be the file {@code postgresql.pk8} in {@code $HOME/.postgresql} (*nix)
* or {@code %APPDATA%\postgresql} (windows).
*/
SSL_KEY(
"sslkey",
null,
"The location of the client's PKCS#8 SSL key"),
/**
* Parameter governing the use of SSL. The allowed values are {@code disable}, {@code allow},
* {@code prefer}, {@code require}, {@code verify-ca}, {@code verify-full}.
* If {@code ssl} property is empty or set to {@code true} it implies {@code verify-full}.
* Default mode is "require"
*/
SSL_MODE(
"sslmode",
null,
"Parameter governing the use of SSL",
false,
new String[]{"disable", "allow", "prefer", "require", "verify-ca", "verify-full"}),
/**
* Normally a GSS connection is attempted first. If this is set to {@code direct}
* then the GSS connection attempt will not be made
*/
SSL_NEGOTIATION(
"sslNegotiation",
"postgres",
"This option controls whether the driver will perform its protocol\n"
+ "negotiation to request encryption from the server or will just\n"
+ "directly make a standard SSL connection. Traditional PostgreSQL\n"
+ "protocol negotiation is the default and the most flexible with\n"
+ "different server configurations. If the server is known to support\n"
+ "direct SSL connections then the latter requires one\n"
+ "fewer round trip reducing connection latency and also allows the use\n"
+ "of protocol agnostic SSL network tools.",
false,
new String[]{"postgres", "direct"}),
/**
* The SSL password to use in the default CallbackHandler.
*/
SSL_PASSWORD(
"sslpassword",
null,
"The password for the client's ssl key (ignored if sslpasswordcallback is set)"),
/**
* The classname instantiating {@link javax.security.auth.callback.CallbackHandler} to use.
*/
SSL_PASSWORD_CALLBACK(
"sslpasswordcallback",
null,
"A class, implementing javax.security.auth.callback.CallbackHandler that can handle PasswordCallback for the ssl password."),
/**
* After requesting an upgrade to SSL from the server, there are reports of the server not responding due to a failover;
* without a timeout here, the client can wait forever. This timeout will be set before the request and reset after.
*/
SSL_RESPONSE_TIMEOUT(
"sslResponseTimeout",
"5000",
"Time in milliseconds we wait for a response from the server after requesting SSL upgrade"),
/**
* File containing the root certificate when validating server ({@code sslmode} = {@code
* verify-ca} or {@code verify-full}). Default will be the file {@code root.crt} in {@code
* $HOME/.postgresql} (*nix) or {@code %APPDATA%\postgresql} (windows).
*/
SSL_ROOT_CERT(
"sslrootcert",
null,
"The location of the root certificate for authenticating the server."),
/**
* Specifies the name of the SSPI service class that forms the service class part of the SPN. The
* default, {@code POSTGRES}, is almost always correct.
*/
SSPI_SERVICE_CLASS(
"sspiServiceClass",
"POSTGRES",
"The Windows SSPI service class for SPN"),
/**
* Bind String to either {@code unspecified} or {@code varchar}. Default is {@code varchar} for
* 8.0+ backends.
*/
STRING_TYPE(
"stringtype",
null,
"The type to bind String parameters as (usually 'varchar', 'unspecified' allows implicit casting to other types)",
false,
new String[]{"unspecified", "varchar"}),
TARGET_SERVER_TYPE(
"targetServerType",
"any",
"Specifies what kind of server to connect",
false,
new String []{"any", "primary", "master", "slave", "secondary", "preferSlave", "preferSecondary", "preferPrimary"}),
/**
* Enable or disable TCP keep-alive. The default is {@code false}.
*/
TCP_KEEP_ALIVE(
"tcpKeepAlive",
"false",
"Enable or disable TCP keep-alive. The default is {@code false}."),
TCP_NO_DELAY(
"tcpNoDelay",
"true",
"Enable or disable TCP no delay. The default is (@code true}."
),
/**
* Specifies the length to return for types of unknown length.
*/
UNKNOWN_LENGTH(
"unknownLength",
Integer.toString(Integer.MAX_VALUE),
"Specifies the length to return for types of unknown length"),
/**
* Username to connect to the database as.
*/
USER(
"user",
null,
"Username to connect to the database as.",
true),
/**
* Use SPNEGO in SSPI authentication requests.
*/
USE_SPNEGO(
"useSpnego",
"false",
"Use SPNEGO in SSPI authentication requests"),
/**
* Factory class to instantiate factories for XML processing.
* The default factory disables external entity processing.
* Legacy behavior with external entity processing can be enabled by specifying a value of LEGACY_INSECURE.
* Or specify a custom class that implements {@link org.postgresql.xml.PGXmlFactoryFactory}.
*/
XML_FACTORY_FACTORY(
"xmlFactoryFactory",
"",
"Factory class to instantiate factories for XML processing"),
;
private final String name;
private final /* @Nullable */ String defaultValue;
private final boolean required;
private final String description;
@SuppressWarnings("ImmutableEnumChecker")
private final String /* @Nullable */ [] choices;
PGProperty(String name, /* @Nullable */ String defaultValue, String description) {
this(name, defaultValue, description, false);
}
PGProperty(String name, /* @Nullable */ String defaultValue, String description, boolean required) {
this(name, defaultValue, description, required, (String[]) null);
}
PGProperty(String name, /* @Nullable */ String defaultValue, String description, boolean required,
String /* @Nullable */ [] choices) {
this.name = name;
this.defaultValue = defaultValue;
this.required = required;
this.description = description;
this.choices = choices;
}
private static final Map<String, PGProperty> PROPS_BY_NAME = new HashMap<>();
static {
for (PGProperty prop : PGProperty.values()) {
if (PROPS_BY_NAME.put(prop.getName(), prop) != null) {
throw new IllegalStateException("Duplicate PGProperty name: " + prop.getName());
}
}
}
/**
* Returns the name of the connection parameter. The name is the key that must be used in JDBC URL
* or in Driver properties
*
* @return the name of the connection parameter
*/
public String getName() {
return name;
}
/**
* Returns the default value for this connection parameter.
*
* @return the default value for this connection parameter or null
*/
public /* @Nullable */ String getDefaultValue() {
return defaultValue;
}
/**
* Returns whether this parameter is required.
*
* @return whether this parameter is required
*/
public boolean isRequired() {
return required;
}
/**
* Returns the description for this connection parameter.
*
* @return the description for this connection parameter
*/
public String getDescription() {
return description;
}
/**
* Returns the available values for this connection parameter.
*
* @return the available values for this connection parameter or null
*/
public String /* @Nullable */ [] getChoices() {
return choices;
}
/**
* Returns the value of the connection parameter from the given {@link Properties} or the
* default value.
*
* @param properties properties to take actual value from
* @return evaluated value for this connection parameter
*/
public /* @Nullable */ String getOrDefault(Properties properties) {
return properties.getProperty(name, defaultValue);
}
/**
* Returns the value of the connection parameter from the given {@link Properties} or the
* default value
* @param properties properties to take actual value from
* @return evaluated value for this connection parameter or null
* @deprecated use {@link #getOrDefault(Properties)} instead
*/
@Deprecated
public /* @Nullable */ String get(Properties properties) {
return getOrDefault(properties);
}
/**
* Returns the value of the connection parameter from the given {@link Properties} or null if there
* is no default value
* @param properties properties object to get value from
* @return evaluated value for this connection parameter
*/
public /* @Nullable */ String getOrNull(Properties properties) {
return properties.getProperty(name);
}
/**
* Set the value for this connection parameter in the given {@link Properties}.
*
* @param properties properties in which the value should be set
* @param value value for this connection parameter
*/
public void set(Properties properties, /* @Nullable */ String value) {
if (value == null) {
properties.remove(name);
} else {
properties.setProperty(name, value);
}
}
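// Connection-building sketch (illustrative; host, database and credentials are placeholders):
//
//   Properties props = new Properties();
//   PGProperty.USER.set(props, "test");
//   PGProperty.PASSWORD.set(props, "secret");
//   PGProperty.CONNECT_TIMEOUT.set(props, 5);
//   Connection conn = java.sql.DriverManager.getConnection(
//       "jdbc:postgresql://localhost:5432/test", props);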
/**
* Return the boolean value for this connection parameter in the given {@link Properties}.
*
* @param properties properties to take actual value from
* @return evaluated value for this connection parameter converted to boolean
*/
public boolean getBoolean(Properties properties) {
return Boolean.parseBoolean(getOrDefault(properties));
}
/**
* Return the int value for this connection parameter in the given {@link Properties}. Prefer the
* use of {@link #getInt(Properties)} anywhere you can throw an {@link java.sql.SQLException}.
*
* @param properties properties to take actual value from
* @return evaluated value for this connection parameter converted to int
* @throws NumberFormatException if it cannot be converted to int.
*/
@SuppressWarnings("nullness:argument")
public int getIntNoCheck(Properties properties) {
String value = getOrDefault(properties);
//noinspection ConstantConditions
return Integer.parseInt(value);
}
/**
* Return the int value for this connection parameter in the given {@link Properties}.
*
* @param properties properties to take actual value from
* @return evaluated value for this connection parameter converted to int
* @throws PSQLException if it cannot be converted to int.
*/
@SuppressWarnings("nullness:argument")
public int getInt(Properties properties) throws PSQLException {
String value = getOrDefault(properties);
try {
//noinspection ConstantConditions
return Integer.parseInt(value);
} catch (NumberFormatException nfe) {
throw new PSQLException(GT.tr("{0} parameter value must be an integer but was: {1}",
getName(), value), PSQLState.INVALID_PARAMETER_VALUE, nfe);
}
}
/**
* Return the {@link Integer} value for this connection parameter in the given {@link Properties}.
*
* @param properties properties to take actual value from
* @return evaluated value for this connection parameter converted to Integer or null
* @throws PSQLException if unable to parse property as integer
*/
public /* @Nullable */ Integer getInteger(Properties properties) throws PSQLException {
String value = getOrDefault(properties);
if (value == null) {
return null;
}
try {
return Integer.parseInt(value);
} catch (NumberFormatException nfe) {
throw new PSQLException(GT.tr("{0} parameter value must be an integer but was: {1}",
getName(), value), PSQLState.INVALID_PARAMETER_VALUE, nfe);
}
}
/**
* Set the boolean value for this connection parameter in the given {@link Properties}.
*
* @param properties properties in which the value should be set
* @param value boolean value for this connection parameter
*/
public void set(Properties properties, boolean value) {
properties.setProperty(name, Boolean.toString(value));
}
/**
* Set the int value for this connection parameter in the given {@link Properties}.
*
* @param properties properties in which the value should be set
* @param value int value for this connection parameter
*/
public void set(Properties properties, int value) {
properties.setProperty(name, Integer.toString(value));
}
/**
* Test whether this property is present in the given {@link Properties}.
*
* @param properties set of properties to check for the presence of this parameter
* @return true if the parameter is specified in the given properties
*/
public boolean isPresent(Properties properties) {
return getSetString(properties) != null;
}
/**
* Convert this connection parameter and the value read from the given {@link Properties} into a
* {@link DriverPropertyInfo}.
*
* @param properties properties to take actual value from
* @return a DriverPropertyInfo representing this connection parameter
*/
public DriverPropertyInfo toDriverPropertyInfo(Properties properties) {
DriverPropertyInfo propertyInfo = new DriverPropertyInfo(name, getOrDefault(properties));
propertyInfo.required = required;
propertyInfo.description = description;
propertyInfo.choices = choices;
return propertyInfo;
}
public static /* @Nullable */ PGProperty forName(String name) {
return PROPS_BY_NAME.get(name);
}
/**
* Return the value of this property only if it is explicitly set, never falling back to the
* default. This allows the caller to detect that the property was not provided.
*
* @param properties properties bundle
* @return the value of a set property
*/
public /* @Nullable */ String getSetString(Properties properties) {
Object o = properties.get(name);
if (o instanceof String) {
return (String) o;
}
return null;
}
}
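/*
 * Usage sketch (editor's addition, not part of the driver source): values are stored as strings
 * in a java.util.Properties object and parsed on read. PREPARE_THRESHOLD is used here only as an
 * example of a constant declared earlier in this enum.
 *
 *   Properties props = new Properties();
 *   PGProperty.PREPARE_THRESHOLD.set(props, 5);                      // stored as the string "5"
 *   int threshold = PGProperty.PREPARE_THRESHOLD.getInt(props);      // parsed back; PSQLException if malformed
 *   boolean present = PGProperty.PREPARE_THRESHOLD.isPresent(props); // true, because it was set above
 */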
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/PGRefCursorResultSet.java 0100664 0000000 0000000 00000001362 00000250600 026520 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2003, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.sql.ResultSet;
/**
* A ref cursor based result set.
* Note: as of 8.0, this interface is only present for backwards-compatibility purposes. New
* code should call {@link ResultSet#getString} to obtain the underlying cursor name.
*/
public interface PGRefCursorResultSet {
/**
* @return the name of the cursor.
* @deprecated As of 8.0, replaced with calling getString() on the ResultSet that this ResultSet
* was obtained from.
*/
@Deprecated
/* @Nullable */ String getRefCursor();
}
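/*
 * Usage sketch (editor's addition): as the note above suggests, new code should read the refcursor
 * column as a plain string instead of using this deprecated interface; the string value is the
 * cursor name.
 *
 *   String cursorName = resultSet.getString(1);
 *   // deprecated equivalent:
 *   // String legacy = ((PGRefCursorResultSet) resultSet).getRefCursor();
 */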
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/PGResultSetMetaData.java 0100664 0000000 0000000 00000003126 00000250600 026266 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2003, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql;
import org.postgresql.core.Field;
import java.sql.SQLException;
public interface PGResultSetMetaData {
/**
* Returns the underlying column name of a query result, or "" if it is unable to be determined.
*
* @param column column position (1-based)
* @return underlying column name of a query result
* @throws SQLException if something wrong happens
* @since 8.0
*/
String getBaseColumnName(int column) throws SQLException;
/**
* Returns the underlying table name of query result, or "" if it is unable to be determined.
*
* @param column column position (1-based)
* @return underlying table name of query result
* @throws SQLException if something wrong happens
* @since 8.0
*/
String getBaseTableName(int column) throws SQLException;
/**
* Returns the underlying schema name of query result, or "" if it is unable to be determined.
*
* @param column column position (1-based)
* @return underlying schema name of query result
* @throws SQLException if something wrong happens
* @since 8.0
*/
String getBaseSchemaName(int column) throws SQLException;
/**
* Is a column Text or Binary?
*
* @param column column position (1-based)
* @return 0 if column data format is TEXT, or 1 if BINARY
* @throws SQLException if something wrong happens
* @see Field#BINARY_FORMAT
* @see Field#TEXT_FORMAT
* @since 9.4
*/
int getFormat(int column) throws SQLException;
}
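/*
 * Usage sketch (editor's addition): the PostgreSQL-specific metadata is reached by unwrapping (or
 * casting) the standard ResultSetMetaData obtained from a PgJDBC result set.
 *
 *   ResultSetMetaData md = resultSet.getMetaData();
 *   PGResultSetMetaData pgMd = md.unwrap(PGResultSetMetaData.class);
 *   String table = pgMd.getBaseTableName(1);   // "" if it cannot be determined
 *   int format = pgMd.getFormat(1);            // 0 = TEXT, 1 = BINARY
 */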
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/PGStatement.java 0100664 0000000 0000000 00000006442 00000250600 024703 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2003, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql;
import java.sql.SQLException;
/**
* This interface defines the public PostgreSQL extensions to java.sql.Statement. All Statements
* constructed by the PostgreSQL driver implement PGStatement.
*/
public interface PGStatement {
// We can't use Long.MAX_VALUE or Long.MIN_VALUE for java.sql.date
// because this would break the 'normalization contract' of the
// java.sql.Date API.
// The following values are the nearest MAX/MIN values with hour,
// minute, second, millisecond set to 0 - this is used for
// -infinity / infinity representation in Java
long DATE_POSITIVE_INFINITY = 9223372036825200000L;
long DATE_NEGATIVE_INFINITY = -9223372036832400000L;
long DATE_POSITIVE_SMALLER_INFINITY = 185543533774800000L;
long DATE_NEGATIVE_SMALLER_INFINITY = -185543533774800000L;
/**
* Returns the last inserted/updated OID.
*
* @return OID of last insert
* @throws SQLException if something goes wrong
* @since 7.3
*/
long getLastOID() throws SQLException;
/**
* Turn on the use of prepared statements in the server (server side prepared statements are
* unrelated to jdbc PreparedStatements). As of build 302, this method is equivalent to
* {@code setPrepareThreshold(1)}.
*
* @param flag use server prepare
* @throws SQLException if something goes wrong
* @since 7.3
* @deprecated As of build 302, replaced by {@link #setPrepareThreshold(int)}
*/
@Deprecated
void setUseServerPrepare(boolean flag) throws SQLException;
/**
* Checks if this statement will be executed as a server-prepared statement. A return value of
* {@code true} indicates that the next execution of the statement will be done as a
* server-prepared statement, assuming the underlying protocol supports it.
*
* @return true if the next reuse of this statement will use a server-prepared statement
*/
boolean isUseServerPrepare();
/**
* Sets the reuse threshold for using server-prepared statements.
*
* If {@code threshold} is a non-zero value N, the Nth and subsequent reuses of a
* PreparedStatement will use server-side prepare.
*
* If {@code threshold} is zero, server-side prepare will not be used.
*
* The reuse threshold is only used by PreparedStatement and CallableStatement objects; it is
* ignored for plain Statements.
*
* @param threshold the new threshold for this statement
* @throws SQLException if an exception occurs while changing the threshold
* @since build 302
*/
void setPrepareThreshold(int threshold) throws SQLException;
/**
* Gets the server-side prepare reuse threshold in use for this statement.
*
* @return the current threshold
* @see #setPrepareThreshold(int)
* @since build 302
*/
int getPrepareThreshold();
/**
* Turn on/off adaptive fetch for this statement. Existing result sets won't be affected by
* changing this setting.
*
* @param adaptiveFetch desired state of adaptive fetch.
*/
void setAdaptiveFetch(boolean adaptiveFetch);
/**
* Get state of adaptive fetch for statement.
*
* @return state of adaptive fetch (turned on or off)
*/
boolean getAdaptiveFetch();
}
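/*
 * Usage sketch (editor's addition, table name is a placeholder): the extensions are reached by
 * unwrapping (or casting) a statement created by a PgJDBC connection.
 *
 *   PreparedStatement ps = connection.prepareStatement("SELECT * FROM mytable WHERE id = ?");
 *   PGStatement pgStatement = ps.unwrap(PGStatement.class);
 *   pgStatement.setPrepareThreshold(3);   // server-side prepare from the 3rd execution onwards
 *   boolean serverPrepared = pgStatement.isUseServerPrepare();
 */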
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/package-info.java 0100664 0000000 0000000 00000001126 00000250600 025026 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2020, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
/* @DefaultQualifier(value = NonNull.class, locations = TypeUseLocation.FIELD) */
/* @DefaultQualifier(value = NonNull.class, locations = TypeUseLocation.PARAMETER) */
/* @DefaultQualifier(value = NonNull.class, locations = TypeUseLocation.RETURN) */
package org.postgresql;
// import org.checkerframework.checker.nullness.qual.NonNull;
// import org.checkerframework.framework.qual.DefaultQualifier;
// import org.checkerframework.framework.qual.TypeUseLocation;
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/copy/ 0040775 0000000 0000000 00000000000 00000250600 022614 5 ustar 00 0000000 0000000 postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/copy/CopyDual.java 0100664 0000000 0000000 00000000556 00000250600 025202 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2016, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.copy;
/**
* Bidirectional COPY via the copy stream protocol. PostgreSQL replication works over this
* bidirectional copy protocol.
*
* @see CopyIn
* @see CopyOut
*/
public interface CopyDual extends CopyIn, CopyOut {
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/copy/CopyIn.java 0100664 0000000 0000000 00000003235 00000250600 024660 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2009, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.copy;
import org.postgresql.util.ByteStreamWriter;
import java.sql.SQLException;
/**
* Copy bulk data from client into a PostgreSQL table very fast.
*/
public interface CopyIn extends CopyOperation {
/**
* Writes specified part of given byte array to an open and writable copy operation.
*
* @param buf array of bytes to write
* @param off offset of first byte to write (normally zero)
* @param siz number of bytes to write (normally buf.length)
* @throws SQLException if the operation fails
*/
void writeToCopy(byte[] buf, int off, int siz) throws SQLException;
/**
* Writes a ByteStreamWriter to an open and writable copy operation.
*
* @param from the source of bytes, e.g. a ByteBufferByteStreamWriter
* @throws SQLException if the operation fails
*/
void writeToCopy(ByteStreamWriter from) throws SQLException;
/**
* Force any buffered output to be sent over the network to the backend. In general this is a
* useless operation as it will get pushed over in due time or when endCopy is called. Some
* specific modified server versions (Truviso) want this data sooner. If you are unsure if you
* need to use this method, don't.
*
* @throws SQLException if the operation fails.
*/
void flushCopy() throws SQLException;
/**
* Finishes copy operation successfully.
*
* @return number of updated rows for server 8.2 or newer (see getHandledRowCount())
* @throws SQLException if the operation fails.
*/
long endCopy() throws SQLException;
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/copy/CopyManager.java 0100664 0000000 0000000 00000021575 00000250600 025673 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2009, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.copy;
import org.postgresql.core.BaseConnection;
import org.postgresql.core.Encoding;
import org.postgresql.core.QueryExecutor;
import org.postgresql.util.ByteStreamWriter;
import org.postgresql.util.GT;
import org.postgresql.util.PSQLException;
import org.postgresql.util.PSQLState;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.Reader;
import java.io.Writer;
import java.sql.SQLException;
/**
* API for PostgreSQL COPY bulk data transfer.
*/
public class CopyManager {
// I don't know what the best buffer size is, so we let people specify it if
// they want, and if they don't know, we don't make them guess, so that if we
// do figure it out we can just set it here and they reap the rewards.
// Note that this is currently being used for both a number of bytes and a number
// of characters.
static final int DEFAULT_BUFFER_SIZE = 65536;
private final Encoding encoding;
private final QueryExecutor queryExecutor;
private final BaseConnection connection;
public CopyManager(BaseConnection connection) throws SQLException {
this.encoding = connection.getEncoding();
this.queryExecutor = connection.getQueryExecutor();
this.connection = connection;
}
public CopyIn copyIn(String sql) throws SQLException {
CopyOperation op = queryExecutor.startCopy(sql, connection.getAutoCommit());
if (op == null || op instanceof CopyIn) {
return (CopyIn) op;
} else {
op.cancelCopy();
throw new PSQLException(GT.tr("Requested CopyIn but got {0}", op.getClass().getName()),
PSQLState.WRONG_OBJECT_TYPE);
}
}
public CopyOut copyOut(String sql) throws SQLException {
CopyOperation op = queryExecutor.startCopy(sql, connection.getAutoCommit());
if (op == null || op instanceof CopyOut) {
return (CopyOut) op;
} else {
op.cancelCopy();
throw new PSQLException(GT.tr("Requested CopyOut but got {0}", op.getClass().getName()),
PSQLState.WRONG_OBJECT_TYPE);
}
}
public CopyDual copyDual(String sql) throws SQLException {
CopyOperation op = queryExecutor.startCopy(sql, connection.getAutoCommit());
if (op == null || op instanceof CopyDual) {
return (CopyDual) op;
} else {
op.cancelCopy();
throw new PSQLException(GT.tr("Requested CopyDual but got {0}", op.getClass().getName()),
PSQLState.WRONG_OBJECT_TYPE);
}
}
/**
* Pass results of a COPY TO STDOUT query from database into a Writer.
*
* @param sql COPY TO STDOUT statement
* @param to the Writer to write the results to (row by row).
* The Writer is not closed at the end of the Copy Out operation.
* @return number of rows updated for server 8.2 or newer; -1 for older
* @throws SQLException on database usage errors
* @throws IOException upon writer or database connection failure
*/
public long copyOut(final String sql, Writer to) throws SQLException, IOException {
byte[] buf;
CopyOut cp = copyOut(sql);
try {
while ((buf = cp.readFromCopy()) != null) {
to.write(encoding.decode(buf));
}
return cp.getHandledRowCount();
} catch (IOException ioEX) {
// if not handled this way the close call will hang, at least in 8.2
if (cp.isActive()) {
cp.cancelCopy();
}
try { // read until exhausted or operation cancelled SQLException
while ((buf = cp.readFromCopy()) != null) {
}
} catch (SQLException sqlEx) {
// typically after several kB
}
throw ioEX;
} finally { // see to it that we do not leave the connection locked
if (cp.isActive()) {
cp.cancelCopy();
}
}
}
/**
* Pass results of a COPY TO STDOUT query from database into an OutputStream.
*
* @param sql COPY TO STDOUT statement
* @param to the stream to write the results to (row by row)
* The stream is not closed at the end of the operation. This is intentional so the
* caller can continue to write to the output stream
* @return number of rows updated for server 8.2 or newer; -1 for older
* @throws SQLException on database usage errors
* @throws IOException upon output stream or database connection failure
*/
public long copyOut(final String sql, OutputStream to) throws SQLException, IOException {
byte[] buf;
CopyOut cp = copyOut(sql);
try {
while ((buf = cp.readFromCopy()) != null) {
to.write(buf);
}
return cp.getHandledRowCount();
} catch (IOException ioEX) {
// if not handled this way the close call will hang, at least in 8.2
if (cp.isActive()) {
cp.cancelCopy();
}
try { // read until exhausted or operation cancelled SQLException
while ((buf = cp.readFromCopy()) != null) {
}
} catch (SQLException sqlEx) {
// typically after several kB
}
throw ioEX;
} finally { // see to it that we do not leave the connection locked
if (cp.isActive()) {
cp.cancelCopy();
}
}
}
/**
* Use COPY FROM STDIN for very fast copying from a Reader into a database table.
*
* @param sql COPY FROM STDIN statement
* @param from a CSV file or such
* @return number of rows updated for server 8.2 or newer; -1 for older
* @throws SQLException on database usage issues
* @throws IOException upon reader or database connection failure
*/
public long copyIn(final String sql, Reader from) throws SQLException, IOException {
return copyIn(sql, from, DEFAULT_BUFFER_SIZE);
}
/**
* Use COPY FROM STDIN for very fast copying from a Reader into a database table.
*
* @param sql COPY FROM STDIN statement
* @param from a CSV file or such
* @param bufferSize number of characters to buffer and push over network to server at once
* @return number of rows updated for server 8.2 or newer; -1 for older
* @throws SQLException on database usage issues
* @throws IOException upon reader or database connection failure
*/
public long copyIn(final String sql, Reader from, int bufferSize)
throws SQLException, IOException {
char[] cbuf = new char[bufferSize];
int len;
CopyIn cp = copyIn(sql);
try {
while ((len = from.read(cbuf)) >= 0) {
if (len > 0) {
byte[] buf = encoding.encode(new String(cbuf, 0, len));
cp.writeToCopy(buf, 0, buf.length);
}
}
return cp.endCopy();
} finally { // see to it that we do not leave the connection locked
if (cp.isActive()) {
cp.cancelCopy();
}
}
}
/**
* Use COPY FROM STDIN for very fast copying from an InputStream into a database table.
*
* @param sql COPY FROM STDIN statement
* @param from a CSV file or such
* @return number of rows updated for server 8.2 or newer; -1 for older
* @throws SQLException on database usage issues
* @throws IOException upon input stream or database connection failure
*/
public long copyIn(final String sql, InputStream from) throws SQLException, IOException {
return copyIn(sql, from, DEFAULT_BUFFER_SIZE);
}
/**
* Use COPY FROM STDIN for very fast copying from an InputStream into a database table.
*
* @param sql COPY FROM STDIN statement
* @param from a CSV file or such
* @param bufferSize number of bytes to buffer and push over network to server at once
* @return number of rows updated for server 8.2 or newer; -1 for older
* @throws SQLException on database usage issues
* @throws IOException upon input stream or database connection failure
*/
public long copyIn(final String sql, InputStream from, int bufferSize)
throws SQLException, IOException {
byte[] buf = new byte[bufferSize];
int len;
CopyIn cp = copyIn(sql);
try {
while ((len = from.read(buf)) >= 0) {
if (len > 0) {
cp.writeToCopy(buf, 0, len);
}
}
return cp.endCopy();
} finally { // see to it that we do not leave the connection locked
if (cp.isActive()) {
cp.cancelCopy();
}
}
}
/**
* Use COPY FROM STDIN for very fast copying from an ByteStreamWriter into a database table.
*
* @param sql COPY FROM STDIN statement
* @param from the source of bytes, e.g. a ByteBufferByteStreamWriter
* @return number of rows updated for server 8.2 or newer; -1 for older
* @throws SQLException on database usage issues
* @throws IOException upon input stream or database connection failure
*/
public long copyIn(String sql, ByteStreamWriter from)
throws SQLException, IOException {
CopyIn cp = copyIn(sql);
try {
cp.writeToCopy(from);
return cp.endCopy();
} finally { // see to it that we do not leave the connection locked
if (cp.isActive()) {
cp.cancelCopy();
}
}
}
}
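/*
 * Usage sketch (editor's addition; table and file names are placeholders): bulk-load a CSV file
 * with COPY FROM STDIN and dump a table with COPY TO STDOUT.
 *
 *   CopyManager cm = connection.unwrap(PGConnection.class).getCopyAPI();
 *   try (Reader in = new FileReader("data.csv")) {
 *     long loaded = cm.copyIn("COPY mytable FROM STDIN WITH (FORMAT csv)", in);
 *   }
 *   try (Writer out = new FileWriter("dump.csv")) {
 *     long dumped = cm.copyOut("COPY mytable TO STDOUT WITH (FORMAT csv)", out);
 *   }
 */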
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/copy/CopyOperation.java 0100664 0000000 0000000 00000002407 00000250600 026252 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2009, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.copy;
import java.sql.SQLException;
/**
* Exchange bulk data between client and PostgreSQL database tables. See CopyIn and CopyOut for full
* interfaces for corresponding copy directions.
*/
public interface CopyOperation {
/**
* @return number of fields in each row for this operation
*/
int getFieldCount();
/**
* @return overall format of each row: 0 = textual, 1 = binary
*/
int getFormat();
/**
* @param field number of field (0..fieldCount()-1)
* @return format of requested field: 0 = textual, 1 = binary
*/
int getFieldFormat(int field);
/**
* @return is connection reserved for this Copy operation?
*/
boolean isActive();
/**
* Cancels this copy operation, discarding any exchanged data.
*
* @throws SQLException if cancelling fails
*/
void cancelCopy() throws SQLException;
/**
* After successful end of copy, returns the number of database records handled in that operation.
* Only implemented in PostgreSQL server version 8.2 and up. Otherwise, returns -1.
*
* @return number of handled rows or -1
*/
long getHandledRowCount();
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/copy/CopyOut.java 0100664 0000000 0000000 00000002226 00000250600 025060 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2009, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.copy;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.sql.SQLException;
public interface CopyOut extends CopyOperation {
/**
* Blocks, waiting for a row of data to be received from the server on an active copy operation.
*
* @return byte array received from the server, or null if the server has completed the copy operation
* @throws SQLException if something goes wrong, for example a socket timeout
*/
byte /* @Nullable */ [] readFromCopy() throws SQLException;
/**
* Wait for a row of data to be received from the server on an active copy operation.
*
* @param block {@code true} to block until data arrives from the server; {@code false} to only
*        read a message that is already pending
* @return byte array received from the server, or null in non-blocking mode when no message is
*        pending
* @throws SQLException if something goes wrong, for example a socket timeout
*/
byte /* @Nullable */ [] readFromCopy(boolean block) throws SQLException;
}
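/*
 * Usage sketch (editor's addition): a typical read loop over an active copy-out operation; a null
 * return from readFromCopy() signals that the server has finished the COPY. process(...) stands in
 * for caller-supplied handling.
 *
 *   byte[] chunk;
 *   while ((chunk = copyOut.readFromCopy()) != null) {
 *     process(chunk);
 *   }
 *   long rows = copyOut.getHandledRowCount();   // -1 on servers older than 8.2
 */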
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/copy/PGCopyInputStream.java 0100664 0000000 0000000 00000010501 00000250600 027006 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2009, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.copy;
import static org.postgresql.util.internal.Nullness.castNonNull;
import org.postgresql.PGConnection;
import org.postgresql.util.GT;
import org.postgresql.util.PSQLException;
import org.postgresql.util.PSQLState;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.io.IOException;
import java.io.InputStream;
import java.sql.SQLException;
import java.util.Arrays;
/**
* InputStream for reading from a PostgreSQL COPY TO STDOUT operation.
*/
public class PGCopyInputStream extends InputStream implements CopyOut {
private /* @Nullable */ CopyOut op;
private byte /* @Nullable */ [] buf;
private int at;
private int len;
/**
* Uses given connection for specified COPY TO STDOUT operation.
*
* @param connection database connection to use for copying (protocol version 3 required)
* @param sql COPY TO STDOUT statement
* @throws SQLException if initializing the operation fails
*/
public PGCopyInputStream(PGConnection connection, String sql) throws SQLException {
this(connection.getCopyAPI().copyOut(sql));
}
/**
* Use given CopyOut operation for reading.
*
* @param op COPY TO STDOUT operation
*/
public PGCopyInputStream(CopyOut op) {
this.op = op;
}
private CopyOut getOp() {
return castNonNull(op);
}
private byte /* @Nullable */ [] fillBuffer() throws IOException {
if (at >= len) {
try {
buf = getOp().readFromCopy();
} catch (SQLException sqle) {
throw new IOException(GT.tr("Copying from database failed: {0}", sqle.getMessage()), sqle);
}
if (buf == null) {
at = -1;
} else {
at = 0;
len = buf.length;
}
}
return buf;
}
private void checkClosed() throws IOException {
if (op == null) {
throw new IOException(GT.tr("This copy stream is closed."));
}
}
@Override
public int available() throws IOException {
checkClosed();
return buf != null ? len - at : 0;
}
@Override
public int read() throws IOException {
checkClosed();
byte[] buf = fillBuffer();
return buf != null ? (buf[at++] & 0xFF) : -1;
}
@Override
public int read(byte[] buf) throws IOException {
return read(buf, 0, buf.length);
}
@Override
public int read(byte[] buf, int off, int siz) throws IOException {
checkClosed();
int got = 0;
byte[] data = fillBuffer();
for (; got < siz && data != null; data = fillBuffer()) {
int length = Math.min(siz - got, len - at);
System.arraycopy(data, at, buf, off + got, length);
at += length;
got += length;
}
return got == 0 && data == null ? -1 : got;
}
@Override
public byte /* @Nullable */ [] readFromCopy() throws SQLException {
byte[] result = null;
try {
byte[] buf = fillBuffer();
if (buf != null) {
if (at > 0 || len < buf.length) {
result = Arrays.copyOfRange(buf, at, len);
} else {
result = buf;
}
// Mark the buffer as fully read
at = len;
}
} catch (IOException ioe) {
throw new PSQLException(GT.tr("Read from copy failed."), PSQLState.CONNECTION_FAILURE, ioe);
}
return result;
}
@Override
public byte /* @Nullable */ [] readFromCopy(boolean block) throws SQLException {
return readFromCopy();
}
@Override
public void close() throws IOException {
// Don't complain about a double close.
CopyOut op = this.op;
if (op == null) {
return;
}
if (op.isActive()) {
try {
op.cancelCopy();
} catch (SQLException se) {
throw new IOException("Failed to close copy reader.", se);
}
}
this.op = null;
}
@Override
public void cancelCopy() throws SQLException {
getOp().cancelCopy();
}
@Override
public int getFormat() {
return getOp().getFormat();
}
@Override
public int getFieldFormat(int field) {
return getOp().getFieldFormat(field);
}
@Override
public int getFieldCount() {
return getOp().getFieldCount();
}
@Override
public boolean isActive() {
return op != null && op.isActive();
}
@Override
public long getHandledRowCount() {
return getOp().getHandledRowCount();
}
}
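/*
 * Usage sketch (editor's addition; SQL and file name are placeholders): stream a COPY TO STDOUT
 * result directly into a file through the InputStream view.
 *
 *   try (InputStream in = new PGCopyInputStream(pgConnection, "COPY mytable TO STDOUT");
 *        OutputStream out = new FileOutputStream("mytable.copy")) {
 *     byte[] buf = new byte[8192];
 *     int n;
 *     while ((n = in.read(buf)) != -1) {
 *       out.write(buf, 0, n);
 *     }
 *   }
 */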
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/copy/PGCopyOutputStream.java 0100664 0000000 0000000 00000012315 00000250600 027214 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2009, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.copy;
import static org.postgresql.util.internal.Nullness.castNonNull;
import org.postgresql.PGConnection;
import org.postgresql.util.ByteStreamWriter;
import org.postgresql.util.GT;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.io.IOException;
import java.io.OutputStream;
import java.sql.SQLException;
/**
* OutputStream for buffered input into a PostgreSQL COPY FROM STDIN operation.
*/
public class PGCopyOutputStream extends OutputStream implements CopyIn {
private /* @Nullable */ CopyIn op;
private final byte[] copyBuffer;
private final byte[] singleByteBuffer = new byte[1];
private int at;
/**
* Uses given connection for specified COPY FROM STDIN operation.
*
* @param connection database connection to use for copying (protocol version 3 required)
* @param sql COPY FROM STDIN statement
* @throws SQLException if initializing the operation fails
*/
public PGCopyOutputStream(PGConnection connection, String sql) throws SQLException {
this(connection, sql, CopyManager.DEFAULT_BUFFER_SIZE);
}
/**
* Uses given connection for specified COPY FROM STDIN operation.
*
* @param connection database connection to use for copying (protocol version 3 required)
* @param sql COPY FROM STDIN statement
* @param bufferSize try to send this many bytes at a time
* @throws SQLException if initializing the operation fails
*/
public PGCopyOutputStream(PGConnection connection, String sql, int bufferSize)
throws SQLException {
this(connection.getCopyAPI().copyIn(sql), bufferSize);
}
/**
* Use given CopyIn operation for writing.
*
* @param op COPY FROM STDIN operation
*/
public PGCopyOutputStream(CopyIn op) {
this(op, CopyManager.DEFAULT_BUFFER_SIZE);
}
/**
* Use given CopyIn operation for writing.
*
* @param op COPY FROM STDIN operation
* @param bufferSize try to send this many bytes at a time
*/
public PGCopyOutputStream(CopyIn op, int bufferSize) {
this.op = op;
copyBuffer = new byte[bufferSize];
}
private CopyIn getOp() {
return castNonNull(op);
}
@Override
public void write(int b) throws IOException {
checkClosed();
if (b < 0 || b > 255) {
throw new IOException(GT.tr("Cannot write to copy a byte of value {0}", b));
}
singleByteBuffer[0] = (byte) b;
write(singleByteBuffer, 0, 1);
}
@Override
public void write(byte[] buf) throws IOException {
write(buf, 0, buf.length);
}
@Override
public void write(byte[] buf, int off, int siz) throws IOException {
checkClosed();
try {
writeToCopy(buf, off, siz);
} catch (SQLException se) {
throw new IOException("Write to copy failed.", se);
}
}
private void checkClosed() throws IOException {
if (op == null) {
throw new IOException(GT.tr("This copy stream is closed."));
}
}
@Override
public void close() throws IOException {
// Don't complain about a double close.
CopyIn op = this.op;
if (op == null) {
return;
}
if (op.isActive()) {
try {
endCopy();
} catch (SQLException se) {
throw new IOException("Ending write to copy failed.", se);
}
}
this.op = null;
}
@Override
public void flush() throws IOException {
checkClosed();
try {
getOp().writeToCopy(copyBuffer, 0, at);
at = 0;
getOp().flushCopy();
} catch (SQLException e) {
throw new IOException("Unable to flush stream", e);
}
}
@Override
public void writeToCopy(byte[] buf, int off, int siz) throws SQLException {
if (at > 0
&& siz > copyBuffer.length - at) { // would not fit into rest of our buf, so flush buf
getOp().writeToCopy(copyBuffer, 0, at);
at = 0;
}
if (siz > copyBuffer.length) { // would still not fit into buf, so just pass it through
getOp().writeToCopy(buf, off, siz);
} else { // fits into our buf, so save it there
System.arraycopy(buf, off, copyBuffer, at, siz);
at += siz;
}
}
@Override
public void writeToCopy(ByteStreamWriter from) throws SQLException {
if (at > 0) {
// flush existing buffer so order is preserved
getOp().writeToCopy(copyBuffer, 0, at);
at = 0;
}
getOp().writeToCopy(from);
}
@Override
public int getFormat() {
return getOp().getFormat();
}
@Override
public int getFieldFormat(int field) {
return getOp().getFieldFormat(field);
}
@Override
public void cancelCopy() throws SQLException {
getOp().cancelCopy();
}
@Override
public int getFieldCount() {
return getOp().getFieldCount();
}
@Override
public boolean isActive() {
return op != null && getOp().isActive();
}
@Override
public void flushCopy() throws SQLException {
getOp().flushCopy();
}
@Override
public long endCopy() throws SQLException {
if (at > 0) {
getOp().writeToCopy(copyBuffer, 0, at);
}
getOp().endCopy();
return getHandledRowCount();
}
@Override
public long getHandledRowCount() {
return getOp().getHandledRowCount();
}
}
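/*
 * Usage sketch (editor's addition; SQL and file name are placeholders): feed a COPY FROM STDIN
 * operation through the OutputStream view; closing the stream ends the copy.
 *
 *   try (OutputStream out = new PGCopyOutputStream(pgConnection, "COPY mytable FROM STDIN");
 *        InputStream in = new FileInputStream("mytable.copy")) {
 *     byte[] buf = new byte[8192];
 *     int n;
 *     while ((n = in.read(buf)) != -1) {
 *       out.write(buf, 0, n);
 *     }
 *   }
 */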
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/ 0040775 0000000 0000000 00000000000 00000250600 022572 5 ustar 00 0000000 0000000 postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/AsciiStringInterner.java 0100664 0000000 0000000 00000026055 00000250600 027370 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2020, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.core;
import static org.postgresql.util.internal.Nullness.castNonNull;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.io.IOException;
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.SoftReference;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
/**
* Provides the canonicalization/interning of {@code String} instances which contain only ascii characters,
* keyed by the {@code byte[]} representation (in ascii).
*
*
* The values are stored in {@link SoftReference}s, allowing them to be garbage collected if not in use and there is
* memory pressure.
*
*
*
* NOTE: Instances are safe for concurrent use.
*
*
* @author Brett Okken
*/
final class AsciiStringInterner {
private abstract static class BaseKey {
private final int hash;
BaseKey(int hash) {
this.hash = hash;
}
@Override
public final int hashCode() {
return hash;
}
@Override
public final boolean equals(/* @Nullable */ Object obj) {
if (obj == this) {
return true;
}
if (!(obj instanceof BaseKey)) {
return false;
}
final BaseKey other = (BaseKey) obj;
return equalsBytes(other);
}
abstract boolean equalsBytes(BaseKey other);
abstract boolean equals(byte[] other, int offset, int length);
abstract void appendString(StringBuilder sb);
}
/**
* Only used for lookups, never to actually store entries.
*/
private static class TempKey extends BaseKey {
final byte[] bytes;
final int offset;
final int length;
TempKey(int hash, byte[] bytes, int offset, int length) {
super(hash);
this.bytes = bytes;
this.offset = offset;
this.length = length;
}
@Override
boolean equalsBytes(BaseKey other) {
return other.equals(bytes, offset, length);
}
@Override
public boolean equals(byte[] other, int offset, int length) {
return arrayEquals(this.bytes, this.offset, this.length, other, offset, length);
}
@Override
void appendString(StringBuilder sb) {
for (int i = offset, j = offset + length; i < j; i++) {
sb.append((char) bytes[i]);
}
}
}
/**
* Instance used for inserting values into the cache. The {@code byte[]} must be a copy
* that will never be mutated.
*/
private static final class Key extends BaseKey {
final byte[] key;
Key(byte[] key, int hash) {
super(hash);
this.key = key;
}
/**
* {@inheritDoc}
*/
@Override
boolean equalsBytes(BaseKey other) {
return other.equals(key, 0, key.length);
}
@Override
public boolean equals(byte[] other, int offset, int length) {
return arrayEquals(this.key, 0, this.key.length, other, offset, length);
}
/**
* {@inheritDoc}
*/
@Override
void appendString(StringBuilder sb) {
for (int i = 0; i < key.length; i++) {
sb.append((char) key[i]);
}
}
}
/**
* Custom {@link SoftReference} implementation which maintains a reference to the key in the cache,
* which allows aggressive cleaning when garbage collector collects the {@code String} instance.
*/
private final class StringReference extends SoftReference<String> {
private final BaseKey key;
StringReference(BaseKey key, String referent) {
super(referent, refQueue);
this.key = key;
}
void dispose() {
cache.remove(key, this);
}
}
/**
* Contains the canonicalized values, keyed by the ascii {@code byte[]}.
*/
final ConcurrentMap<BaseKey, SoftReference<String>> cache = new ConcurrentHashMap<>(128);
/**
* Used for {@link Reference} as values in {@code cache}.
*/
final ReferenceQueue<String> refQueue = new ReferenceQueue<>();
/**
* Preemptively populates a value into the cache. This is intended to be used with {@code String} constants
* which are frequently used. While this can work with other {@code String} values, if val is ever
* garbage collected, it will not be actively removed from this instance.
*
* @param val The value to intern. Must not be {@code null}.
* @return Indication whether {@code val} is an ascii String and was placed into the cache.
*/
public boolean putString(String val) {
//ask for utf-8 so that we can detect if any of the characters are not ascii
final byte[] copy = val.getBytes(StandardCharsets.UTF_8);
final int hash = hashKey(copy, 0, copy.length);
if (hash == 0) {
return false;
}
final Key key = new Key(copy, hash);
//we are assuming this is a java interned string constant, so this is unlikely to ever be
//reclaimed. so there is no value in using the custom StringReference or hand off to
//the refQueue.
//on the outside chance it actually does get reclaimed, it will just hang around as an
//empty reference in the map unless/until attempted to be retrieved
cache.put(key, new SoftReference<String>(val));
return true;
}
/**
* Produces a {@link String} instance for the given {@code bytes}. If all are valid ascii (i.e. {@code >= 0})
* either an existing value will be returned, or the newly created {@code String} will be stored before being
* returned.
*
*
* If non-ascii bytes are discovered, the encoding will be used to
* {@link Encoding#decode(byte[], int, int) decode} and that value will be returned (but not stored).
*
*
* @param bytes The bytes of the String. Must not be {@code null}.
* @param offset Offset into {@code bytes} to start.
* @param length The number of bytes in {@code bytes} which are relevant.
* @param encoding To use if non-ascii bytes are seen.
* @return Decoded {@code String} from {@code bytes}.
* @throws IOException If an error occurs decoding with the {@code Encoding}.
*/
public String getString(byte[] bytes, int offset, int length, Encoding encoding) throws IOException {
if (length == 0) {
return "";
}
final int hash = hashKey(bytes, offset, length);
// 0 indicates the presence of a non-ascii character - defer to encoding to create the string
if (hash == 0) {
return encoding.decode(bytes, offset, length);
}
cleanQueue();
// create a TempKey with the byte[] given
final TempKey tempKey = new TempKey(hash, bytes, offset, length);
SoftReference<String> ref = cache.get(tempKey);
if (ref != null) {
final String val = ref.get();
if (val != null) {
return val;
}
}
// in order to insert we need to create a "real" key with copy of bytes that will not be changed
final byte[] copy = Arrays.copyOfRange(bytes, offset, offset + length);
final Key key = new Key(copy, hash);
final String value = new String(copy, StandardCharsets.US_ASCII);
// handle case where a concurrent thread has populated the map or existing value has cleared reference
ref = cache.compute(key, (k, v) -> {
if (v == null) {
return new StringReference(key, value);
}
final String val = v.get();
return val != null ? v : new StringReference(key, value);
});
return castNonNull(ref.get());
}
/**
* Produces a {@link String} instance for the given {@code bytes}.
*
*
* If all are valid ascii (i.e. {@code >= 0}) and a corresponding {@code String} value exists, it
* will be returned. If no value exists, a {@code String} will be created, but not stored.
*
*
*
* If non-ascii bytes are discovered, the encoding will be used to
* {@link Encoding#decode(byte[], int, int) decode} and that value will be returned (but not stored).
*
*
* @param bytes The bytes of the String. Must not be {@code null}.
* @param offset Offset into {@code bytes} to start.
* @param length The number of bytes in {@code bytes} which are relevant.
* @param encoding To use if non-ascii bytes are seen.
* @return Decoded {@code String} from {@code bytes}.
* @throws IOException If an error occurs decoding with the {@code Encoding}.
*/
public String getStringIfPresent(byte[] bytes, int offset, int length, Encoding encoding) throws IOException {
if (length == 0) {
return "";
}
final int hash = hashKey(bytes, offset, length);
// 0 indicates the presence of a non-ascii character - defer to encoding to create the string
if (hash == 0) {
return encoding.decode(bytes, offset, length);
}
cleanQueue();
// create a TempKey with the byte[] given
final TempKey tempKey = new TempKey(hash, bytes, offset, length);
SoftReference<String> ref = cache.get(tempKey);
if (ref != null) {
final String val = ref.get();
if (val != null) {
return val;
}
}
return new String(bytes, offset, length, StandardCharsets.US_ASCII);
}
/**
* Process any entries in {@link #refQueue} to purge from the {@link #cache}.
* @see StringReference#dispose()
*/
private void cleanQueue() {
Reference<? extends String> ref;
while ((ref = refQueue.poll()) != null) {
((StringReference) ref).dispose();
}
}
/**
* Generates a hash value for the relevant entries in bytes as long as all values are ascii ({@code >= 0}).
* @return hash code for relevant bytes, or {@code 0} if non-ascii bytes present.
*/
private static int hashKey(byte[] bytes, int offset, int length) {
int result = 1;
for (int i = offset, j = offset + length; i < j; i++) {
final byte b = bytes[i];
// bytes are signed values. all ascii values are positive
if (b < 0) {
return 0;
}
result = 31 * result + b;
}
return result;
}
/**
* Performs an equality check between {@code a} and {@code b} (with corresponding offset/length values).
*
* The {@code static boolean equals(byte[], int, int, byte[], int, int)} method in {@link java.util.Arrays}
* is optimized for longer {@code byte[]} instances than is expected to be seen here.
*
*/
static boolean arrayEquals(byte[] a, int aOffset, int aLength, byte[] b, int bOffset, int bLength) {
if (aLength != bLength) {
return false;
}
//TODO: in jdk9, could use VarHandle to read 4 bytes at a time as an int for comparison
// or 8 bytes as a long - though we likely expect short values here
for (int i = 0; i < aLength; i++) {
if (a[aOffset + i] != b[bOffset + i]) {
return false;
}
}
return true;
}
/**
* {@inheritDoc}
*/
@Override
public String toString() {
final StringBuilder sb = new StringBuilder(32 + (8 * cache.size()));
sb.append("AsciiStringInterner [");
cache.forEach((k, v) -> {
sb.append('\'');
k.appendString(sb);
sb.append("', ");
});
//replace trailing ', ' with ']';
final int length = sb.length();
if (length > 21) {
sb.setLength(sb.length() - 2);
}
sb.append(']');
return sb.toString();
}
}
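/*
 * Behaviour sketch (editor's addition, for illustration only; this class is package-private and
 * driver-internal): repeated ascii byte sequences resolve to one cached String instance, while
 * non-ascii input falls back to the supplied Encoding. "encoding" stands for any Encoding instance.
 *
 *   AsciiStringInterner interner = new AsciiStringInterner();
 *   byte[] name = "customer_id".getBytes(StandardCharsets.US_ASCII);
 *   String a = interner.getString(name, 0, name.length, encoding);
 *   String b = interner.getString(name, 0, name.length, encoding);
 *   // a == b while the cached SoftReference has not been cleared
 */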
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/BaseConnection.java 0100664 0000000 0000000 00000020063 00000250600 026325 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2003, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.core;
import org.postgresql.PGConnection;
import org.postgresql.PGProperty;
import org.postgresql.jdbc.FieldMetadata;
import org.postgresql.jdbc.TimestampUtils;
import org.postgresql.util.LruCache;
import org.postgresql.xml.PGXmlFactoryFactory;
// import org.checkerframework.checker.nullness.qual.Nullable;
// import org.checkerframework.checker.nullness.qual.PolyNull;
// import org.checkerframework.dataflow.qual.Pure;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.TimerTask;
import java.util.logging.Logger;
/**
* Driver-internal connection interface. Application code should not use this interface.
*/
public interface BaseConnection extends PGConnection, Connection {
/**
* Cancel the current query executing on this connection.
*
* @throws SQLException if something goes wrong.
*/
@Override
void cancelQuery() throws SQLException;
/**
* Execute a SQL query that returns a single resultset. Never causes a new transaction to be
* started regardless of the autocommit setting.
*
* @param s the query to execute
* @return the (non-null) returned resultset
* @throws SQLException if something goes wrong.
*/
ResultSet execSQLQuery(String s) throws SQLException;
ResultSet execSQLQuery(String s, int resultSetType, int resultSetConcurrency)
throws SQLException;
/**
* Execute a SQL query that does not return results. Never causes a new transaction to be started
* regardless of the autocommit setting.
*
* @param s the query to execute
* @throws SQLException if something goes wrong.
*/
void execSQLUpdate(String s) throws SQLException;
/**
* Get the QueryExecutor implementation for this connection.
*
* @return the (non-null) executor
*/
QueryExecutor getQueryExecutor();
/**
* Internal protocol for work with physical and logical replication. Physical replication available
* only since PostgreSQL version 9.1. Logical replication available only since PostgreSQL version 9.4.
*
* @return not null replication protocol
*/
ReplicationProtocol getReplicationProtocol();
/**
* Construct and return an appropriate object for the given type and value. This only considers
* the types registered via {@link org.postgresql.PGConnection#addDataType(String, Class)} and
* {@link org.postgresql.PGConnection#addDataType(String, String)}.
*
* If no class is registered as handling the given type, then a generic
* {@link org.postgresql.util.PGobject} instance is returned.
*
* value or byteValue must be non-null
* @param type the backend typename
* @param value the type-specific string representation of the value
* @param byteValue the type-specific binary representation of the value
* @return an appropriate object; never null.
* @throws SQLException if something goes wrong
*/
Object getObject(String type, /* @Nullable */ String value, byte /* @Nullable */ [] byteValue)
throws SQLException;
/* @Pure */
Encoding getEncoding() throws SQLException;
TypeInfo getTypeInfo();
/**
* Check if we have at least a particular server version.
*
* The input version is of the form xxyyzz, matching a PostgreSQL version like xx.yy.zz. So 9.0.12
* is 90012.
*
* @param ver the server version to check, of the form xxyyzz eg 90401
* @return true if the server version is at least "ver".
*/
boolean haveMinimumServerVersion(int ver);
/**
* Check if we have at least a particular server version.
*
* The input version is of the form xxyyzz, matching a PostgreSQL version like xx.yy.zz. So 9.0.12
* is 90012.
*
* @param ver the server version to check
* @return true if the server version is at least "ver".
*/
boolean haveMinimumServerVersion(Version ver);
/**
* Encode a string using the database's client_encoding (usually UTF8, but can vary on older
* server versions). This is used when constructing synthetic resultsets (for example, in metadata
* methods).
*
* @param str the string to encode
* @return an encoded representation of the string
* @throws SQLException if something goes wrong.
*/
byte /* @PolyNull */ [] encodeString(/* @PolyNull */ String str) throws SQLException;
/**
* Escapes a string for use as string-literal within an SQL command. The method chooses the
* applicable escaping rules based on the value of {@link #getStandardConformingStrings()}.
*
* @param str a string value
* @return the escaped representation of the string
* @throws SQLException if the string contains a {@code \0} character
*/
String escapeString(String str) throws SQLException;
/**
* Returns whether the server treats string-literals according to the SQL standard or if it uses
* traditional PostgreSQL escaping rules. Versions up to 8.1 always treated backslashes as escape
* characters in string-literals. Since 8.2, this depends on the value of the
* {@code standard_conforming_strings} server variable.
*
* @return true if the server treats string literals according to the SQL standard
* @see QueryExecutor#getStandardConformingStrings()
*/
boolean getStandardConformingStrings();
// Ew. Quick hack to give access to the connection-specific utils implementation.
@Deprecated
TimestampUtils getTimestampUtils();
// Get the per-connection logger.
Logger getLogger();
// Get the bind-string-as-varchar config flag
boolean getStringVarcharFlag();
/**
* Get the current transaction state of this connection.
*
* @return current transaction state of this connection
*/
TransactionState getTransactionState();
/**
* Returns true if value for the given oid should be sent using binary transfer. False if value
* should be sent using text transfer.
*
* @param oid The oid to check.
* @return True for binary transfer, false for text transfer.
*/
boolean binaryTransferSend(int oid);
/**
* Return whether to disable column name sanitation.
*
* @return true column sanitizer is disabled
*/
boolean isColumnSanitiserDisabled();
/**
* Schedule a TimerTask for later execution. The task will be scheduled with the shared Timer for
* this connection.
*
* @param timerTask timer task to schedule
* @param milliSeconds delay in milliseconds
*/
void addTimerTask(TimerTask timerTask, long milliSeconds);
/**
* Invoke purge() on the underlying shared Timer so that internal resources will be released.
*/
void purgeTimerTasks();
/**
* Return metadata cache for given connection.
*
* @return metadata cache
*/
LruCache<FieldMetadata.Key, FieldMetadata> getFieldMetadataCache();
CachedQuery createQuery(String sql, boolean escapeProcessing, boolean isParameterized,
String... columnNames)
throws SQLException;
/**
* By default, the connection resets its statement cache when a deallocate all/discard all
* message is observed.
* This API allows disabling that feature for testing purposes.
*
* @param flushCacheOnDeallocate true if statement cache should be reset when "deallocate/discard" message observed
*/
void setFlushCacheOnDeallocate(boolean flushCacheOnDeallocate);
/**
* Indicates if statements to backend should be hinted as read only.
*
* @return Indication if hints to backend (such as when transaction begins)
* should be read only.
* @see PGProperty#READ_ONLY_MODE
*/
boolean hintReadOnly();
/**
* Retrieve the factory to instantiate XML processing factories.
*
* @return The factory to use to instantiate XML processing factories
* @throws SQLException if the class cannot be found or instantiated.
*/
PGXmlFactoryFactory getXmlFactoryFactory() throws SQLException;
/**
* Indicates if error details from the server should be included in logging and exceptions.
*
* @return true if should be included and passed on to other exceptions
*/
boolean getLogServerErrorDetail();
}
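/*
 * Note (editor's addition): the integer accepted by haveMinimumServerVersion(int) packs the server
 * version as xxyyzz, so for example:
 *
 *   connection.haveMinimumServerVersion(90012);   // at least 9.0.12
 *   connection.haveMinimumServerVersion(90401);   // at least 9.4.1
 */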
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/BaseQueryKey.java 0100664 0000000 0000000 00000003766 00000250600 026017 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2003, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.core;
import org.postgresql.util.CanEstimateSize;
// import org.checkerframework.checker.nullness.qual.Nullable;
/**
* This class is used as a cache key for simple statements that have no "returning columns".
* Prepared statements that have no returning columns use just {@code String sql} as a key.
* Simple and Prepared statements that have returning columns use {@link QueryWithReturningColumnsKey}
* as a cache key.
*/
class BaseQueryKey implements CanEstimateSize {
public final String sql;
public final boolean isParameterized;
public final boolean escapeProcessing;
BaseQueryKey(String sql, boolean isParameterized, boolean escapeProcessing) {
this.sql = sql;
this.isParameterized = isParameterized;
this.escapeProcessing = escapeProcessing;
}
@Override
public String toString() {
return "BaseQueryKey{"
+ "sql='" + sql + '\''
+ ", isParameterized=" + isParameterized
+ ", escapeProcessing=" + escapeProcessing
+ '}';
}
@Override
public long getSize() {
if (sql == null) { // just in case
return 16;
}
return 16 + sql.length() * 2L; // 2 bytes per char, revise with Java 9's compact strings
}
@Override
public boolean equals(/* @Nullable */ Object o) {
if (this == o) {
return true;
}
if (o == null || getClass() != o.getClass()) {
return false;
}
BaseQueryKey that = (BaseQueryKey) o;
if (isParameterized != that.isParameterized) {
return false;
}
if (escapeProcessing != that.escapeProcessing) {
return false;
}
return sql != null ? sql.equals(that.sql) : that.sql == null;
}
@Override
public int hashCode() {
int result = sql != null ? sql.hashCode() : 0;
result = 31 * result + (isParameterized ? 1 : 0);
result = 31 * result + (escapeProcessing ? 1 : 0);
return result;
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/BaseStatement.java 0100664 0000000 0000000 00000005213 00000250600 026172 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2003, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.core;
import org.postgresql.PGStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.List;
/**
* Driver-internal statement interface. Application code should not use this interface.
*/
public interface BaseStatement extends PGStatement, Statement {
/**
* Create a synthetic resultset from data provided by the driver.
*
* @param fields the column metadata for the resultset
* @param tuples the resultset data
* @return the new ResultSet
* @throws SQLException if something goes wrong
*/
ResultSet createDriverResultSet(Field[] fields, List<Tuple> tuples) throws SQLException;
/**
* Create a resultset from data retrieved from the server.
*
* @param originalQuery the query that generated this resultset; used when dealing with updateable
* resultsets
* @param fields the column metadata for the resultset
* @param tuples the resultset data
* @param cursor the cursor to use to retrieve more data from the server; if null, no additional
* data is present.
* @return the new ResultSet
* @throws SQLException if something goes wrong
*/
ResultSet createResultSet(Query originalQuery, Field[] fields, List<Tuple> tuples,
ResultCursor cursor) throws SQLException;
/**
* Execute a query, passing additional query flags.
*
* @param sql the query to execute (JDBC-style query)
* @param flags additional {@link QueryExecutor} flags for execution; these are bitwise-ORed into
* the default flags.
* @return true if there is a result set
* @throws SQLException if something goes wrong.
*/
boolean executeWithFlags(String sql, int flags) throws SQLException;
/**
* Execute a query, passing additional query flags.
*
* @param cachedQuery the query to execute (native to PostgreSQL)
* @param flags additional {@link QueryExecutor} flags for execution; these are bitwise-ORed into
* the default flags.
* @return true if there is a result set
* @throws SQLException if something goes wrong.
*/
boolean executeWithFlags(CachedQuery cachedQuery, int flags) throws SQLException;
/**
* Execute a prepared query, passing additional query flags.
*
* @param flags additional {@link QueryExecutor} flags for execution; these are bitwise-ORed into
* the default flags.
* @return true if there is a result set
* @throws SQLException if something goes wrong.
*/
boolean executeWithFlags(int flags) throws SQLException;
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/CachedQuery.java 0100664 0000000 0000000 00000004075 00000250600 025635 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2015, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.core;
import org.postgresql.util.CanEstimateSize;
/**
* Stores information on the parsed JDBC query. It is used to cut parsing overhead when executing
* the same query through {@link java.sql.Connection#prepareStatement(String)}.
*/
public class CachedQuery implements CanEstimateSize {
/**
* Cache key. {@link String} or {@code org.postgresql.util.CanEstimateSize}.
*/
public final Object key;
public final Query query;
public final boolean isFunction;
private int executeCount;
public CachedQuery(Object key, Query query, boolean isFunction) {
assert key instanceof String || key instanceof CanEstimateSize
: "CachedQuery.key should either be String or implement CanEstimateSize."
+ " Actual class is " + key.getClass();
this.key = key;
this.query = query;
this.isFunction = isFunction;
}
public void increaseExecuteCount() {
if (executeCount < Integer.MAX_VALUE) {
executeCount++;
}
}
public void increaseExecuteCount(int inc) {
int newValue = executeCount + inc;
if (newValue > 0) { // if overflows, just ignore the update
executeCount = newValue;
}
}
/**
* Number of times this statement has been used.
*
* @return number of times this statement has been used
*/
public int getExecuteCount() {
return executeCount;
}
@Override
public long getSize() {
long queryLength;
if (key instanceof String) {
queryLength = ((String) key).length() * 2L; // 2 bytes per char, revise with Java 9's compact strings
} else {
queryLength = ((CanEstimateSize) key).getSize();
}
return queryLength * 2 /* original query and native sql */
+ 100L /* entry in hash map, CachedQuery wrapper, etc */;
}
@Override
public String toString() {
return "CachedQuery{"
+ "executeCount=" + executeCount
+ ", query=" + query
+ ", isFunction=" + isFunction
+ '}';
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/CachedQueryCreateAction.java 0100664 0000000 0000000 00000005147 00000250600 030120 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2015, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.core;
import static org.postgresql.util.internal.Nullness.castNonNull;
import org.postgresql.jdbc.PreferQueryMode;
import org.postgresql.util.LruCache;
import java.sql.SQLException;
import java.util.List;
/**
* Creates an instance of {@link CachedQuery} for a given connection.
*/
class CachedQueryCreateAction implements LruCache.CreateAction<Object, CachedQuery> {
private static final String[] EMPTY_RETURNING = new String[0];
private final QueryExecutor queryExecutor;
CachedQueryCreateAction(QueryExecutor queryExecutor) {
this.queryExecutor = queryExecutor;
}
@Override
public CachedQuery create(Object key) throws SQLException {
assert key instanceof String || key instanceof BaseQueryKey
: "Query key should be String or BaseQueryKey. Given " + key.getClass() + ", sql: "
+ key;
BaseQueryKey queryKey;
String parsedSql;
if (key instanceof BaseQueryKey) {
queryKey = (BaseQueryKey) key;
parsedSql = queryKey.sql;
} else {
queryKey = null;
parsedSql = (String) key;
}
if (key instanceof String || castNonNull(queryKey).escapeProcessing) {
parsedSql =
Parser.replaceProcessing(parsedSql, true, queryExecutor.getStandardConformingStrings());
}
boolean isFunction;
if (key instanceof CallableQueryKey) {
JdbcCallParseInfo callInfo =
Parser.modifyJdbcCall(parsedSql, queryExecutor.getStandardConformingStrings(),
queryExecutor.getServerVersionNum(), queryExecutor.getEscapeSyntaxCallMode());
parsedSql = callInfo.getSql();
isFunction = callInfo.isFunction();
} else {
isFunction = false;
}
boolean isParameterized = key instanceof String || castNonNull(queryKey).isParameterized;
boolean splitStatements = isParameterized || queryExecutor.getPreferQueryMode().compareTo(PreferQueryMode.EXTENDED) >= 0;
String[] returningColumns;
if (key instanceof QueryWithReturningColumnsKey) {
returningColumns = ((QueryWithReturningColumnsKey) key).columnNames;
} else {
returningColumns = EMPTY_RETURNING;
}
    List<NativeQuery> queries = Parser.parseJdbcSql(parsedSql,
queryExecutor.getStandardConformingStrings(), isParameterized, splitStatements,
queryExecutor.isReWriteBatchedInsertsEnabled(), queryExecutor.getQuoteReturningIdentifiers(),
returningColumns
);
Query query = queryExecutor.wrap(queries);
return new CachedQuery(key, query, isFunction);
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/CallableQueryKey.java 0100664 0000000 0000000 00000002107 00000250600 026630 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2015, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.core;
// import org.checkerframework.checker.nullness.qual.Nullable;
/**
* Serves as a cache key for {@link java.sql.CallableStatement}.
 * Callable statements require some special parsing before use (due to the JDBC {@code {?= call...}}
 * escape syntax), thus a special cache key class is used to trigger proper parsing for callable statements.
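 *
 * <p>Illustrative sketch of the escape syntax this key exists for (the function name and the
 * connection are placeholders, not part of the driver):</p>
 * <pre>{@code
 * try (java.sql.CallableStatement cs = connection.prepareCall("{?= call upper(?)}")) {
 *   cs.registerOutParameter(1, java.sql.Types.VARCHAR);
 *   cs.setString(2, "pgjdbc");
 *   cs.execute();
 *   String result = cs.getString(1); // "PGJDBC"
 * }
 * }</pre>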
*/
class CallableQueryKey extends BaseQueryKey {
CallableQueryKey(String sql) {
super(sql, true, true);
}
@Override
public String toString() {
return "CallableQueryKey{"
+ "sql='" + sql + '\''
+ ", isParameterized=" + isParameterized
+ ", escapeProcessing=" + escapeProcessing
+ '}';
}
@Override
public int hashCode() {
return super.hashCode() * 31;
}
@Override
public boolean equals(/* @Nullable */ Object o) {
// Nothing interesting here, overriding equals to make hashCode and equals paired
return super.equals(o);
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/CommandCompleteParser.java 0100664 0000000 0000000 00000005735 00000250600 027670 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2018, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.core;
import org.postgresql.util.GT;
import org.postgresql.util.PSQLException;
import org.postgresql.util.PSQLState;
// import org.checkerframework.checker.nullness.qual.Nullable;
/**
* Parses {@code oid} and {@code rows} from a {@code CommandComplete (B)} message (end of Execute).
*/
public final class CommandCompleteParser {
private long oid;
private long rows;
public CommandCompleteParser() {
}
public long getOid() {
return oid;
}
public long getRows() {
return rows;
}
void set(long oid, long rows) {
this.oid = oid;
this.rows = rows;
}
/**
* Parses {@code CommandComplete (B)} message.
* Status is in the format of "COMMAND OID ROWS" where both 'OID' and 'ROWS' are optional
* and COMMAND can have spaces within it, like CREATE TABLE.
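   *
   * <p>For example (tag values are illustrative): {@code "INSERT 0 42"} yields {@code oid = 0} and
   * {@code rows = 42}, while {@code "UPDATE 7"} yields {@code rows = 7}:</p>
   * <pre>{@code
   * CommandCompleteParser parser = new CommandCompleteParser();
   * parser.parse("INSERT 0 42");
   * long rows = parser.getRows(); // 42
   * }</pre>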
*
* @param status COMMAND OID ROWS message
* @throws PSQLException in case the status cannot be parsed
*/
public void parse(String status) throws PSQLException {
// Assumption: command neither starts nor ends with a digit
if (!Parser.isDigitAt(status, status.length() - 1)) {
// For CALL statements, JDBC requires an update count of -1
set(0, "CALL".equals(status) ? -1 : 0);
return;
}
// Scan backwards, while searching for a maximum of two number groups
// COMMAND OID ROWS
// COMMAND ROWS
long oid = 0;
long rows = 0;
try {
int lastSpace = status.lastIndexOf(' ');
// Status ends with a digit => it is ROWS
if (Parser.isDigitAt(status, lastSpace + 1)) {
rows = Parser.parseLong(status, lastSpace + 1, status.length());
if (Parser.isDigitAt(status, lastSpace - 1)) {
int penultimateSpace = status.lastIndexOf(' ', lastSpace - 1);
if (Parser.isDigitAt(status, penultimateSpace + 1)) {
oid = Parser.parseLong(status, penultimateSpace + 1, lastSpace);
}
}
}
} catch (NumberFormatException e) {
// This should only occur if the oid or rows are out of 0..Long.MAX_VALUE range
throw new PSQLException(
GT.tr("Unable to parse the count in command completion tag: {0}.", status),
PSQLState.CONNECTION_FAILURE, e);
}
set(oid, rows);
}
@Override
public String toString() {
return "CommandStatus{"
+ "oid=" + oid
+ ", rows=" + rows
+ '}';
}
@Override
public boolean equals(/* @Nullable */ Object o) {
if (this == o) {
return true;
}
if (o == null || getClass() != o.getClass()) {
return false;
}
CommandCompleteParser that = (CommandCompleteParser) o;
if (oid != that.oid) {
return false;
}
return rows == that.rows;
}
@Override
public int hashCode() {
int result = (int) (oid ^ (oid >>> 32));
result = 31 * result + (int) (rows ^ (rows >>> 32));
return result;
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/ConnectionFactory.java 0100664 0000000 0000000 00000007032 00000250600 027063 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2004, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
// Copyright (c) 2004, Open Cloud Limited.
package org.postgresql.core;
import org.postgresql.PGProperty;
import org.postgresql.core.v3.ConnectionFactoryImpl;
import org.postgresql.util.GT;
import org.postgresql.util.HostSpec;
import org.postgresql.util.PSQLException;
import org.postgresql.util.PSQLState;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.io.IOException;
import java.sql.SQLException;
import java.util.Properties;
import java.util.logging.Level;
import java.util.logging.Logger;
/**
* Handles protocol-specific connection setup.
*
* @author Oliver Jowett (oliver@opencloud.com)
*/
public abstract class ConnectionFactory {
private static final Logger LOGGER = Logger.getLogger(ConnectionFactory.class.getName());
/**
* Establishes and initializes a new connection.
*
* If the "protocolVersion" property is specified, only that protocol version is tried. Otherwise,
* all protocols are tried in order, falling back to older protocols as necessary.
*
   * Currently, only protocol version 3 (7.4+) is supported.
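   *
   * <p>Illustrative usage sketch (host, port, and credentials are placeholders):</p>
   * <pre>{@code
   * Properties info = new Properties();
   * info.setProperty("user", "postgres");
   * info.setProperty("password", "secret");
   * QueryExecutor queryExecutor = ConnectionFactory.openConnection(
   *     new HostSpec[]{new HostSpec("localhost", 5432)}, info);
   * }</pre>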
*
* @param hostSpecs at least one host and port to connect to; multiple elements for round-robin
* failover
* @param info extra properties controlling the connection; notably, "password" if present
* supplies the password to authenticate with.
* @return the new, initialized, connection
* @throws SQLException if the connection could not be established.
*/
public static QueryExecutor openConnection(HostSpec[] hostSpecs,
Properties info) throws SQLException {
String protoName = PGProperty.PROTOCOL_VERSION.getOrDefault(info);
if (protoName != null && !protoName.isEmpty()
&& (protoName.equalsIgnoreCase("3")
|| protoName.equalsIgnoreCase("3.0")
|| protoName.equalsIgnoreCase("3.2"))) {
ConnectionFactory connectionFactory = new ConnectionFactoryImpl();
QueryExecutor queryExecutor = connectionFactory.openConnectionImpl(
hostSpecs, info);
if (queryExecutor != null) {
return queryExecutor;
}
}
throw new PSQLException(
GT.tr("A connection could not be made using the requested protocol {0}.", protoName),
PSQLState.CONNECTION_UNABLE_TO_CONNECT);
}
/**
* Implementation of {@link #openConnection} for a particular protocol version. Implemented by
* subclasses of {@link ConnectionFactory}.
*
* @param hostSpecs at least one host and port to connect to; multiple elements for round-robin
* failover
* @param info extra properties controlling the connection; notably, "password" if present
* supplies the password to authenticate with.
   * @return the new, initialized, connection, or {@code null} if this protocol version is not
   *     supported by the server.
* @throws SQLException if the connection could not be established for a reason other than
* protocol version incompatibility.
*/
public abstract QueryExecutor openConnectionImpl(HostSpec[] hostSpecs, Properties info) throws SQLException;
/**
* Safely close the given stream.
*
* @param newStream The stream to close.
*/
protected void closeStream(/* @Nullable */ PGStream newStream) {
if (newStream != null) {
try {
newStream.close();
} catch (IOException e) {
LOGGER.log(Level.WARNING, "Failed to closed stream with error: {0}", e);
}
}
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/Encoding.java 0100664 0000000 0000000 00000030307 00000250600 025163 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2003, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.core;
// import org.checkerframework.checker.nullness.qual.PolyNull;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.io.OutputStreamWriter;
import java.io.Reader;
import java.io.Writer;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.logging.Level;
import java.util.logging.Logger;
/**
* Representation of a particular character encoding.
*/
public class Encoding {
private static final Logger LOGGER = Logger.getLogger(Encoding.class.getName());
private static final Encoding DEFAULT_ENCODING = new Encoding();
private static final Encoding UTF8_ENCODING = new Encoding(StandardCharsets.UTF_8, true);
/*
* Preferred JVM encodings for backend encodings.
*/
  private static final HashMap<String, String[]> encodings = new HashMap<>();
static {
//Note: this list should match the set of supported server
// encodings found in backend/util/mb/encnames.c
encodings.put("SQL_ASCII", new String[]{"ASCII", "US-ASCII"});
encodings.put("UNICODE", new String[]{"UTF-8", "UTF8"});
encodings.put("UTF8", new String[]{"UTF-8", "UTF8"});
encodings.put("LATIN1", new String[]{"ISO8859_1"});
encodings.put("LATIN2", new String[]{"ISO8859_2"});
encodings.put("LATIN3", new String[]{"ISO8859_3"});
encodings.put("LATIN4", new String[]{"ISO8859_4"});
encodings.put("ISO_8859_5", new String[]{"ISO8859_5"});
encodings.put("ISO_8859_6", new String[]{"ISO8859_6"});
encodings.put("ISO_8859_7", new String[]{"ISO8859_7"});
encodings.put("ISO_8859_8", new String[]{"ISO8859_8"});
encodings.put("LATIN5", new String[]{"ISO8859_9"});
encodings.put("LATIN7", new String[]{"ISO8859_13"});
encodings.put("LATIN9", new String[]{"ISO8859_15_FDIS"});
encodings.put("EUC_JP", new String[]{"EUC_JP"});
encodings.put("EUC_CN", new String[]{"EUC_CN"});
encodings.put("EUC_KR", new String[]{"EUC_KR"});
encodings.put("JOHAB", new String[]{"Johab"});
encodings.put("EUC_TW", new String[]{"EUC_TW"});
encodings.put("SJIS", new String[]{"MS932", "SJIS"});
encodings.put("BIG5", new String[]{"Big5", "MS950", "Cp950"});
encodings.put("GBK", new String[]{"GBK", "MS936"});
encodings.put("UHC", new String[]{"MS949", "Cp949", "Cp949C"});
encodings.put("TCVN", new String[]{"Cp1258"});
encodings.put("WIN1256", new String[]{"Cp1256"});
encodings.put("WIN1250", new String[]{"Cp1250"});
encodings.put("WIN874", new String[]{"MS874", "Cp874"});
encodings.put("WIN", new String[]{"Cp1251"});
encodings.put("ALT", new String[]{"Cp866"});
// We prefer KOI8-U, since it is a superset of KOI8-R.
encodings.put("KOI8", new String[]{"KOI8_U", "KOI8_R"});
// If the database isn't encoding-aware then we can't have
// any preferred encodings.
encodings.put("UNKNOWN", new String[0]);
// The following encodings do not have a java equivalent
encodings.put("MULE_INTERNAL", new String[0]);
encodings.put("LATIN6", new String[0]);
encodings.put("LATIN8", new String[0]);
encodings.put("LATIN10", new String[0]);
}
static final AsciiStringInterner INTERNER = new AsciiStringInterner();
private final Charset encoding;
private final boolean fastASCIINumbers;
/**
* Uses the default charset of the JVM.
*/
private Encoding() {
this(Charset.defaultCharset());
}
/**
* Subclasses may use this constructor if they know in advance of their ASCII number
* compatibility.
*
* @param encoding charset to use
* @param fastASCIINumbers whether this encoding is compatible with ASCII numbers.
*/
protected Encoding(Charset encoding, boolean fastASCIINumbers) {
if (encoding == null) {
throw new NullPointerException("Null encoding charset not supported");
}
this.encoding = encoding;
this.fastASCIINumbers = fastASCIINumbers;
if (LOGGER.isLoggable(Level.FINEST)) {
LOGGER.log(Level.FINEST, "Creating new Encoding {0} with fastASCIINumbers {1}",
new Object[]{encoding, fastASCIINumbers});
}
}
/**
* Use the charset passed as parameter and tests at creation time whether the specified encoding
* is compatible with ASCII numbers.
*
* @param encoding charset to use
*/
protected Encoding(Charset encoding) {
this(encoding, testAsciiNumbers(encoding));
}
/**
   * Returns true if this encoding has the characters '-' and '0'..'9' in exactly the same
   * positions as ASCII.
*
* @return true if the bytes can be scanned directly for ascii numbers.
*/
public boolean hasAsciiNumbers() {
return fastASCIINumbers;
}
/**
* Construct an Encoding for a given JVM encoding.
*
* @param jvmEncoding the name of the JVM encoding
* @return an Encoding instance for the specified encoding, or an Encoding instance for the
* default JVM encoding if the specified encoding is unavailable.
*/
public static Encoding getJVMEncoding(String jvmEncoding) {
if ("UTF-8".equals(jvmEncoding)) {
return UTF8_ENCODING;
}
if (Charset.isSupported(jvmEncoding)) {
return new Encoding(Charset.forName(jvmEncoding));
}
return DEFAULT_ENCODING;
}
/**
* Construct an Encoding for a given database encoding.
*
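   * <p>For example (the resolved charset name depends on the JVM):</p>
   * <pre>{@code
   * Encoding latin1 = Encoding.getDatabaseEncoding("LATIN1");
   * String jvmName = latin1.name(); // typically "ISO-8859-1"
   * }</pre>
   *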
* @param databaseEncoding the name of the database encoding
* @return an Encoding instance for the specified encoding, or an Encoding instance for the
* default JVM encoding if the specified encoding is unavailable.
*/
public static Encoding getDatabaseEncoding(String databaseEncoding) {
if ("UTF8".equals(databaseEncoding) || "UNICODE".equals(databaseEncoding)) {
return UTF8_ENCODING;
}
// If the backend encoding is known and there is a suitable
// encoding in the JVM we use that. Otherwise we fall back
// to the default encoding of the JVM.
String[] candidates = encodings.get(databaseEncoding);
if (candidates != null) {
for (String candidate : candidates) {
LOGGER.log(Level.FINEST, "Search encoding candidate {0}", candidate);
if (Charset.isSupported(candidate)) {
return new Encoding(Charset.forName(candidate));
}
}
}
// Try the encoding name directly -- maybe the charset has been
// provided by the user.
if (Charset.isSupported(databaseEncoding)) {
return new Encoding(Charset.forName(databaseEncoding));
}
// Fall back to default JVM encoding.
LOGGER.log(Level.FINEST, "{0} encoding not found, returning default encoding", databaseEncoding);
return DEFAULT_ENCODING;
}
/**
* Indicates that string should be staged as a canonicalized value.
   *
   * <p>This is intended for use with {@code String} constants.</p>
   *
* @param string The string to maintain canonicalized reference to. Must not be {@code null}.
* @see Encoding#decodeCanonicalized(byte[], int, int)
*/
public static void canonicalize(String string) {
INTERNER.putString(string);
}
/**
* Get the name of the (JVM) encoding used.
*
* @return the JVM encoding name used by this instance.
*/
public String name() {
return encoding.name();
}
/**
* Encode a string to an array of bytes.
*
* @param s the string to encode
* @return a bytearray containing the encoded string
* @throws IOException if something goes wrong
*/
public byte /* @PolyNull */ [] encode(/* @PolyNull */ String s) throws IOException {
if (s == null) {
return null;
}
return s.getBytes(encoding);
}
/**
* Decode an array of bytes possibly into a canonicalized string.
   *
   * <p>Only ASCII-compatible encodings support canonicalization, and only ASCII {@code String}
   * values are eligible to be canonicalized.</p>
   *
* @param encodedString a byte array containing the string to decode
   * @param offset the offset in {@code encodedString} of the first byte of the encoded
   *     representation
* @param length the length, in bytes, of the encoded representation
* @return the decoded string
* @throws IOException if something goes wrong
*/
public String decodeCanonicalized(byte[] encodedString, int offset, int length) throws IOException {
if (length == 0) {
return "";
}
// if fastASCIINumbers is false, then no chance of the byte[] being ascii compatible characters
return fastASCIINumbers ? INTERNER.getString(encodedString, offset, length, this)
: decode(encodedString, offset, length);
}
public String decodeCanonicalizedIfPresent(byte[] encodedString, int offset, int length) throws IOException {
if (length == 0) {
return "";
}
// if fastASCIINumbers is false, then no chance of the byte[] being ascii compatible characters
return fastASCIINumbers ? INTERNER.getStringIfPresent(encodedString, offset, length, this)
: decode(encodedString, offset, length);
}
/**
* Decode an array of bytes possibly into a canonicalized string.
   *
   * <p>Only ASCII-compatible encodings support canonicalization, and only ASCII {@code String}
   * values are eligible to be canonicalized.</p>
   *
* @param encodedString a byte array containing the string to decode
* @return the decoded string
* @throws IOException if something goes wrong
*/
public String decodeCanonicalized(byte[] encodedString) throws IOException {
return decodeCanonicalized(encodedString, 0, encodedString.length);
}
/**
* Decode an array of bytes into a string.
*
* @param encodedString a byte array containing the string to decode
   * @param offset the offset in {@code encodedString} of the first byte of the encoded
   *     representation
* @param length the length, in bytes, of the encoded representation
* @return the decoded string
* @throws IOException if something goes wrong
*/
public String decode(byte[] encodedString, int offset, int length) throws IOException {
return new String(encodedString, offset, length, encoding);
}
/**
* Decode an array of bytes into a string.
*
* @param encodedString a byte array containing the string to decode
* @return the decoded string
* @throws IOException if something goes wrong
*/
public String decode(byte[] encodedString) throws IOException {
return decode(encodedString, 0, encodedString.length);
}
/**
* Get a Reader that decodes the given InputStream using this encoding.
*
* @param in the underlying stream to decode from
* @return a non-null Reader implementation.
* @throws IOException if something goes wrong
*/
public Reader getDecodingReader(InputStream in) throws IOException {
return new InputStreamReader(in, encoding);
}
/**
* Get a Writer that encodes to the given OutputStream using this encoding.
*
* @param out the underlying stream to encode to
* @return a non-null Writer implementation.
* @throws IOException if something goes wrong
*/
public Writer getEncodingWriter(OutputStream out) throws IOException {
return new OutputStreamWriter(out, encoding);
}
/**
* Get an Encoding using the default encoding for the JVM.
*
* @return an Encoding instance
*/
public static Encoding defaultEncoding() {
return DEFAULT_ENCODING;
}
@Override
public String toString() {
return encoding.name();
}
/**
* Checks whether this encoding is compatible with ASCII for the number characters '-' and
* '0'..'9'. Where compatible means that they are encoded with exactly same values.
*
* @return If faster ASCII number parsing can be used with this encoding.
*/
private static boolean testAsciiNumbers(Charset encoding) {
// TODO: test all postgres supported encoding to see if there are
// any which do _not_ have ascii numbers in same location
// at least all the encoding listed in the encodings hashmap have
// working ascii numbers
String test = "-0123456789";
byte[] bytes = test.getBytes(encoding);
String res = new String(bytes, StandardCharsets.US_ASCII);
return test.equals(res);
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/EncodingPredictor.java 0100664 0000000 0000000 00000011603 00000250600 027035 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2003, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.core;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.io.IOException;
/**
* Predicts encoding for error messages based on some heuristics.
 *
 * <ul>
 *   <li>For certain languages, it is known how "FATAL" is translated</li>
 *   <li>For Japanese, several common words are hardcoded</li>
 *   <li>Then try various LATIN encodings</li>
 * </ul>
*/
public class EncodingPredictor {
/**
* In certain cases the encoding is not known for sure (e.g. before authentication).
* In such cases, backend might send messages in "native to database" encoding,
   * thus pgjdbc has to guess the encoding.
*/
public static class DecodeResult {
public final String result;
public final /* @Nullable */ String encoding; // JVM name
DecodeResult(String result, /* @Nullable */ String encoding) {
this.result = result;
this.encoding = encoding;
}
}
static class Translation {
public final /* @Nullable */ String fatalText;
private final String /* @Nullable */ [] texts;
public final String language;
public final String[] encodings;
Translation(/* @Nullable */ String fatalText, String /* @Nullable */ [] texts,
String language, String... encodings) {
this.fatalText = fatalText;
this.texts = texts;
this.language = language;
this.encodings = encodings;
}
}
private static final Translation[] FATAL_TRANSLATIONS =
new Translation[]{
new Translation("ВАЖНО", null, "ru", "WIN", "ALT", "KOI8"),
new Translation("致命错误", null, "zh_CN", "EUC_CN", "GBK", "BIG5"),
new Translation("KATASTROFALNY", null, "pl", "LATIN2"),
new Translation("FATALE", null, "it", "LATIN1", "LATIN9"),
new Translation("FATAL", new String[]{"は存在しません" /* ~ does not exist */,
"ロール" /* ~ role */, "ユーザ" /* ~ user */}, "ja", "EUC_JP", "SJIS"),
new Translation(null, null, "fr/de/es/pt_BR", "LATIN1", "LATIN3", "LATIN4", "LATIN5",
"LATIN7", "LATIN9"),
};
public static /* @Nullable */ DecodeResult decode(byte[] bytes, int offset, int length) {
Encoding defaultEncoding = Encoding.defaultEncoding();
for (Translation tr : FATAL_TRANSLATIONS) {
for (String encoding : tr.encodings) {
Encoding encoder = Encoding.getDatabaseEncoding(encoding);
if (encoder == defaultEncoding) {
continue;
}
// If there is a translation for "FATAL", then try typical encodings for that language
if (tr.fatalText != null) {
byte[] encoded;
try {
byte[] tmp = encoder.encode(tr.fatalText);
encoded = new byte[tmp.length + 2];
encoded[0] = 'S';
encoded[encoded.length - 1] = 0;
System.arraycopy(tmp, 0, encoded, 1, tmp.length);
} catch (IOException e) {
continue;// should not happen
}
if (!arrayContains(bytes, offset, length, encoded, 0, encoded.length)) {
continue;
}
}
// No idea how to tell Japanese from Latin languages, thus just hard-code certain Japanese words
if (tr.texts != null) {
boolean foundOne = false;
for (String text : tr.texts) {
try {
byte[] textBytes = encoder.encode(text);
if (arrayContains(bytes, offset, length, textBytes, 0, textBytes.length)) {
foundOne = true;
break;
}
} catch (IOException e) {
// do not care, will try other encodings
}
}
if (!foundOne) {
// Error message does not have key parts, will try other encodings
continue;
}
}
try {
String decoded = encoder.decode(bytes, offset, length);
if (decoded.indexOf(65533) != -1) {
// bad character in string, try another encoding
continue;
}
return new DecodeResult(decoded, encoder.name());
} catch (IOException e) {
// do not care
}
}
}
return null;
}
private static boolean arrayContains(
byte[] first, int firstOffset, int firstLength,
byte[] second, int secondOffset, int secondLength
) {
if (firstLength < secondLength) {
return false;
}
for (int i = 0; i < firstLength; i++) {
for (; i < firstLength && first[firstOffset + i] != second[secondOffset]; i++) {
// find the first matching byte
}
int j = 1;
for (; j < secondLength && first[firstOffset + i + j] == second[secondOffset + j]; j++) {
// compare arrays
}
if (j == secondLength) {
return true;
}
}
return false;
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/Field.java 0100664 0000000 0000000 00000011612 00000250600 024456 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2003, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.core;
import org.postgresql.jdbc.FieldMetadata;
// import org.checkerframework.checker.nullness.qual.Nullable;
// import org.checkerframework.dataflow.qual.Pure;
import java.util.Locale;
public class Field {
// The V3 protocol defines two constants for the format of data
public static final int TEXT_FORMAT = 0;
public static final int BINARY_FORMAT = 1;
private final int length; // Internal Length of this field
private final int oid; // OID of the type
private final int mod; // type modifier of this field
private String columnLabel; // Column label
private int format = TEXT_FORMAT; // In the V3 protocol each field has a format
// 0 = text, 1 = binary
// In the V2 protocol all fields in a
// binary cursor are binary and all
// others are text
private final int tableOid; // OID of table ( zero if no table )
private final int positionInTable;
// Cache fields filled in by AbstractJdbc2ResultSetMetaData.fetchFieldMetaData.
// Don't use unless that has been called.
private /* @Nullable */ FieldMetadata metadata;
private int sqlType;
private String pgType = NOT_YET_LOADED;
// New string to avoid clashes with other strings
private static final String NOT_YET_LOADED = new String("pgType is not yet loaded");
/**
* Construct a field based on the information fed to it.
*
* @param name the name (column name and label) of the field
* @param oid the OID of the field
* @param length the length of the field
* @param mod modifier
*/
public Field(String name, int oid, int length, int mod) {
this(name, oid, length, mod, 0, 0);
}
/**
* Constructor without mod parameter.
*
* @param name the name (column name and label) of the field
* @param oid the OID of the field
*/
public Field(String name, int oid) {
this(name, oid, 0, -1);
}
/**
* Construct a field based on the information fed to it.
* @param columnLabel the column label of the field
* @param oid the OID of the field
* @param length the length of the field
* @param mod modifier
* @param tableOid the OID of the columns' table
* @param positionInTable the position of column in the table (first column is 1, second column is 2, etc...)
*/
public Field(String columnLabel, int oid, int length, int mod, int tableOid,
int positionInTable) {
this.columnLabel = columnLabel;
this.oid = oid;
this.length = length;
this.mod = mod;
this.tableOid = tableOid;
this.positionInTable = positionInTable;
this.metadata = tableOid == 0 ? new FieldMetadata(columnLabel) : null;
}
/**
* Returns the oid of this Field's data type.
* @return the oid of this Field's data type
*/
/* @Pure */
public int getOID() {
return oid;
}
/**
* Returns the mod of this Field's data type
* @return the mod of this Field's data type
*/
public int getMod() {
return mod;
}
/**
* Returns the column label of this Field's data type.
* @return the column label of this Field's data type
*/
public String getColumnLabel() {
return columnLabel;
}
/**
* Returns the length of this Field's data type.
* @return the length of this Field's data type
*/
public int getLength() {
return length;
}
/**
* Returns the format of this Field's data (text=0, binary=1).
* @return the format of this Field's data (text=0, binary=1)
*/
public int getFormat() {
return format;
}
/**
* Sets the format of this Field's data (text=0, binary=1).
* @param format the format of this Field's data (text=0, binary=1)
*/
public void setFormat(int format) {
this.format = format;
}
/**
* Returns the columns' table oid, zero if no oid available.
* @return the columns' table oid, zero if no oid available
*/
public int getTableOid() {
return tableOid;
}
public int getPositionInTable() {
return positionInTable;
}
public /* @Nullable */ FieldMetadata getMetadata() {
return metadata;
}
public void setMetadata(FieldMetadata metadata) {
this.metadata = metadata;
}
@Override
public String toString() {
return "Field(" + (columnLabel != null ? columnLabel : "")
+ "," + Oid.toString(oid)
+ "," + length
+ "," + (format == TEXT_FORMAT ? 'T' : 'B')
+ ")";
}
public void setSQLType(int sqlType) {
this.sqlType = sqlType;
}
public int getSQLType() {
return sqlType;
}
public void setPGType(String pgType) {
this.pgType = pgType;
}
public String getPGType() {
return pgType;
}
@SuppressWarnings("ReferenceEquality")
public boolean isTypeInitialized() {
//noinspection StringEquality
return pgType != NOT_YET_LOADED;
}
public void upperCaseLabel() {
columnLabel = columnLabel.toUpperCase(Locale.ROOT);
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/FixedLengthOutputStream.java 0100664 0000000 0000000 00000002400 00000250600 030224 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2020, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.core;
import java.io.IOException;
import java.io.OutputStream;
/**
* A stream that refuses to write more than a maximum number of bytes.
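 *
 * <p>Minimal usage sketch (the limit and byte values below are arbitrary):</p>
 * <pre>{@code
 * OutputStream target = new java.io.ByteArrayOutputStream();
 * FixedLengthOutputStream out = new FixedLengthOutputStream(5, target);
 * out.write(new byte[]{1, 2, 3});
 * int left = out.remaining(); // 2; writing more than 2 further bytes throws IOException
 * }</pre>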
*/
public class FixedLengthOutputStream extends OutputStream {
private final int size;
private final OutputStream target;
private int written;
public FixedLengthOutputStream(int size, OutputStream target) {
this.size = size;
this.target = target;
}
@Override
public void write(int b) throws IOException {
verifyAllowed(1);
written++;
target.write(b);
}
@Override
public void write(byte[] buf, int offset, int len) throws IOException {
if ((offset < 0) || (len < 0) || ((offset + len) > buf.length)) {
throw new IndexOutOfBoundsException();
} else if (len == 0) {
return;
}
verifyAllowed(len);
target.write(buf, offset, len);
written += len;
}
public int remaining() {
return size - written;
}
private void verifyAllowed(int wanted) throws IOException {
if (remaining() < wanted) {
throw new IOException("Attempt to write more than the specified " + size + " bytes");
}
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/JavaVersion.java 0100664 0000000 0000000 00000001626 00000250600 025666 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2017, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.core;
public enum JavaVersion {
// Note: order is important,
v1_8,
other;
private static final JavaVersion RUNTIME_VERSION = from(System.getProperty("java.version"));
/**
* Returns enum value that represents current runtime. For instance, when using -jre7.jar via Java
   * 8, this would return {@code v1_8}.
*
* @return enum value that represents current runtime.
*/
public static JavaVersion getRuntimeVersion() {
return RUNTIME_VERSION;
}
/**
* Java version string like in {@code "java.version"} property.
*
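   * <p>For example:</p>
   * <pre>{@code
   * JavaVersion.from("1.8.0_292"); // JavaVersion.v1_8
   * JavaVersion.from("17.0.2");    // JavaVersion.other
   * }</pre>
   *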
* @param version string like 1.6, 1.7, etc
* @return JavaVersion enum
*/
public static JavaVersion from(String version) {
if (version.startsWith("1.8")) {
return v1_8;
}
return other;
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/JdbcCallParseInfo.java 0100664 0000000 0000000 00000001506 00000250600 026701 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2015, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.core;
/**
* Contains parse flags from {@link Parser#modifyJdbcCall(String, boolean, int, EscapeSyntaxCallMode)}.
*/
public class JdbcCallParseInfo {
private final String sql;
private final boolean isFunction;
public JdbcCallParseInfo(String sql, boolean isFunction) {
this.sql = sql;
this.isFunction = isFunction;
}
/**
   * SQL in native form for a certain backend version.
   *
   * @return SQL in native form for a certain backend version
*/
public String getSql() {
return sql;
}
/**
* Returns if given SQL is a function.
*
* @return {@code true} if given SQL is a function
*/
public boolean isFunction() {
return isFunction;
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/NativeQuery.java 0100664 0000000 0000000 00000010640 00000250600 025707 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2003, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
// Copyright (c) 2004, Open Cloud Limited.
package org.postgresql.core;
import org.postgresql.core.v3.SqlSerializationContext;
// import org.checkerframework.checker.nullness.qual.Nullable;
/**
 * Represents a query that is ready for execution by the backend. The main difference from JDBC is
 * that {@code ?} placeholders are replaced with {@code $1}, {@code $2}, etc.
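 *
 * <p>For example (table and column names are illustrative), the JDBC text
 * {@code SELECT 1 FROM t WHERE a = ? AND b = ?} corresponds to the native form
 * {@code SELECT 1 FROM t WHERE a = $1 AND b = $2}.</p>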
*/
public class NativeQuery {
private static final String[] BIND_NAMES = new String[128 * 10];
private static final int[] NO_BINDS = new int[0];
public final String nativeSql;
public final int[] bindPositions;
public final SqlCommand command;
public final boolean multiStatement;
static {
for (int i = 1; i < BIND_NAMES.length; i++) {
BIND_NAMES[i] = "$" + i;
}
}
public NativeQuery(String nativeSql, SqlCommand dml) {
this(nativeSql, NO_BINDS, true, dml);
}
public NativeQuery(String nativeSql, int /* @Nullable */ [] bindPositions, boolean multiStatement, SqlCommand dml) {
this.nativeSql = nativeSql;
this.bindPositions =
bindPositions == null || bindPositions.length == 0 ? NO_BINDS : bindPositions;
this.multiStatement = multiStatement;
this.command = dml;
}
/**
* Returns string representation of the query, substituting particular parameter values for
* parameter placeholders.
*
* @param parameters a ParameterList returned by this Query's {@link Query#createParameterList}
* method, or {@code null} to leave the parameter placeholders unsubstituted.
* @return a human-readable representation of this query
*/
public String toString(/* @Nullable */ ParameterList parameters) {
return toString(parameters, SqlSerializationContext.of(true, true));
}
/**
* Returns string representation of the query, substituting particular parameter values for
* parameter placeholders.
*
* @param parameters a ParameterList returned by this Query's {@link Query#createParameterList}
* method, or {@code null} to leave the parameter placeholders unsubstituted.
* @param context specifies configuration for converting the parameters to string
* @return a human-readable representation of this query
*/
public String toString(/* @Nullable */ ParameterList parameters, SqlSerializationContext context) {
if (bindPositions.length == 0) {
return nativeSql;
}
int queryLength = nativeSql.length();
String[] params = new String[bindPositions.length];
for (int i = 1; i <= bindPositions.length; i++) {
String param = parameters == null ? "?" : parameters.toString(i, context);
params[i - 1] = param;
queryLength += param.length() - bindName(i).length();
}
StringBuilder sbuf = new StringBuilder(queryLength);
sbuf.append(nativeSql, 0, bindPositions[0]);
for (int i = 1; i <= bindPositions.length; i++) {
sbuf.append(params[i - 1]);
int nextBind = i < bindPositions.length ? bindPositions[i] : nativeSql.length();
sbuf.append(nativeSql, bindPositions[i - 1] + bindName(i).length(), nextBind);
}
return sbuf.toString();
}
/**
* Returns $1, $2, etc names of bind variables used by backend.
*
* @param index index of a bind variable
* @return bind variable name
*/
public static String bindName(int index) {
return index < BIND_NAMES.length ? BIND_NAMES[index] : "$" + index;
}
public static StringBuilder appendBindName(StringBuilder sb, int index) {
if (index < BIND_NAMES.length) {
return sb.append(bindName(index));
}
sb.append('$');
sb.append(index);
return sb;
}
/**
* Calculate the text length required for the given number of bind variables
* including dollars.
* Do this to avoid repeated calls to
* AbstractStringBuilder.expandCapacity(...) and Arrays.copyOf
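   *
   * <p>Worked example: 12 bind variables need {@code 9 * 2 + 3 * 3 = 27} characters, since
   * {@code $1..$9} take two characters each and {@code $10..$12} take three each.</p>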
*
* @param bindCount total number of parameters in a query
* @return int total character length for $xyz kind of binds
*/
public static int calculateBindLength(int bindCount) {
int res = 0;
int bindLen = 2; // $1
    int maxBindsOfLen = 9; // $1 .. $9
while (bindCount > 0) {
int numBinds = Math.min(maxBindsOfLen, bindCount);
bindCount -= numBinds;
res += bindLen * numBinds;
bindLen++;
      maxBindsOfLen *= 10; // $1..$9 (9 items) -> $10..$99 (90 items)
}
return res;
}
public SqlCommand getCommand() {
return command;
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/Notification.java 0100664 0000000 0000000 00000001577 00000250600 026072 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2003, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.core;
import org.postgresql.PGNotification;
public class Notification implements PGNotification {
private final String name;
private final String parameter;
private final int pid;
public Notification(String name, int pid) {
this(name, pid, "");
}
public Notification(String name, int pid, String parameter) {
this.name = name;
this.pid = pid;
this.parameter = parameter;
}
/*
* Returns name of this notification
*/
@Override
public String getName() {
return name;
}
/*
* Returns the process id of the backend process making this notification
*/
@Override
public int getPID() {
return pid;
}
@Override
public String getParameter() {
return parameter;
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/Oid.java 0100664 0000000 0000000 00000012137 00000250600 024151 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2004, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.core;
import org.postgresql.util.GT;
import org.postgresql.util.PSQLException;
import org.postgresql.util.PSQLState;
import java.lang.reflect.Field;
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;
/**
* Provides constants for well-known backend OIDs for the types we commonly use.
*/
public class Oid {
public static final int UNSPECIFIED = 0;
public static final int INT2 = 21;
public static final int INT2_ARRAY = 1005;
public static final int INT4 = 23;
public static final int INT4_ARRAY = 1007;
public static final int INT8 = 20;
public static final int INT8_ARRAY = 1016;
public static final int TEXT = 25;
public static final int TEXT_ARRAY = 1009;
public static final int NUMERIC = 1700;
public static final int NUMERIC_ARRAY = 1231;
public static final int FLOAT4 = 700;
public static final int FLOAT4_ARRAY = 1021;
public static final int FLOAT8 = 701;
public static final int FLOAT8_ARRAY = 1022;
public static final int BOOL = 16;
public static final int BOOL_ARRAY = 1000;
public static final int DATE = 1082;
public static final int DATE_ARRAY = 1182;
public static final int TIME = 1083;
public static final int TIME_ARRAY = 1183;
public static final int TIMETZ = 1266;
public static final int TIMETZ_ARRAY = 1270;
public static final int TIMESTAMP = 1114;
public static final int TIMESTAMP_ARRAY = 1115;
public static final int TIMESTAMPTZ = 1184;
public static final int TIMESTAMPTZ_ARRAY = 1185;
public static final int BYTEA = 17;
public static final int BYTEA_ARRAY = 1001;
public static final int VARCHAR = 1043;
public static final int VARCHAR_ARRAY = 1015;
public static final int OID = 26;
public static final int OID_ARRAY = 1028;
public static final int BPCHAR = 1042;
public static final int BPCHAR_ARRAY = 1014;
public static final int MONEY = 790;
public static final int MONEY_ARRAY = 791;
public static final int NAME = 19;
public static final int NAME_ARRAY = 1003;
public static final int BIT = 1560;
public static final int BIT_ARRAY = 1561;
public static final int VOID = 2278;
public static final int INTERVAL = 1186;
public static final int INTERVAL_ARRAY = 1187;
public static final int CHAR = 18; // This is not char(N), this is "char" a single byte type.
public static final int CHAR_ARRAY = 1002;
public static final int VARBIT = 1562;
public static final int VARBIT_ARRAY = 1563;
public static final int UUID = 2950;
public static final int UUID_ARRAY = 2951;
public static final int XML = 142;
public static final int XML_ARRAY = 143;
public static final int POINT = 600;
public static final int POINT_ARRAY = 1017;
public static final int BOX = 603;
public static final int BOX_ARRAY = 1020;
public static final int JSONB = 3802;
public static final int JSONB_ARRAY = 3807;
public static final int JSON = 114;
public static final int JSON_ARRAY = 199;
public static final int REF_CURSOR = 1790;
public static final int REF_CURSOR_ARRAY = 2201;
public static final int LINE = 628;
public static final int LSEG = 601;
public static final int PATH = 602;
public static final int POLYGON = 604;
public static final int CIRCLE = 718;
public static final int CIDR = 650;
public static final int INET = 869;
public static final int MACADDR = 829;
public static final int MACADDR8 = 774;
public static final int TSVECTOR = 3614;
public static final int TSQUERY = 3615;
  private static final Map<Integer, String> OID_TO_NAME = new HashMap<>(100);
  private static final Map<String, Integer> NAME_TO_OID = new HashMap<>(100);
static {
for (Field field : Oid.class.getFields()) {
try {
int oid = field.getInt(null);
String name = field.getName().toUpperCase(Locale.ROOT);
OID_TO_NAME.put(oid, name);
NAME_TO_OID.put(name, oid);
} catch (IllegalAccessException e) {
// ignore
}
}
}
/**
* Returns the name of the oid as string.
*
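   * <p>For example, {@code Oid.toString(Oid.VARCHAR)} returns {@code "VARCHAR"}, and the reverse
   * lookup {@code Oid.valueOf("varchar")} returns {@code 1043}.</p>
   *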
* @param oid The oid to convert to name.
   * @return The name of the oid, or {@code ""} if no constant for the oid value has been
   *     defined.
*/
public static String toString(int oid) {
String name = OID_TO_NAME.get(oid);
if (name == null) {
name = "";
}
return name;
}
public static int valueOf(String oid) throws PSQLException {
if (oid.length() > 0 && !Character.isDigit(oid.charAt(0))) {
Integer id = NAME_TO_OID.get(oid);
if (id == null) {
id = NAME_TO_OID.get(oid.toUpperCase(Locale.ROOT));
}
if (id != null) {
return id;
}
} else {
try {
        // OIDs are unsigned 32-bit integers, so Integer.parseInt is not enough
return (int) Long.parseLong(oid);
} catch (NumberFormatException ex) {
// Throw exception below if parsing fails
}
}
throw new PSQLException(GT.tr("oid type {0} not known and not a number", oid),
PSQLState.INVALID_PARAMETER_VALUE);
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/PGBindException.java 0100664 0000000 0000000 00000000631 00000250600 026414 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2004, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.core;
import java.io.IOException;
public class PGBindException extends IOException {
private final IOException ioe;
public PGBindException(IOException ioe) {
this.ioe = ioe;
}
public IOException getIOException() {
return ioe;
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/PGStream.java 0100664 0000000 0000000 00000065157 00000250600 025132 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2017, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.core;
import org.postgresql.gss.GSSInputStream;
import org.postgresql.gss.GSSOutputStream;
import org.postgresql.util.ByteStreamWriter;
import org.postgresql.util.GT;
import org.postgresql.util.HostSpec;
import org.postgresql.util.PGPropertyMaxResultBufferParser;
import org.postgresql.util.PSQLException;
import org.postgresql.util.PSQLState;
import org.postgresql.util.internal.PgBufferedOutputStream;
import org.postgresql.util.internal.SourceStreamIOException;
// import org.checkerframework.checker.nullness.qual.Nullable;
import org.ietf.jgss.GSSContext;
import org.ietf.jgss.GSSException;
import org.ietf.jgss.MessageProp;
import java.io.Closeable;
import java.io.EOFException;
import java.io.FilterOutputStream;
import java.io.Flushable;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.Writer;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.net.SocketException;
import java.net.SocketTimeoutException;
import java.sql.SQLException;
import javax.net.SocketFactory;
/**
* Wrapper around the raw connection to the server that implements some basic primitives
* (reading/writing formatted data, doing string encoding, etc).
*
* In general, instances of PGStream are not threadsafe; the caller must ensure that only one thread
* at a time is accessing a particular PGStream instance.
*/
public class PGStream implements Closeable, Flushable {
private final SocketFactory socketFactory;
private final HostSpec hostSpec;
private final int maxSendBufferSize;
private Socket connection;
private VisibleBufferedInputStream pgInput;
private PgBufferedOutputStream pgOutput;
private /* @Nullable */ ProtocolVersion protocolVersion;
public boolean isGssEncrypted() {
return gssEncrypted;
}
boolean gssEncrypted;
public void setSecContext(GSSContext secContext) throws GSSException {
MessageProp messageProp = new MessageProp(0, true);
pgInput = new VisibleBufferedInputStream(new GSSInputStream(pgInput, secContext, messageProp ), 8192);
// See https://www.postgresql.org/docs/current/protocol-flow.html#PROTOCOL-FLOW-GSSAPI
// Note that the server will only accept encrypted packets from the client which are less than
// 16kB; gss_wrap_size_limit() should be used by the client to determine the size of
// the unencrypted message which will fit within this limit and larger messages should be
// broken up into multiple gss_wrap() calls
// See https://github.com/postgres/postgres/blob/acecd6746cdc2df5ba8dcc2c2307c6560c7c2492/src/backend/libpq/be-secure-gssapi.c#L348
// Backend includes "int4 messageSize" into 16384 limit, so we subtract 4.
pgOutput = new GSSOutputStream(pgOutput, secContext, messageProp, 16384 - 4);
gssEncrypted = true;
}
private long nextStreamAvailableCheckTime;
// This is a workaround for SSL sockets: sslInputStream.available() might return 0
// so we perform "1ms reads" once in a while
private int minStreamAvailableCheckDelay = 1000;
private Encoding encoding;
private Writer encodingWriter;
private long maxResultBuffer = -1;
private long resultBufferByteCount;
private int maxRowSizeBytes = -1;
/**
* Constructor: Connect to the PostgreSQL back end and return a stream connection.
*
* @param socketFactory socket factory to use when creating sockets
* @param hostSpec the host and port to connect to
* @param timeout timeout in milliseconds, or 0 if no timeout set
* @throws IOException if an IOException occurs below it.
* @deprecated use {@link #PGStream(SocketFactory, org.postgresql.util.HostSpec, int, int)}
*/
@Deprecated
@SuppressWarnings({"method.invocation", "initialization.fields.uninitialized"})
public PGStream(SocketFactory socketFactory, HostSpec hostSpec, int timeout) throws IOException {
this(socketFactory, hostSpec, timeout, 8192);
}
/**
* Constructor: Connect to the PostgreSQL back end and return a stream connection.
*
* @param socketFactory socket factory to use when creating sockets
* @param hostSpec the host and port to connect to
* @param timeout timeout in milliseconds, or 0 if no timeout set
* @param maxSendBufferSize maximum amount of bytes buffered before sending to the backend
* @throws IOException if an IOException occurs below it.
*/
@SuppressWarnings({"method.invocation", "initialization.fields.uninitialized"})
public PGStream(SocketFactory socketFactory, HostSpec hostSpec, int timeout,
int maxSendBufferSize) throws IOException {
this.socketFactory = socketFactory;
this.hostSpec = hostSpec;
this.maxSendBufferSize = maxSendBufferSize;
Socket socket = createSocket(timeout);
changeSocket(socket);
setEncoding(Encoding.getJVMEncoding("UTF-8"));
}
@SuppressWarnings({"method.invocation", "initialization.fields.uninitialized"})
public PGStream(PGStream pgStream, int timeout) throws IOException {
/*
Some defaults
*/
int sendBufferSize = 1024;
int receiveBufferSize = 1024;
int soTimeout = 0;
boolean keepAlive = false;
boolean tcpNoDelay = true;
/*
Get the existing values before closing the stream
*/
try {
sendBufferSize = pgStream.getSocket().getSendBufferSize();
receiveBufferSize = pgStream.getSocket().getReceiveBufferSize();
soTimeout = pgStream.getSocket().getSoTimeout();
keepAlive = pgStream.getSocket().getKeepAlive();
tcpNoDelay = pgStream.getSocket().getTcpNoDelay();
} catch ( SocketException ex ) {
// ignore it
}
//close the existing stream
pgStream.close();
this.socketFactory = pgStream.socketFactory;
this.hostSpec = pgStream.hostSpec;
this.maxSendBufferSize = pgStream.maxSendBufferSize;
Socket socket = createSocket(timeout);
changeSocket(socket);
setEncoding(Encoding.getJVMEncoding("UTF-8"));
// set the buffer sizes and timeout
socket.setReceiveBufferSize(receiveBufferSize);
socket.setSendBufferSize(sendBufferSize);
setNetworkTimeout(soTimeout);
socket.setKeepAlive(keepAlive);
socket.setTcpNoDelay(tcpNoDelay);
}
/**
* Constructor: Connect to the PostgreSQL back end and return a stream connection.
*
* @param socketFactory socket factory
* @param hostSpec the host and port to connect to
* @throws IOException if an IOException occurs below it.
* @deprecated use {@link #PGStream(SocketFactory, org.postgresql.util.HostSpec, int, int)}
*/
@Deprecated
public PGStream(SocketFactory socketFactory, HostSpec hostSpec) throws IOException {
this(socketFactory, hostSpec, 0);
}
public HostSpec getHostSpec() {
return hostSpec;
}
public Socket getSocket() {
return connection;
}
public SocketFactory getSocketFactory() {
return socketFactory;
}
/**
* Check for pending backend messages without blocking. Might return false when there actually are
* messages waiting, depending on the characteristics of the underlying socket. This is used to
* detect asynchronous notifies from the backend, when available.
*
* @return true if there is a pending backend message
* @throws IOException if something wrong happens
*/
public boolean hasMessagePending() throws IOException {
boolean available = false;
// In certain cases, available returns 0, yet there are bytes
if (pgInput.available() > 0) {
return true;
}
long now = System.nanoTime() / 1000000;
if (now < nextStreamAvailableCheckTime && minStreamAvailableCheckDelay != 0) {
// Do not use ".peek" too often
return false;
}
int soTimeout = getNetworkTimeout();
connection.setSoTimeout(1);
try {
if (!pgInput.ensureBytes(1, false)) {
return false;
}
available = pgInput.peek() != -1;
} catch (SocketTimeoutException e) {
return false;
} finally {
connection.setSoTimeout(soTimeout);
}
/*
If none available then set the next check time
In the event that there more async bytes available we will continue to get them all
see issue 1547 https://github.com/pgjdbc/pgjdbc/issues/1547
*/
if (!available) {
nextStreamAvailableCheckTime = now + minStreamAvailableCheckDelay;
}
return available;
}
public void setMinStreamAvailableCheckDelay(int delay) {
this.minStreamAvailableCheckDelay = delay;
}
private Socket createSocket(int timeout) throws IOException {
Socket socket = null;
try {
socket = socketFactory.createSocket();
String localSocketAddress = hostSpec.getLocalSocketAddress();
if (localSocketAddress != null) {
socket.bind(new InetSocketAddress(InetAddress.getByName(localSocketAddress), 0));
}
if (!socket.isConnected()) {
// When using a SOCKS proxy, the host might not be resolvable locally,
// thus we defer resolution until the traffic reaches the proxy. If there
// is no proxy, we must resolve the host to an IP to connect the socket.
InetSocketAddress address = hostSpec.shouldResolve()
? new InetSocketAddress(hostSpec.getHost(), hostSpec.getPort())
: InetSocketAddress.createUnresolved(hostSpec.getHost(), hostSpec.getPort());
socket.connect(address, timeout);
}
return socket;
} catch ( Exception ex ) {
if (socket != null) {
try {
socket.close();
} catch ( Exception ex1 ) {
ex.addSuppressed(ex1);
}
}
throw ex;
}
}
/**
* Switch this stream to using a new socket. Any existing socket is not closed; it's
* assumed that we are changing to a new socket that delegates to the original socket (e.g. SSL).
*
* @param socket the new socket to change to
* @throws IOException if something goes wrong
*/
public void changeSocket(Socket socket) throws IOException {
assert connection != socket : "changeSocket is called with the current socket as argument."
+ " This is a no-op, however, it re-allocates buffered streams, so refrain from"
+ " excessive changeSocket calls";
this.connection = socket;
// Submitted by Jason Venner . Disable Nagle
// as we are selective about flushing output only when we
// really need to.
connection.setTcpNoDelay(true);
pgInput = new VisibleBufferedInputStream(connection.getInputStream(), 8192);
int sendBufferSize = Math.min(maxSendBufferSize, Math.max(8192, socket.getSendBufferSize()));
pgOutput = new PgBufferedOutputStream(connection.getOutputStream(), sendBufferSize);
if (encoding != null) {
setEncoding(encoding);
}
}
public Encoding getEncoding() {
return encoding;
}
/**
* Change the encoding used by this connection.
*
* @param encoding the new encoding to use
* @throws IOException if something goes wrong
*/
public void setEncoding(Encoding encoding) throws IOException {
if (this.encoding != null && this.encoding.name().equals(encoding.name())) {
return;
}
// Close down any old writer.
if (encodingWriter != null) {
encodingWriter.close();
}
this.encoding = encoding;
// Intercept flush() downcalls from the writer; our caller
// will call PGStream.flush() as needed.
OutputStream interceptor = new FilterOutputStream(pgOutput) {
@Override
public void flush() throws IOException {
}
@Override
public void close() throws IOException {
super.flush();
}
};
encodingWriter = encoding.getEncodingWriter(interceptor);
}
/**
* Get a Writer instance that encodes directly onto the underlying stream.
*
* The returned Writer should not be closed, as it's a shared object. Writer.flush needs to be
* called when switching between use of the Writer and use of the PGStream write methods, but it
* won't actually flush output all the way out -- call {@link #flush} to actually ensure all
* output has been pushed to the server.
*
* @return the shared Writer instance
* @throws IOException if something goes wrong.
*/
public Writer getEncodingWriter() throws IOException {
if (encodingWriter == null) {
throw new IOException("No encoding has been set on this connection");
}
return encodingWriter;
}
/**
* Sends a single character to the back end.
*
* @param val the character to be sent
* @throws IOException if an I/O error occurs
*/
public void sendChar(int val) throws IOException {
pgOutput.write(val);
}
/**
* Sends a 4-byte integer to the back end.
*
* @param val the integer to be sent
* @throws IOException if an I/O error occurs
*/
public void sendInteger4(int val) throws IOException {
pgOutput.writeInt4(val);
}
/**
* Sends a 2-byte integer (short) to the back end.
*
* @param val the integer to be sent
* @throws IOException if an I/O error occurs or {@code val} cannot be encoded in 2 bytes
*/
public void sendInteger2(int val) throws IOException {
if (val < 0 || val > 65535) {
throw new IllegalArgumentException("Tried to send an out-of-range integer as a 2-byte unsigned int value: " + val);
}
pgOutput.writeInt2(val);
}
/**
* Send an array of bytes to the backend.
*
* @param buf The array of bytes to be sent
* @throws IOException if an I/O error occurs
*/
public void send(byte[] buf) throws IOException {
pgOutput.write(buf);
}
/**
* Send a fixed-size array of bytes to the backend. If {@code buf.length < siz}, pad with zeros.
* If {@code buf.length > siz}, truncate the array.
*
* @param buf the array of bytes to be sent
* @param siz the number of bytes to be sent
* @throws IOException if an I/O error occurs
*/
public void send(byte[] buf, int siz) throws IOException {
send(buf, 0, siz);
}
/**
* Send a fixed-size array of bytes to the backend. If {@code length < siz}, pad with zeros. If
* {@code length > siz}, truncate the array.
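   *
   * <p>For example (byte values are arbitrary), {@code send(new byte[]{1, 2}, 0, 4)} transmits
   * the bytes {@code 1, 2, 0, 0}.</p>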
*
* @param buf the array of bytes to be sent
* @param off offset in the array to start sending from
* @param siz the number of bytes to be sent
* @throws IOException if an I/O error occurs
*/
public void send(byte[] buf, int off, int siz) throws IOException {
int bufamt = buf.length - off;
pgOutput.write(buf, off, Math.min(bufamt, siz));
if (siz > bufamt) {
pgOutput.writeZeros(siz - bufamt);
}
}
/**
   * Send the data produced by the given {@link ByteStreamWriter} to the backend. The writer may
   * produce at most {@link ByteStreamWriter#getLength()} bytes; if it produces fewer, the
   * remainder is padded with zeros.
*
* @param writer the stream writer to invoke to send the bytes
* @throws IOException if an I/O error occurs
*/
public void send(ByteStreamWriter writer) throws IOException {
final FixedLengthOutputStream fixedLengthStream = new FixedLengthOutputStream(writer.getLength(), pgOutput);
try {
writer.writeTo(new ByteStreamWriter.ByteStreamTarget() {
@Override
public OutputStream getOutputStream() {
return fixedLengthStream;
}
});
} catch (IOException ioe) {
throw ioe;
} catch (Exception re) {
throw new IOException("Error writing bytes to stream", re);
}
pgOutput.writeZeros(fixedLengthStream.remaining());
}
/**
* Receives a single character from the backend, without advancing the current protocol stream
* position.
*
* @return the character received
* @throws IOException if an I/O Error occurs
*/
public int peekChar() throws IOException {
int c = pgInput.peek();
if (c < 0) {
throw new EOFException();
}
return c;
}
/**
* Receives a single character from the backend.
*
* @return the character received
* @throws IOException if an I/O Error occurs
*/
public int receiveChar() throws IOException {
int c = pgInput.read();
if (c < 0) {
throw new EOFException();
}
return c;
}
/**
* Receives a four byte integer from the backend.
*
* @return the integer received from the backend
* @throws IOException if an I/O error occurs
*/
public int receiveInteger4() throws IOException {
return pgInput.readInt4();
}
/**
* Receives a two byte integer from the backend as an unsigned integer (0..65535).
*
* @return the integer received from the backend
* @throws IOException if an I/O error occurs
*/
public int receiveInteger2() throws IOException {
return pgInput.readInt2();
}
/**
* Receives a fixed-size string from the backend.
*
* @param len the length of the string to receive, in bytes.
* @return the decoded string
* @throws IOException if something wrong happens
*/
public String receiveString(int len) throws IOException {
if (!pgInput.ensureBytes(len)) {
throw new EOFException();
}
String res = encoding.decode(pgInput.getBuffer(), pgInput.getIndex(), len);
pgInput.skip(len);
return res;
}
/**
* Receives a fixed-size string from the backend, and tries to avoid "UTF-8 decode failed"
* errors.
*
* @param len the length of the string to receive, in bytes.
* @return the decoded string
* @throws IOException if something wrong happens
*/
public EncodingPredictor.DecodeResult receiveErrorString(int len) throws IOException {
if (!pgInput.ensureBytes(len)) {
throw new EOFException();
}
EncodingPredictor.DecodeResult res;
try {
String value = encoding.decode(pgInput.getBuffer(), pgInput.getIndex(), len);
// no autodetect warning as the message was converted on its own
res = new EncodingPredictor.DecodeResult(value, null);
} catch (IOException e) {
res = EncodingPredictor.decode(pgInput.getBuffer(), pgInput.getIndex(), len);
if (res == null) {
Encoding enc = Encoding.defaultEncoding();
String value = enc.decode(pgInput.getBuffer(), pgInput.getIndex(), len);
res = new EncodingPredictor.DecodeResult(value, enc.name());
}
}
pgInput.skip(len);
return res;
}
/**
* Receives a null-terminated string from the backend. If we don't see a null, then we assume
* something has gone wrong.
*
* @return string from back end
* @throws IOException if an I/O error occurs, or end of file
*/
public String receiveString() throws IOException {
int len = pgInput.scanCStringLength();
String res = encoding.decode(pgInput.getBuffer(), pgInput.getIndex(), len - 1);
pgInput.skip(len);
return res;
}
/**
* Receives a null-terminated string from the backend and attempts to decode to a
* {@link Encoding#decodeCanonicalized(byte[], int, int) canonical} {@code String}.
* If we don't see a null, then we assume something has gone wrong.
*
* @return string from back end
* @throws IOException if an I/O error occurs, or end of file
* @see Encoding#decodeCanonicalized(byte[], int, int)
*/
public String receiveCanonicalString() throws IOException {
int len = pgInput.scanCStringLength();
String res = encoding.decodeCanonicalized(pgInput.getBuffer(), pgInput.getIndex(), len - 1);
pgInput.skip(len);
return res;
}
/**
* Receives a null-terminated string from the backend and attempts to decode to a
* {@link Encoding#decodeCanonicalizedIfPresent(byte[], int, int) canonical} {@code String}.
* If we don't see a null, then we assume something has gone wrong.
*
* @return string from back end
* @throws IOException if an I/O error occurs, or end of file
* @see Encoding#decodeCanonicalizedIfPresent(byte[], int, int)
*/
public String receiveCanonicalStringIfPresent() throws IOException {
int len = pgInput.scanCStringLength();
String res = encoding.decodeCanonicalizedIfPresent(pgInput.getBuffer(), pgInput.getIndex(), len - 1);
pgInput.skip(len);
return res;
}
/**
* Read a tuple from the back end. A tuple is a two dimensional array of bytes. This variant reads
* the V3 protocol's tuple representation.
*
* @return tuple from the back end
* @throws IOException if a data I/O error occurs
   * @throws SQLException if more bytes are read than the configured maxResultBuffer allows
*/
public Tuple receiveTupleV3() throws IOException, OutOfMemoryError, SQLException {
int messageSize = receiveInteger4(); // MESSAGE SIZE
int nf = receiveInteger2();
//size = messageSize - 4 bytes of message size - 2 bytes of field count - 4 bytes for each column length
int dataToReadSize = messageSize - 4 - 2 - 4 * nf;
setMaxRowSizeBytes(dataToReadSize);
byte[][] answer = new byte[nf][];
increaseByteCounter(dataToReadSize);
OutOfMemoryError oom = null;
for (int i = 0; i < nf; i++) {
int size = receiveInteger4();
if (size != -1) {
try {
answer[i] = new byte[size];
receive(answer[i], 0, size);
} catch (OutOfMemoryError oome) {
oom = oome;
skip(size);
}
}
}
if (oom != null) {
throw oom;
}
return new Tuple(answer);
}
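  // Illustrative layout of the DataRow payload consumed above (the 'D' message-type byte has
  // already been read by the caller; all integers are big-endian):
  //
  //   int32   messageSize   total message length, including these four bytes
  //   int16   nf            number of columns in the row
  //   nf times:
  //     int32 size          length of the column value in bytes, or -1 for SQL NULL
  //     byte[size] value    raw column bytes (absent when size == -1)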
/**
* Reads in a given number of bytes from the backend.
*
* @param siz number of bytes to read
* @return array of bytes received
* @throws IOException if a data I/O error occurs
*/
public byte[] receive(int siz) throws IOException {
byte[] answer = new byte[siz];
receive(answer, 0, siz);
return answer;
}
/**
* Reads in a given number of bytes from the backend.
*
* @param buf buffer to store result
* @param off offset in buffer
* @param siz number of bytes to read
* @throws IOException if a data I/O error occurs
*/
public void receive(byte[] buf, int off, int siz) throws IOException {
int s = 0;
while (s < siz) {
int w = pgInput.read(buf, off + s, siz - s);
if (w < 0) {
throw new EOFException();
}
s += w;
}
}
public void skip(int size) throws IOException {
long s = 0;
while (s < size) {
s += pgInput.skip(size - s);
}
}
/**
* Copy data from an input stream to the connection.
*
* @param inStream the stream to read data from
* @param remaining the number of bytes to copy
   * @throws IOException if an error occurs when writing the data to the output stream
   * @throws SourceStreamIOException if an error occurs when reading the data from the input stream
*/
public void sendStream(InputStream inStream, int remaining) throws IOException {
pgOutput.write(inStream, remaining);
}
/**
   * Writes the given number of zero bytes to the output stream.
   *
   * @param length the number of zeros to write
   * @throws IOException in case writing to the output stream fails
*/
public void sendZeros(int length) throws IOException {
pgOutput.writeZeros(length);
}
/**
* Flush any pending output to the backend.
*
* @throws IOException if an I/O error occurs
*/
@Override
public void flush() throws IOException {
if (encodingWriter != null) {
encodingWriter.flush();
}
pgOutput.flush();
}
/**
* Consume an expected EOF from the backend.
*
* @throws IOException if an I/O error occurs
* @throws SQLException if we get something other than an EOF
*/
public void receiveEOF() throws SQLException, IOException {
int c = pgInput.read();
if (c < 0) {
return;
}
throw new PSQLException(GT.tr("Expected an EOF from server, got: {0}", c),
PSQLState.COMMUNICATION_ERROR);
}
/**
* Closes the connection.
*
* @throws IOException if an I/O Error occurs
*/
@Override
public void close() throws IOException {
if (encodingWriter != null) {
encodingWriter.close();
}
pgOutput.close();
pgInput.close();
connection.close();
}
public void setNetworkTimeout(int milliseconds) throws IOException {
connection.setSoTimeout(milliseconds);
pgInput.setTimeoutRequested(milliseconds != 0);
}
public int getNetworkTimeout() throws IOException {
return connection.getSoTimeout();
}
/**
* Method to set MaxResultBuffer inside PGStream.
*
   * @param value the new max result buffer size, given as a string (the value may contain a
   *     percent sign or a size-multiplier suffix)
   * @throws PSQLException if the value cannot be parsed
*/
public void setMaxResultBuffer(/* @Nullable */ String value) throws PSQLException {
maxResultBuffer = PGPropertyMaxResultBufferParser.parseProperty(value);
}
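  // Illustrative values (a sketch; the exact accepted syntax is defined by
  // PGPropertyMaxResultBufferParser):
  //   stream.setMaxResultBuffer("1000000");   // plain byte count
  //   stream.setMaxResultBuffer("100M");      // size with a multiplier suffix
  //   stream.setMaxResultBuffer("10percent"); // percentage of the maximum heap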
/**
* Get MaxResultBuffer from PGStream.
*
* @return size of MaxResultBuffer
*/
public long getMaxResultBuffer() {
return maxResultBuffer;
}
/**
   * Keeps track in {@code maxRowSizeBytes} of the size of the largest data row read so far. Since
   * many data rows may be sent one after another for a query, the stored value only reflects the
   * rows read up to this point; later rows have not been seen yet. The value only ever increases,
   * because the size of the largest data row is used when computing a new adaptive fetch size for
   * the query.
*
* @param rowSizeBytes new value to be set as maxRowSizeBytes
*/
public void setMaxRowSizeBytes(int rowSizeBytes) {
if (rowSizeBytes > maxRowSizeBytes) {
maxRowSizeBytes = rowSizeBytes;
}
}
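  // For example: after rows of 10, 250 and 40 bytes, maxRowSizeBytes is 250 -- only a larger
  // value replaces the stored maximum, until clearMaxRowSizeBytes() resets it.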
/**
* Get actual max row size noticed so far.
*
* @return value of max row size
*/
public int getMaxRowSizeBytes() {
return maxRowSizeBytes;
}
/**
* Clear value of max row size noticed so far.
*/
public void clearMaxRowSizeBytes() {
maxRowSizeBytes = -1;
}
/**
* Clear count of byte buffer.
*/
public void clearResultBufferCount() {
resultBufferByteCount = 0;
}
public /* @Nullable */ ProtocolVersion getProtocolVersion() {
return protocolVersion;
}
public void setProtocolVersion(ProtocolVersion protocolVersion) {
this.protocolVersion = protocolVersion;
}
/**
   * Increases the running count of bytes buffered for the result. If the count exceeds the max
   * result buffer limit, an exception is thrown.
   *
   * @param value number of bytes to add to the counter.
   * @throws SQLException if the result buffer byte count exceeds the max result buffer limit.
*/
private void increaseByteCounter(long value) throws SQLException {
if (maxResultBuffer != -1) {
resultBufferByteCount += value;
if (resultBufferByteCount > maxResultBuffer) {
throw new PSQLException(GT.tr(
"Result set exceeded maxResultBuffer limit. Received: {0}; Current limit: {1}",
String.valueOf(resultBufferByteCount), String.valueOf(maxResultBuffer)), PSQLState.COMMUNICATION_ERROR);
}
}
}
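  // Worked example (illustrative): with maxResultBuffer = 1000, rows of 600 and then 500 bytes
  // raise resultBufferByteCount to 600 and then 1100; the second call exceeds the limit and the
  // PSQLException above is thrown.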
public boolean isClosed() {
return connection.isClosed();
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/ParameterList.java 0100664 0000000 0000000 00000022226 00000250600 026212 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2004, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
// Copyright (c) 2004, Open Cloud Limited.
package org.postgresql.core;
import org.postgresql.core.v3.SqlSerializationContext;
import org.postgresql.util.ByteStreamWriter;
// import org.checkerframework.checker.index.qual.NonNegative;
// import org.checkerframework.checker.index.qual.Positive;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.io.InputStream;
import java.sql.SQLException;
/**
 * Abstraction of a list of parameters to be substituted into a Query. The protocol-specific
 * details of how to efficiently store and stream the parameters are hidden behind implementations
 * of this interface.
*
* In general, instances of ParameterList are associated with a particular Query object (the one
* that created them) and shouldn't be used against another Query.
*
* Parameter indexes are 1-based to match JDBC's PreparedStatement, i.e. the first parameter has
* index 1.
*
* @author Oliver Jowett (oliver@opencloud.com)
*/
public interface ParameterList {
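  // A minimal usage sketch (illustrative, not driver-internal code). It assumes the owning
  // Query exposes a createParameterList() factory and that org.postgresql.core.Oid constants
  // are used for type OIDs:
  //
  //   ParameterList params = query.createParameterList();
  //   params.setIntParameter(1, 42);                      // parameter indexes are 1-based
  //   params.setStringParameter(2, "hello", Oid.VARCHAR);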
void registerOutParameter(/* @Positive */ int index, int sqlType) throws SQLException;
/**
* Get the number of parameters in this list. This value never changes for a particular instance,
* and might be zero.
*
* @return the number of parameters in this list.
*/
/* @NonNegative */ int getParameterCount();
/**
* Get the number of IN parameters in this list.
*
* @return the number of IN parameters in this list
*/
/* @NonNegative */ int getInParameterCount();
/**
* Get the number of OUT parameters in this list.
*
* @return the number of OUT parameters in this list
*/
/* @NonNegative */ int getOutParameterCount();
/**
* Return the oids of the parameters in this list. May be null for a ParameterList that does not
* support typing of parameters.
*
* @return oids of the parameters
*/
int[] getTypeOIDs();
/**
* Binds an integer value to a parameter. The type of the parameter is implicitly 'int4'.
*
* @param index the 1-based parameter index to bind.
* @param value the integer value to use.
   * @throws SQLException on error or if {@code index} is out of range
*/
void setIntParameter(/* @Positive */ int index, int value) throws SQLException;
/**
* Binds a String value that is an unquoted literal to the server's query parser (for example, a
* bare integer) to a parameter. Associated with the parameter is a typename for the parameter
* that should correspond to an entry in pg_types.
*
* @param index the 1-based parameter index to bind.
* @param value the unquoted literal string to use.
   * @param oid the type OID of the parameter, or {@code 0} to infer the type.
   * @throws SQLException on error or if {@code index} is out of range
*/
void setLiteralParameter(/* @Positive */ int index,
String value, int oid) throws SQLException;
/**
* Binds a String value that needs to be quoted for the server's parser to understand (for
* example, a timestamp) to a parameter. Associated with the parameter is a typename for the
* parameter that should correspond to an entry in pg_types.
*
* @param index the 1-based parameter index to bind.
* @param value the quoted string to use.
   * @param oid the type OID of the parameter, or {@code 0} to infer the type.
   * @throws SQLException on error or if {@code index} is out of range
*/
void setStringParameter(/* @Positive */ int index, String value, int oid) throws SQLException;
/**
* Binds a binary bytea value stored as a bytearray to a parameter. The parameter's type is
   * implicitly set to 'bytea'. The array's contents should remain unchanged until query
   * execution has completed.
*
* @param index the 1-based parameter index to bind.
* @param data an array containing the raw data value
   * @param offset the offset within {@code data} of the start of the parameter data.
   * @param length the number of bytes of parameter data within {@code data} to use.
   * @throws SQLException on error or if {@code index} is out of range
*/
void setBytea(/* @Positive */ int index, byte[] data,
/* @NonNegative */ int offset, /* @NonNegative */ int length) throws SQLException;
/**
* Binds a binary bytea value stored as an InputStream. The parameter's type is implicitly set to
* 'bytea'. The stream should remain valid until query execution has completed.
*
* @param index the 1-based parameter index to bind.
* @param stream a stream containing the parameter data.
   * @param length the number of bytes of parameter data to read from {@code stream}.
   * @throws SQLException on error or if {@code index} is out of range
*/
void setBytea(/* @Positive */ int index, InputStream stream, /* @NonNegative */ int length) throws SQLException;
/**
* Binds a binary bytea value stored as an InputStream. The parameter's type is implicitly set to
* 'bytea'. The stream should remain valid until query execution has completed.
*
* @param index the 1-based parameter index to bind.
* @param stream a stream containing the parameter data.
   * @throws SQLException on error or if {@code index} is out of range
*/
void setBytea(/* @Positive */ int index, InputStream stream) throws SQLException;
/**
* Binds a binary bytea value stored as a ByteStreamWriter. The parameter's type is implicitly set to
* 'bytea'. The stream should remain valid until query execution has completed.
*
* @param index the 1-based parameter index to bind.
* @param writer a writer that can write the bytes for the parameter
   * @throws SQLException on error or if {@code index} is out of range
*/
void setBytea(/* @Positive */ int index, ByteStreamWriter writer) throws SQLException;
/**
* Binds a text value stored as an InputStream that is a valid UTF-8 byte stream.
* Any byte-order marks (BOM) in the stream are passed to the backend.
* The parameter's type is implicitly set to 'text'.
* The stream should remain valid until query execution has completed.
*
* @param index the 1-based parameter index to bind.
* @param stream a stream containing the parameter data.
   * @throws SQLException on error or if {@code index} is out of range
*/
void setText(/* @Positive */ int index, InputStream stream) throws SQLException;
/**
* Binds given byte[] value to a parameter. The bytes must already be in correct format matching
* the OID.
*
* @param index the 1-based parameter index to bind.
* @param value the bytes to send.
* @param oid the type OID of the parameter.
   * @throws SQLException on error or if {@code index} is out of range
*/
void setBinaryParameter(/* @Positive */ int index, byte[] value, int oid) throws SQLException;
/**
* Binds a SQL NULL value to a parameter. Associated with the parameter is a typename for the
* parameter that should correspond to an entry in pg_types.
*
* @param index the 1-based parameter index to bind.
   * @param oid the type OID of the parameter, or {@code 0} to infer the type.
   * @throws SQLException on error or if {@code index} is out of range
*/
void setNull(/* @Positive */ int index, int oid) throws SQLException;
/**
* Perform a shallow copy of this ParameterList, returning a new instance (still suitable for
* passing to the owning Query). If this ParameterList is immutable, copy() may return the same
* immutable object.
*
* @return a new ParameterList instance
*/
ParameterList copy();
/**
* Unbind all parameter values bound in this list.
*/
void clear();
/**
* Return a human-readable representation of a particular parameter in this ParameterList. If the
* parameter is not bound or is of type bytea sourced from an InputStream, returns "?".
   * This method will NOT consume InputStreams; "?" is returned for them instead.
*
* @param index the 1-based parameter index to bind.
   * @param standardConformingStrings true if \ is not an escape character in string literals
* @return a string representation of the parameter.
*/
String toString(/* @Positive */ int index, boolean standardConformingStrings);
/**
* Return the string literal representation of a particular parameter in this ParameterList. If the
* parameter is not bound, returns "?".
* This method will consume all InputStreams to produce the result.
*
* @param index the 1-based parameter index to bind.
* @param context specifies configuration for converting the parameters to string
* @return a string representation of the parameter.
*/
String toString(/* @Positive */ int index, SqlSerializationContext context);
/**
   * Use this operation to append the parameters of another list to the current list.
   * @param list the parameter list to append.
   * @throws SQLException if the driver or the backend throws an exception
   */
  void appendAll(ParameterList list) throws SQLException;
/**
* Returns the bound parameter values.
* @return Object array containing the parameter values.
*/
/* @Nullable */ Object /* @Nullable */ [] getValues();
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/Parser.java 0100664 0000000 0000000 00000156114 00000250600 024676 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2006, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.core;
import org.postgresql.jdbc.EscapeSyntaxCallMode;
import org.postgresql.jdbc.EscapedFunctions2;
import org.postgresql.util.GT;
import org.postgresql.util.IntList;
import org.postgresql.util.PSQLException;
import org.postgresql.util.PSQLState;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
/**
* Basic query parser infrastructure.
* Note: This class should not be considered as pgjdbc public API.
*
* @author Michael Paesold (mpaesold@gmx.at)
* @author Christopher Deckers (chrriis@gmail.com)
*/
public class Parser {
/**
* Parses JDBC query into PostgreSQL's native format. Several queries might be given if separated
* by semicolon.
*
* @param query jdbc query to parse
* @param standardConformingStrings whether to allow backslashes to be used as escape characters
* in single quote literals
* @param withParameters whether to replace ?, ? with $1, $2, etc
* @param splitStatements whether to split statements by semicolon
* @param isBatchedReWriteConfigured whether re-write optimization is enabled
* @param quoteReturningIdentifiers whether to quote identifiers returned using returning clause
* @param returningColumnNames for simple insert, update, delete add returning with given column names
* @return list of native queries
* @throws SQLException if unable to add returning clause (invalid column names)
*/
  public static List<NativeQuery> parseJdbcSql(String query, boolean standardConformingStrings,
boolean withParameters, boolean splitStatements,
boolean isBatchedReWriteConfigured,
boolean quoteReturningIdentifiers,
String... returningColumnNames) throws SQLException {
if (!withParameters && !splitStatements
&& returningColumnNames != null && returningColumnNames.length == 0) {
return Collections.singletonList(new NativeQuery(query,
SqlCommand.createStatementTypeInfo(SqlCommandType.BLANK)));
}
int fragmentStart = 0;
int inParen = 0;
char[] aChars = query.toCharArray();
StringBuilder nativeSql = new StringBuilder(query.length() + 10);
IntList bindPositions = null; // initialized on demand
    List<NativeQuery> nativeQueries = null;
boolean isCurrentReWriteCompatible = false;
boolean isValuesFound = false;
int valuesParenthesisOpenPosition = -1;
int valuesParenthesisClosePosition = -1;
boolean valuesParenthesisCloseFound = false;
boolean isInsertPresent = false;
boolean isReturningPresent = false;
boolean isReturningPresentPrev = false;
boolean isBeginPresent = false;
boolean isBeginAtomicPresent = false;
SqlCommandType currentCommandType = SqlCommandType.BLANK;
SqlCommandType prevCommandType = SqlCommandType.BLANK;
int numberOfStatements = 0;
boolean whitespaceOnly = true;
int keyWordCount = 0;
int keywordStart = -1;
int keywordEnd = -1;
/*
loop through looking for keywords, single quotes, double quotes, comments, dollar quotes,
parenthesis, ? and ;
for single/double/dollar quotes, and comments we just want to move the index
*/
for (int i = 0; i < aChars.length; i++) {
char aChar = aChars[i];
boolean isKeyWordChar = false;
// ';' is ignored as it splits the queries. We do have to deal with ; in BEGIN ATOMIC functions
whitespaceOnly &= aChar == ';' || Character.isWhitespace(aChar);
keywordEnd = i; // parseSingleQuotes, parseDoubleQuotes, etc move index so we keep old value
switch (aChar) {
case '\'': // single-quotes
i = Parser.parseSingleQuotes(aChars, i, standardConformingStrings);
break;
case '"': // double-quotes
i = Parser.parseDoubleQuotes(aChars, i);
break;
case '-': // possibly -- style comment
i = Parser.parseLineComment(aChars, i);
break;
case '/': // possibly /* */ style comment
i = Parser.parseBlockComment(aChars, i);
break;
case '$': // possibly dollar quote start
i = Parser.parseDollarQuotes(aChars, i);
break;
// case '(' moved below to parse "values(" properly
case ')':
inParen--;
if (inParen == 0 && isValuesFound && !valuesParenthesisCloseFound) {
// If original statement is multi-values like VALUES (...), (...), ... then
// search for the latest closing paren
valuesParenthesisClosePosition = nativeSql.length() + i - fragmentStart;
}
break;
case '?':
nativeSql.append(aChars, fragmentStart, i - fragmentStart);
if (i + 1 < aChars.length && aChars[i + 1] == '?') /* replace ?? with ? */ {
nativeSql.append('?');
i++; // make sure the coming ? is not treated as a bind
} else {
if (!withParameters) {
nativeSql.append('?');
} else {
if (bindPositions == null) {
bindPositions = new IntList();
}
bindPositions.add(nativeSql.length());
int bindIndex = bindPositions.size();
nativeSql.append(NativeQuery.bindName(bindIndex));
}
}
fragmentStart = i + 1;
break;
case ';':
// we don't split the queries if BEGIN ATOMIC is present
if (!isBeginAtomicPresent && inParen == 0) {
if (!whitespaceOnly) {
numberOfStatements++;
nativeSql.append(aChars, fragmentStart, i - fragmentStart);
whitespaceOnly = true;
}
fragmentStart = i + 1;
if (nativeSql.length() > 0) {
if (addReturning(nativeSql, currentCommandType, returningColumnNames, isReturningPresent, quoteReturningIdentifiers)) {
isReturningPresent = true;
}
if (splitStatements) {
if (nativeQueries == null) {
nativeQueries = new ArrayList<>();
}
if (!isValuesFound || !isCurrentReWriteCompatible || valuesParenthesisClosePosition == -1
|| (bindPositions != null
&& valuesParenthesisClosePosition < bindPositions.get(bindPositions.size() - 1))) {
valuesParenthesisOpenPosition = -1;
valuesParenthesisClosePosition = -1;
}
nativeQueries.add(new NativeQuery(nativeSql.toString(),
toIntArray(bindPositions), false,
SqlCommand.createStatementTypeInfo(
currentCommandType, isBatchedReWriteConfigured, valuesParenthesisOpenPosition,
valuesParenthesisClosePosition,
isReturningPresent, nativeQueries.size())));
}
}
prevCommandType = currentCommandType;
isReturningPresentPrev = isReturningPresent;
currentCommandType = SqlCommandType.BLANK;
isReturningPresent = false;
if (splitStatements) {
// Prepare for next query
if (bindPositions != null) {
bindPositions.clear();
}
nativeSql.setLength(0);
isValuesFound = false;
isCurrentReWriteCompatible = false;
valuesParenthesisOpenPosition = -1;
valuesParenthesisClosePosition = -1;
valuesParenthesisCloseFound = false;
}
}
break;
default:
if (keywordStart >= 0) {
// When we are inside a keyword, we need to detect keyword end boundary
// Note that isKeyWordChar is initialized to false before the switch, so
// all other characters would result in isKeyWordChar=false
isKeyWordChar = isIdentifierContChar(aChar);
break;
}
// Not in keyword, so just detect next keyword start
isKeyWordChar = isIdentifierStartChar(aChar);
if (isKeyWordChar) {
keywordStart = i;
if (valuesParenthesisOpenPosition != -1 && inParen == 0) {
// When the statement already has multi-values, stop looking for more of them
// Since values(?,?),(?,?),... should not contain keywords in the middle
valuesParenthesisCloseFound = true;
}
}
break;
}
if (keywordStart >= 0 && (i == aChars.length - 1 || !isKeyWordChar)) {
int wordLength = (isKeyWordChar ? i + 1 : keywordEnd) - keywordStart;
if (currentCommandType == SqlCommandType.BLANK) {
if (wordLength == 6 && parseCreateKeyword(aChars, keywordStart)) {
currentCommandType = SqlCommandType.CREATE;
} else if (wordLength == 5 && parseAlterKeyword(aChars, keywordStart)) {
currentCommandType = SqlCommandType.ALTER;
} else if (wordLength == 6 && parseUpdateKeyword(aChars, keywordStart)) {
currentCommandType = SqlCommandType.UPDATE;
} else if (wordLength == 6 && parseDeleteKeyword(aChars, keywordStart)) {
currentCommandType = SqlCommandType.DELETE;
} else if (wordLength == 4 && parseMoveKeyword(aChars, keywordStart)) {
currentCommandType = SqlCommandType.MOVE;
} else if (wordLength == 6 && parseSelectKeyword(aChars, keywordStart)) {
currentCommandType = SqlCommandType.SELECT;
} else if (wordLength == 4 && parseWithKeyword(aChars, keywordStart)) {
currentCommandType = SqlCommandType.WITH;
} else if (wordLength == 6 && parseInsertKeyword(aChars, keywordStart)) {
if (!isInsertPresent && (nativeQueries == null || nativeQueries.isEmpty())) {
// Only allow rewrite for insert command starting with the insert keyword.
// Else, too many risks of wrong interpretation.
isCurrentReWriteCompatible = keyWordCount == 0;
isInsertPresent = true;
currentCommandType = SqlCommandType.INSERT;
} else {
isCurrentReWriteCompatible = false;
}
}
} else if (currentCommandType == SqlCommandType.WITH
&& inParen == 0) {
SqlCommandType command = parseWithCommandType(aChars, i, keywordStart, wordLength);
if (command != null) {
currentCommandType = command;
}
} else if (currentCommandType == SqlCommandType.CREATE) {
/*
We are looking for BEGIN ATOMIC
*/
if (wordLength == 5 && parseBeginKeyword(aChars, keywordStart)) {
isBeginPresent = true;
} else {
// found begin, now look for atomic
if (isBeginPresent) {
if (wordLength == 6 && parseAtomicKeyword(aChars, keywordStart)) {
isBeginAtomicPresent = true;
}
// either way we reset beginFound
isBeginPresent = false;
}
}
}
if (inParen != 0 || aChar == ')') {
// RETURNING and VALUES cannot be present in parentheses
} else if (wordLength == 9 && parseReturningKeyword(aChars, keywordStart)) {
isReturningPresent = true;
} else if (wordLength == 6 && parseValuesKeyword(aChars, keywordStart)) {
isValuesFound = true;
}
keywordStart = -1;
keyWordCount++;
}
if (aChar == '(') {
inParen++;
if (inParen == 1 && isValuesFound && valuesParenthesisOpenPosition == -1) {
valuesParenthesisOpenPosition = nativeSql.length() + i - fragmentStart;
}
}
}
if (!isValuesFound || !isCurrentReWriteCompatible || valuesParenthesisClosePosition == -1
|| (bindPositions != null
&& valuesParenthesisClosePosition < bindPositions.get(bindPositions.size() - 1))) {
valuesParenthesisOpenPosition = -1;
valuesParenthesisClosePosition = -1;
}
if (fragmentStart < aChars.length && !whitespaceOnly) {
nativeSql.append(aChars, fragmentStart, aChars.length - fragmentStart);
} else {
if (numberOfStatements > 1) {
isReturningPresent = false;
currentCommandType = SqlCommandType.BLANK;
} else if (numberOfStatements == 1) {
isReturningPresent = isReturningPresentPrev;
currentCommandType = prevCommandType;
}
}
if (nativeSql.length() == 0) {
return nativeQueries != null ? nativeQueries : Collections.emptyList();
}
if (addReturning(nativeSql, currentCommandType, returningColumnNames, isReturningPresent, quoteReturningIdentifiers)) {
isReturningPresent = true;
}
NativeQuery lastQuery = new NativeQuery(nativeSql.toString(),
toIntArray(bindPositions), !splitStatements,
SqlCommand.createStatementTypeInfo(currentCommandType,
isBatchedReWriteConfigured, valuesParenthesisOpenPosition, valuesParenthesisClosePosition,
isReturningPresent, (nativeQueries == null ? 0 : nativeQueries.size())));
if (nativeQueries == null) {
return Collections.singletonList(lastQuery);
}
if (!whitespaceOnly) {
nativeQueries.add(lastQuery);
}
return nativeQueries;
}
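  // Worked example (illustrative): with withParameters = true and splitStatements = true,
  //   "select * from t where a = ? and b = ?; delete from t where c = ?"
  // is parsed into two native queries,
  //   "select * from t where a = $1 and b = $2"  and  "delete from t where c = $1",
  // since bind positions are renumbered per statement; a literal "??" is emitted as a single "?".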
private static /* @Nullable */ SqlCommandType parseWithCommandType(char[] aChars, int i, int keywordStart,
int wordLength) {
// This parses `with x as (...) ...`
// Corner case is `with select as (insert ..) select * from select
SqlCommandType command;
if (wordLength == 6 && parseUpdateKeyword(aChars, keywordStart)) {
command = SqlCommandType.UPDATE;
} else if (wordLength == 6 && parseDeleteKeyword(aChars, keywordStart)) {
command = SqlCommandType.DELETE;
} else if (wordLength == 6 && parseInsertKeyword(aChars, keywordStart)) {
command = SqlCommandType.INSERT;
} else if (wordLength == 6 && parseSelectKeyword(aChars, keywordStart)) {
command = SqlCommandType.SELECT;
} else {
return null;
}
// update/delete/insert/select keyword detected
// Check if `AS` follows
int nextInd = i;
// The loop should skip whitespace and comments
for (; nextInd < aChars.length; nextInd++) {
char nextChar = aChars[nextInd];
if (nextChar == '-') {
nextInd = Parser.parseLineComment(aChars, nextInd);
} else if (nextChar == '/') {
nextInd = Parser.parseBlockComment(aChars, nextInd);
} else if (!Character.isWhitespace(nextChar)) {
break;
}
}
if (nextInd + 2 >= aChars.length
|| (!parseAsKeyword(aChars, nextInd)
|| isIdentifierContChar(aChars[nextInd + 2]))) {
return command;
}
return null;
}
private static boolean addReturning(StringBuilder nativeSql, SqlCommandType currentCommandType,
String[] returningColumnNames, boolean isReturningPresent, boolean quoteReturningIdentifiers) throws SQLException {
if (isReturningPresent || returningColumnNames.length == 0) {
return false;
}
if (currentCommandType != SqlCommandType.INSERT
&& currentCommandType != SqlCommandType.UPDATE
&& currentCommandType != SqlCommandType.DELETE
&& currentCommandType != SqlCommandType.WITH) {
return false;
}
nativeSql.append("\nRETURNING ");
if (returningColumnNames.length == 1 && returningColumnNames[0].charAt(0) == '*') {
nativeSql.append('*');
return true;
}
for (int col = 0; col < returningColumnNames.length; col++) {
String columnName = returningColumnNames[col];
if (col > 0) {
nativeSql.append(", ");
}
/*
If the client quotes identifiers then doing so again would create an error
*/
if (quoteReturningIdentifiers) {
Utils.escapeIdentifier(nativeSql, columnName);
} else {
nativeSql.append(columnName);
}
}
return true;
}
/**
* Converts {@link IntList} to {@code int[]}. A {@code null} collection is converted to
* {@code null} array.
*
* @param list input list
* @return output array
*/
private static int /* @Nullable */ [] toIntArray(/* @Nullable */ IntList list) {
if (list == null) {
return null;
}
return list.toArray();
}
/**
* Find the end of the single-quoted string starting at the given offset.
*
* Note: for {@code 'single '' quote in string'}, this method currently returns the offset of
* first {@code '} character after the initial one. The caller must call the method a second time
* for the second part of the quoted string.
*
* @param query query
* @param offset start offset
* @param standardConformingStrings standard conforming strings
* @return position of the end of the single-quoted string
*/
public static int parseSingleQuotes(final char[] query, int offset,
boolean standardConformingStrings) {
// check for escape string syntax (E'')
if (standardConformingStrings
&& offset >= 2
&& (query[offset - 1] == 'e' || query[offset - 1] == 'E')
&& charTerminatesIdentifier(query[offset - 2])) {
standardConformingStrings = false;
}
if (standardConformingStrings) {
// do NOT treat backslashes as escape characters
while (++offset < query.length) {
if (query[offset] == '\'') {
return offset;
}
}
} else {
// treat backslashes as escape characters
while (++offset < query.length) {
switch (query[offset]) {
case '\\':
++offset;
break;
case '\'':
return offset;
default:
break;
}
}
}
return query.length;
}
/**
* Find the end of the double-quoted string starting at the given offset.
*
* Note: for {@code "double "" quote in string"}, this method currently
* returns the offset of first {@code "} character after the initial one. The caller must
* call the method a second time for the second part of the quoted string.
*
* @param query query
* @param offset start offset
* @return position of the end of the double-quoted string
*/
public static int parseDoubleQuotes(final char[] query, int offset) {
while (++offset < query.length && query[offset] != '"') {
// do nothing
}
return offset;
}
/**
* Test if the dollar character ({@code $}) at the given offset starts a dollar-quoted string and
* return the offset of the ending dollar character.
*
* @param query query
* @param offset start offset
* @return offset of the ending dollar character
*/
public static int parseDollarQuotes(final char[] query, int offset) {
if (offset + 1 < query.length
&& (offset == 0 || !isIdentifierContChar(query[offset - 1]))) {
int endIdx = -1;
if (query[offset + 1] == '$') {
endIdx = offset + 1;
} else if (isDollarQuoteStartChar(query[offset + 1])) {
for (int d = offset + 2; d < query.length; d++) {
if (query[d] == '$') {
endIdx = d;
break;
} else if (!isDollarQuoteContChar(query[d])) {
break;
}
}
}
if (endIdx > 0) {
// found; note: tag includes start and end $ character
int tagIdx = offset;
int tagLen = endIdx - offset + 1;
offset = endIdx; // loop continues at endIdx + 1
for (++offset; offset < query.length; offset++) {
if (query[offset] == '$'
&& subArraysEqual(query, tagIdx, offset, tagLen)) {
offset += tagLen - 1;
break;
}
}
}
}
return offset;
}
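  // For example (illustrative): in "select $x$ it's $x$", calling parseDollarQuotes with the
  // offset of the first '$' returns the offset of the last '$' of the closing $x$ tag, so the
  // quoted body (including the embedded single quote) is skipped.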
/**
* Test if the {@code -} character at {@code offset} starts a {@code --} style line comment,
* and return the position of the first {@code \r} or {@code \n} character.
*
* @param query query
* @param offset start offset
* @return position of the first {@code \r} or {@code \n} character
*/
public static int parseLineComment(final char[] query, int offset) {
if (offset + 1 < query.length && query[offset + 1] == '-') {
while (offset + 1 < query.length) {
offset++;
if (query[offset] == '\r' || query[offset] == '\n') {
break;
}
}
}
return offset;
}
/**
* Test if the {@code /} character at {@code offset} starts a block comment, and return the
* position of the last {@code /} character.
*
* @param query query
* @param offset start offset
* @return position of the last {@code /} character
*/
public static int parseBlockComment(final char[] query, int offset) {
if (offset + 1 < query.length && query[offset + 1] == '*') {
// /* /* */ */ nest, according to SQL spec
int level = 1;
for (offset += 2; offset < query.length; offset++) {
switch (query[offset - 1]) {
case '*':
if (query[offset] == '/') {
--level;
++offset; // don't parse / in */* twice
}
break;
case '/':
if (query[offset] == '*') {
++level;
++offset; // don't parse * in /*/ twice
}
break;
default:
break;
}
if (level == 0) {
--offset; // reset position to last '/' char
break;
}
}
}
return offset;
}
/**
* Parse string to check presence of DELETE keyword regardless of case. The initial character is
* assumed to have been matched.
*
* @param query char[] of the query statement
* @param offset position of query to start checking
* @return boolean indicates presence of word
*/
public static boolean parseDeleteKeyword(final char[] query, int offset) {
if (query.length < (offset + 6)) {
return false;
}
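    // OR-ing an ASCII letter with 32 lower-cases it, giving a cheap case-insensitive comparison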
return (query[offset] | 32) == 'd'
&& (query[offset + 1] | 32) == 'e'
&& (query[offset + 2] | 32) == 'l'
&& (query[offset + 3] | 32) == 'e'
&& (query[offset + 4] | 32) == 't'
&& (query[offset + 5] | 32) == 'e';
}
/**
* Parse string to check presence of INSERT keyword regardless of case.
*
* @param query char[] of the query statement
* @param offset position of query to start checking
* @return boolean indicates presence of word
*/
public static boolean parseInsertKeyword(final char[] query, int offset) {
if (query.length < (offset + 7)) {
return false;
}
return (query[offset] | 32) == 'i'
&& (query[offset + 1] | 32) == 'n'
&& (query[offset + 2] | 32) == 's'
&& (query[offset + 3] | 32) == 'e'
&& (query[offset + 4] | 32) == 'r'
&& (query[offset + 5] | 32) == 't';
}
/**
   * Parse string to check presence of BEGIN keyword regardless of case.
*
* @param query char[] of the query statement
* @param offset position of query to start checking
* @return boolean indicates presence of word
*/
public static boolean parseBeginKeyword(final char[] query, int offset) {
if (query.length < (offset + 6)) {
return false;
}
return (query[offset] | 32) == 'b'
&& (query[offset + 1] | 32) == 'e'
&& (query[offset + 2] | 32) == 'g'
&& (query[offset + 3] | 32) == 'i'
&& (query[offset + 4] | 32) == 'n';
}
/**
   * Parse string to check presence of ATOMIC keyword regardless of case.
*
* @param query char[] of the query statement
* @param offset position of query to start checking
* @return boolean indicates presence of word
*/
public static boolean parseAtomicKeyword(final char[] query, int offset) {
if (query.length < (offset + 7)) {
return false;
}
return (query[offset] | 32) == 'a'
&& (query[offset + 1] | 32) == 't'
&& (query[offset + 2] | 32) == 'o'
&& (query[offset + 3] | 32) == 'm'
&& (query[offset + 4] | 32) == 'i'
&& (query[offset + 5] | 32) == 'c';
}
/**
* Parse string to check presence of MOVE keyword regardless of case.
*
* @param query char[] of the query statement
* @param offset position of query to start checking
* @return boolean indicates presence of word
*/
public static boolean parseMoveKeyword(final char[] query, int offset) {
if (query.length < (offset + 4)) {
return false;
}
return (query[offset] | 32) == 'm'
&& (query[offset + 1] | 32) == 'o'
&& (query[offset + 2] | 32) == 'v'
&& (query[offset + 3] | 32) == 'e';
}
/**
* Parse string to check presence of RETURNING keyword regardless of case.
*
* @param query char[] of the query statement
* @param offset position of query to start checking
* @return boolean indicates presence of word
*/
public static boolean parseReturningKeyword(final char[] query, int offset) {
if (query.length < (offset + 9)) {
return false;
}
return (query[offset] | 32) == 'r'
&& (query[offset + 1] | 32) == 'e'
&& (query[offset + 2] | 32) == 't'
&& (query[offset + 3] | 32) == 'u'
&& (query[offset + 4] | 32) == 'r'
&& (query[offset + 5] | 32) == 'n'
&& (query[offset + 6] | 32) == 'i'
&& (query[offset + 7] | 32) == 'n'
&& (query[offset + 8] | 32) == 'g';
}
/**
* Parse string to check presence of SELECT keyword regardless of case.
*
* @param query char[] of the query statement
* @param offset position of query to start checking
* @return boolean indicates presence of word
*/
public static boolean parseSelectKeyword(final char[] query, int offset) {
if (query.length < (offset + 6)) {
return false;
}
return (query[offset] | 32) == 's'
&& (query[offset + 1] | 32) == 'e'
&& (query[offset + 2] | 32) == 'l'
&& (query[offset + 3] | 32) == 'e'
&& (query[offset + 4] | 32) == 'c'
&& (query[offset + 5] | 32) == 't';
}
/**
   * Parse string to check presence of ALTER keyword regardless of case.
*
* @param query char[] of the query statement
* @param offset position of query to start checking
* @return boolean indicates presence of word
*/
public static boolean parseAlterKeyword(final char[] query, int offset) {
if (query.length < (offset + 5)) {
return false;
}
return (query[offset] | 32) == 'a'
&& (query[offset + 1] | 32) == 'l'
&& (query[offset + 2] | 32) == 't'
&& (query[offset + 3] | 32) == 'e'
&& (query[offset + 4] | 32) == 'r';
}
/**
* Parse string to check presence of CREATE keyword regardless of case.
*
* @param query char[] of the query statement
* @param offset position of query to start checking
* @return boolean indicates presence of word
*/
public static boolean parseCreateKeyword(final char[] query, int offset) {
if (query.length < (offset + 6)) {
return false;
}
return (query[offset] | 32) == 'c'
&& (query[offset + 1] | 32) == 'r'
&& (query[offset + 2] | 32) == 'e'
&& (query[offset + 3] | 32) == 'a'
&& (query[offset + 4] | 32) == 't'
&& (query[offset + 5] | 32) == 'e';
}
/**
* Parse string to check presence of UPDATE keyword regardless of case.
*
* @param query char[] of the query statement
* @param offset position of query to start checking
* @return boolean indicates presence of word
*/
public static boolean parseUpdateKeyword(final char[] query, int offset) {
if (query.length < (offset + 6)) {
return false;
}
return (query[offset] | 32) == 'u'
&& (query[offset + 1] | 32) == 'p'
&& (query[offset + 2] | 32) == 'd'
&& (query[offset + 3] | 32) == 'a'
&& (query[offset + 4] | 32) == 't'
&& (query[offset + 5] | 32) == 'e';
}
/**
* Parse string to check presence of VALUES keyword regardless of case.
*
* @param query char[] of the query statement
* @param offset position of query to start checking
* @return boolean indicates presence of word
*/
public static boolean parseValuesKeyword(final char[] query, int offset) {
if (query.length < (offset + 6)) {
return false;
}
return (query[offset] | 32) == 'v'
&& (query[offset + 1] | 32) == 'a'
&& (query[offset + 2] | 32) == 'l'
&& (query[offset + 3] | 32) == 'u'
&& (query[offset + 4] | 32) == 'e'
&& (query[offset + 5] | 32) == 's';
}
/**
* Faster version of {@link Long#parseLong(String)} when parsing a substring is required
*
* @param s string to parse
* @param beginIndex begin index
* @param endIndex end index
* @return long value
*/
public static long parseLong(String s, int beginIndex, int endIndex) {
// Fallback to default implementation in case the string is long
if (endIndex - beginIndex > 16) {
return Long.parseLong(s.substring(beginIndex, endIndex));
}
long res = digitAt(s, beginIndex);
for (beginIndex++; beginIndex < endIndex; beginIndex++) {
res = res * 10 + digitAt(s, beginIndex);
}
return res;
}
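  // For example (illustrative): parseLong("ts=12345;", 3, 8) parses the substring "12345"
  // without allocating it and returns 12345L.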
/**
* Parse string to check presence of WITH keyword regardless of case.
*
* @param query char[] of the query statement
* @param offset position of query to start checking
* @return boolean indicates presence of word
*/
public static boolean parseWithKeyword(final char[] query, int offset) {
if (query.length < (offset + 4)) {
return false;
}
return (query[offset] | 32) == 'w'
&& (query[offset + 1] | 32) == 'i'
&& (query[offset + 2] | 32) == 't'
&& (query[offset + 3] | 32) == 'h';
}
/**
* Parse string to check presence of AS keyword regardless of case.
*
* @param query char[] of the query statement
* @param offset position of query to start checking
* @return boolean indicates presence of word
*/
public static boolean parseAsKeyword(final char[] query, int offset) {
if (query.length < (offset + 2)) {
return false;
}
return (query[offset] | 32) == 'a'
&& (query[offset + 1] | 32) == 's';
}
/**
* Returns true if a given string {@code s} has digit at position {@code pos}.
* @param s input string
* @param pos position (0-based)
* @return true if input string s has digit at position pos
*/
public static boolean isDigitAt(String s, int pos) {
return pos > 0 && pos < s.length() && Character.isDigit(s.charAt(pos));
}
/**
* Converts digit at position {@code pos} in string {@code s} to integer or throws.
* @param s input string
* @param pos position (0-based)
* @return integer value of a digit at position pos
* @throws NumberFormatException if character at position pos is not an integer
*/
public static int digitAt(String s, int pos) {
int c = s.charAt(pos) - '0';
if (c < 0 || c > 9) {
throw new NumberFormatException("Input string: \"" + s + "\", position: " + pos);
}
return c;
}
/**
* Identifies characters which the backend scanner considers to be whitespace.
   *
   * https://github.com/postgres/postgres/blob/17bb62501787c56e0518e61db13a523d47afd724/src/backend/parser/scan.l#L194-L198
   *
* @param c character
* @return true if the character is a whitespace character as defined in the backend's parser
*/
public static boolean isSpace(char c) {
return c == ' ' || c == '\t' || c == '\n' || c == '\r' || c == '\f';
}
/**
* Identifies white space characters which the backend uses to determine if a
* {@code String} value needs to be quoted in array representation.
   *
   * https://github.com/postgres/postgres/blob/f2c587067a8eb9cf1c8f009262381a6576ba3dd0/src/backend/utils/adt/arrayfuncs.c#L421-L438
   *
* @param c
* Character to examine.
* @return Indication if the character is a whitespace which back end will
* escape.
*/
public static boolean isArrayWhiteSpace(char c) {
return c == ' ' || c == '\t' || c == '\n' || c == '\r' || c == '\f' || c == 0x0B;
}
/**
* Returns if the given character is a valid character for an operator in the backend's
* parser.
* @param c character
* @return true if the given character is a valid character for an operator in the backend's
* parser
*/
public static boolean isOperatorChar(char c) {
/*
* Extracted from operators defined by {self} and {op_chars}
* in pgsql/src/backend/parser/scan.l.
*/
return ",()[].;:+-*/%^<>=~!@#&|`?".indexOf(c) != -1;
}
/**
* Checks if a character is valid as the start of an identifier.
* PostgreSQL 9.4 allows column names like _, ‿, ⁀, ⁔, ︳, ︴, ﹍, ﹎, ﹏, _, so
* it is assumed isJavaIdentifierPart is good enough for PostgreSQL.
*
* @param c the character to check
* @return true if valid as first character of an identifier; false if not
* @see Identifiers and Key Words
*/
public static boolean isIdentifierStartChar(char c) {
/*
* PostgreSQL's implementation is located in
* pgsql/src/backend/parser/scan.l:
* ident_start [A-Za-z\200-\377_]
* ident_cont [A-Za-z\200-\377_0-9\$]
* however it is not clear how that interacts with unicode, so we just use Java's implementation.
*/
return Character.isJavaIdentifierStart(c);
}
/**
* Checks if a character is valid as the second or later character of an identifier.
*
* @param c the character to check
* @return true if valid as second or later character of an identifier; false if not
*/
public static boolean isIdentifierContChar(char c) {
return Character.isJavaIdentifierPart(c);
}
/**
* Returns true if the character terminates an identifier.
* @param c character
* @return true if the character terminates an identifier
*/
public static boolean charTerminatesIdentifier(char c) {
return c == '"' || isSpace(c) || isOperatorChar(c);
}
/**
* Checks if a character is valid as the start of a dollar quoting tag.
*
* @param c the character to check
* @return true if valid as first character of a dollar quoting tag; false if not
*/
public static boolean isDollarQuoteStartChar(char c) {
/*
* The allowed dollar quote start and continuation characters
* must stay in sync with what the backend defines in
* pgsql/src/backend/parser/scan.l
*
* The quoted string starts with $foo$ where "foo" is an optional string
* in the form of an identifier, except that it may not contain "$",
* and extends to the first occurrence of an identical string.
* There is *no* processing of the quoted text.
*/
return c != '$' && isIdentifierStartChar(c);
}
/**
* Checks if a character is valid as the second or later character of a dollar quoting tag.
*
* @param c the character to check
* @return true if valid as second or later character of a dollar quoting tag; false if not
*/
public static boolean isDollarQuoteContChar(char c) {
return c != '$' && isIdentifierContChar(c);
}
/**
   * Compares two sub-arrays of the given character array for equality. If the length is zero, the
* result is true if and only if the offsets are within the bounds of the array.
*
* @param arr a char array
* @param offA first sub-array start offset
* @param offB second sub-array start offset
* @param len length of the sub arrays to compare
* @return true if the sub-arrays are equal; false if not
*/
private static boolean subArraysEqual(final char[] arr,
final int offA, final int offB,
final int len) {
if (offA < 0 || offB < 0
|| offA >= arr.length || offB >= arr.length
|| offA + len > arr.length || offB + len > arr.length) {
return false;
}
for (int i = 0; i < len; i++) {
if (arr[offA + i] != arr[offB + i]) {
return false;
}
}
return true;
}
/**
* Converts JDBC-specific callable statement escapes {@code { [? =] call [(?,
* [?,..])] }} into the PostgreSQL format which is {@code select (?, [?, ...]) as
* result} or {@code select * from (?, [?, ...]) as result} (7.3)
*
* @param jdbcSql sql text with JDBC escapes
* @param stdStrings if backslash in single quotes should be regular character or escape one
* @param serverVersion server version
* @param escapeSyntaxCallMode mode specifying whether JDBC escape call syntax is transformed into a CALL/SELECT statement
* @return SQL in appropriate for given server format
* @throws SQLException if given SQL is malformed
*/
public static JdbcCallParseInfo modifyJdbcCall(String jdbcSql, boolean stdStrings,
int serverVersion, EscapeSyntaxCallMode escapeSyntaxCallMode) throws SQLException {
// Mini-parser for JDBC function-call syntax (only)
// TODO: Merge with escape processing (and parameter parsing?) so we only parse each query once.
// RE: frequently used statements are cached (see {@link org.postgresql.jdbc.PgConnection#borrowQuery}), so this "merge" is not that important.
String sql = jdbcSql;
boolean isFunction = false;
boolean outParamBeforeFunc = false;
int len = jdbcSql.length();
int state = 1;
boolean inQuotes = false;
boolean inEscape = false;
int startIndex = -1;
int endIndex = -1;
boolean syntaxError = false;
int i = 0;
while (i < len && !syntaxError) {
char ch = jdbcSql.charAt(i);
switch (state) {
case 1: // Looking for { at start of query
if (ch == '{') {
++i;
++state;
} else if (Character.isWhitespace(ch)) {
++i;
} else {
// Not function-call syntax. Skip the rest of the string.
i = len;
}
break;
case 2: // After {, looking for ? or =, skipping whitespace
if (ch == '?') {
outParamBeforeFunc =
isFunction = true; // { ? = call ... } -- function with one out parameter
++i;
++state;
} else if (ch == 'c' || ch == 'C') { // { call ... } -- proc with no out parameters
state += 3; // Don't increase 'i'
} else if (Character.isWhitespace(ch)) {
++i;
} else {
// "{ foo ...", doesn't make sense, complain.
syntaxError = true;
}
break;
case 3: // Looking for = after ?, skipping whitespace
if (ch == '=') {
++i;
++state;
} else if (Character.isWhitespace(ch)) {
++i;
} else {
syntaxError = true;
}
break;
case 4: // Looking for 'call' after '? =' skipping whitespace
if (ch == 'c' || ch == 'C') {
++state; // Don't increase 'i'.
} else if (Character.isWhitespace(ch)) {
++i;
} else {
syntaxError = true;
}
break;
case 5: // Should be at 'call ' either at start of string or after ?=
if ((ch == 'c' || ch == 'C') && i + 4 <= len && "call"
.equalsIgnoreCase(jdbcSql.substring(i, i + 4))) {
isFunction = true;
i += 4;
++state;
} else if (Character.isWhitespace(ch)) {
++i;
} else {
syntaxError = true;
}
break;
case 6: // Looking for whitespace char after 'call'
if (Character.isWhitespace(ch)) {
// Ok, we found the start of the real call.
++i;
++state;
startIndex = i;
} else {
syntaxError = true;
}
break;
case 7: // In "body" of the query (after "{ [? =] call ")
if (ch == '\'') {
inQuotes = !inQuotes;
++i;
} else if (inQuotes && ch == '\\' && !stdStrings) {
// Backslash in string constant, skip next character.
i += 2;
} else if (!inQuotes && ch == '{') {
inEscape = !inEscape;
++i;
} else if (!inQuotes && ch == '}') {
if (!inEscape) {
// Should be end of string.
endIndex = i;
++i;
++state;
} else {
inEscape = false;
}
} else if (!inQuotes && ch == ';') {
syntaxError = true;
} else {
// Everything else is ok.
++i;
}
break;
case 8: // At trailing end of query, eating whitespace
if (Character.isWhitespace(ch)) {
++i;
} else {
syntaxError = true;
}
break;
default:
throw new IllegalStateException("somehow got into bad state " + state);
}
}
// We can only legally end in a couple of states here.
if (i == len && !syntaxError) {
if (state == 1) {
// Not an escaped syntax.
// Detect PostgreSQL native CALL.
// (OUT parameter registration, needed for stored procedures with INOUT arguments, will fail without this)
i = 0;
while (i < len && Character.isWhitespace(jdbcSql.charAt(i))) {
i++; // skip any preceding whitespace
}
if (i < len - 5) { // 5 == length of "call" + 1 whitespace
//Check for CALL followed by whitespace
char ch = jdbcSql.charAt(i);
if ((ch == 'c' || ch == 'C') && "call".equalsIgnoreCase(jdbcSql.substring(i, i + 4))
&& Character.isWhitespace(jdbcSql.charAt(i + 4))) {
isFunction = true;
}
}
return new JdbcCallParseInfo(sql, isFunction);
}
if (state != 8) {
syntaxError = true; // Ran out of query while still parsing
}
}
if (syntaxError) {
throw new PSQLException(
GT.tr("Malformed function or procedure escape syntax at offset {0}.", i),
PSQLState.STATEMENT_NOT_ALLOWED_IN_FUNCTION_CALL);
}
String prefix;
String suffix;
if (escapeSyntaxCallMode == EscapeSyntaxCallMode.SELECT || serverVersion < 110000
|| (outParamBeforeFunc && escapeSyntaxCallMode == EscapeSyntaxCallMode.CALL_IF_NO_RETURN)) {
prefix = "select * from ";
suffix = " as result";
} else {
prefix = "call ";
suffix = "";
}
String s = jdbcSql.substring(startIndex, endIndex);
int prefixLength = prefix.length();
StringBuilder sb = new StringBuilder(prefixLength + jdbcSql.length() + suffix.length() + 10);
sb.append(prefix);
sb.append(s);
int opening = s.indexOf('(') + 1;
if (opening == 0) {
// here the function call has no parameters declaration eg : "{ ? = call pack_getValue}"
sb.append(outParamBeforeFunc ? "(?)" : "()");
} else if (outParamBeforeFunc) {
// move the single out parameter into the function call
// so that it can be treated like all other parameters
boolean needComma = false;
// the following loop will check if the function call has parameters
// eg "{ ? = call pack_getValue(?) }" vs "{ ? = call pack_getValue() }
for (int j = opening + prefixLength; j < sb.length(); j++) {
char c = sb.charAt(j);
if (c == ')') {
break;
}
if (!Character.isWhitespace(c)) {
needComma = true;
break;
}
}
// insert the return parameter as the first parameter of the function call
if (needComma) {
sb.insert(opening + prefixLength, "?,");
} else {
sb.insert(opening + prefixLength, "?");
}
}
if (!suffix.isEmpty()) {
sql = sb.append(suffix).toString();
} else {
sql = sb.toString();
}
return new JdbcCallParseInfo(sql, isFunction);
}
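  // Worked example (illustrative): with escapeSyntaxCallMode = SELECT, the JDBC escape
  //   "{ ? = call pack_getValue(?) }"
  // is rewritten (modulo whitespace) to
  //   "select * from pack_getValue(?,?) as result"
  // with isFunction = true; the leading OUT parameter is folded into the argument list.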
/**
* Filter the SQL string of Java SQL Escape clauses.
*
* Currently implemented Escape clauses are those mentioned in 11.3 in the specification.
* Basically we look through the sql string for {d xxx}, {t xxx}, {ts xxx}, {oj xxx} or {fn xxx}
* in non-string sql code. When we find them, we just strip the escape part leaving only the xxx
* part. So, something like "select * from x where d={d '2001-10-09'}" would return "select * from
* x where d= '2001-10-09'".
*
* @param sql the original query text
* @param replaceProcessingEnabled whether replace_processing_enabled is on
* @param standardConformingStrings whether standard_conforming_strings is on
* @return PostgreSQL-compatible SQL
* @throws SQLException if given SQL is wrong
*/
public static String replaceProcessing(String sql, boolean replaceProcessingEnabled,
boolean standardConformingStrings) throws SQLException {
if (replaceProcessingEnabled) {
// Since escape codes can only appear in SQL CODE, we keep track
// of if we enter a string or not.
int len = sql.length();
char[] chars = sql.toCharArray();
StringBuilder newsql = new StringBuilder(len);
int i = 0;
while (i < len) {
i = parseSql(chars, i, newsql, false, standardConformingStrings);
// We need to loop here in case we encounter invalid
// SQL, consider: SELECT a FROM t WHERE (1 > 0)) ORDER BY a
        // We can't stop replacing at the extra closing paren
        // because that would change a syntax error into a valid query
// that isn't what the user specified.
if (i < len) {
newsql.append(chars[i]);
i++;
}
}
return newsql.toString();
} else {
return sql;
}
}
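  // Example (illustrative; assumes standard escape handling as implemented by parseSql and
  // EscapedFunctions2 below):
  //
  //   String in = "select {fn ucase(name)} from x where d = {d '2001-10-09'}";
  //   String out = Parser.replaceProcessing(in, true, true);
  //   // out is expected to be: select upper(name) from x where d = DATE '2001-10-09'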
/**
   * Parse the given sql from index i, appending it to the given buffer until we hit an unmatched
   * right parenthesis or the end of the string. When the stopOnComma flag is set we also stop
   * processing when a comma is found in sql text that isn't inside nested parentheses.
*
* @param sql the original query text
* @param i starting position for replacing
* @param newsql where to write the replaced output
* @param stopOnComma should we stop after hitting the first comma in sql text?
* @param stdStrings whether standard_conforming_strings is on
* @return the position we stopped processing at
* @throws SQLException if given SQL is wrong
*/
@SuppressWarnings("LabelledBreakTarget")
private static int parseSql(char[] sql, int i, StringBuilder newsql, boolean stopOnComma,
boolean stdStrings) throws SQLException {
SqlParseState state = SqlParseState.IN_SQLCODE;
int len = sql.length;
int nestedParenthesis = 0;
boolean endOfNested = false;
// because of the ++i loop
i--;
while (!endOfNested && ++i < len) {
char c = sql[i];
state_switch:
switch (state) {
case IN_SQLCODE:
if (c == '$') {
int i0 = i;
i = parseDollarQuotes(sql, i);
checkParsePosition(i, len, i0, sql,
"Unterminated dollar quote started at position {0} in SQL {1}. Expected terminating $$");
newsql.append(sql, i0, i - i0 + 1);
break;
} else if (c == '\'') {
// start of a string?
int i0 = i;
i = parseSingleQuotes(sql, i, stdStrings);
checkParsePosition(i, len, i0, sql,
"Unterminated string literal started at position {0} in SQL {1}. Expected ' char");
newsql.append(sql, i0, i - i0 + 1);
break;
} else if (c == '"') {
          // start of an identifier?
int i0 = i;
i = parseDoubleQuotes(sql, i);
checkParsePosition(i, len, i0, sql,
"Unterminated identifier started at position {0} in SQL {1}. Expected \" char");
newsql.append(sql, i0, i - i0 + 1);
break;
} else if (c == '/') {
int i0 = i;
i = parseBlockComment(sql, i);
checkParsePosition(i, len, i0, sql,
"Unterminated block comment started at position {0} in SQL {1}. Expected */ sequence");
newsql.append(sql, i0, i - i0 + 1);
break;
} else if (c == '-') {
int i0 = i;
i = parseLineComment(sql, i);
newsql.append(sql, i0, i - i0 + 1);
break;
} else if (c == '(') { // begin nested sql
nestedParenthesis++;
} else if (c == ')') { // end of nested sql
nestedParenthesis--;
if (nestedParenthesis < 0) {
endOfNested = true;
break;
}
} else if (stopOnComma && c == ',' && nestedParenthesis == 0) {
endOfNested = true;
break;
} else if (c == '{') { // start of an escape code?
if (i + 1 < len) {
SqlParseState[] availableStates = SqlParseState.VALUES;
            // skip the first state, it's not an escape code state
for (int j = 1; j < availableStates.length; j++) {
SqlParseState availableState = availableStates[j];
int matchedPosition = availableState.getMatchedPosition(sql, i + 1);
if (matchedPosition == 0) {
continue;
}
i += matchedPosition;
if (availableState.replacementKeyword != null) {
newsql.append(availableState.replacementKeyword);
}
state = availableState;
break state_switch;
}
}
}
newsql.append(c);
break;
case ESC_FUNCTION:
// extract function name
i = escapeFunction(sql, i, newsql, stdStrings);
state = SqlParseState.IN_SQLCODE; // end of escaped function (or query)
break;
case ESC_DATE:
case ESC_TIME:
case ESC_TIMESTAMP:
case ESC_OUTERJOIN:
case ESC_ESCAPECHAR:
if (c == '}') {
state = SqlParseState.IN_SQLCODE; // end of escape code.
} else {
newsql.append(c);
}
break;
} // end switch
}
return i;
}
private static int findOpenParenthesis(char[] sql, int i) {
int posArgs = i;
while (posArgs < sql.length && sql[posArgs] != '(') {
posArgs++;
}
return posArgs;
}
private static void checkParsePosition(int i, int len, int i0, char[] sql,
String message)
throws PSQLException {
if (i < len) {
return;
}
throw new PSQLException(
GT.tr(message, i0, new String(sql)),
PSQLState.SYNTAX_ERROR);
}
private static int escapeFunction(char[] sql, int i, StringBuilder newsql, boolean stdStrings) throws SQLException {
String functionName;
int argPos = findOpenParenthesis(sql, i);
if (argPos < sql.length) {
functionName = new String(sql, i, argPos - i).trim();
// extract arguments
i = argPos + 1;// we start the scan after the first (
i = escapeFunctionArguments(newsql, functionName, sql, i, stdStrings);
}
// go to the end of the function copying anything found
i++;
while (i < sql.length && sql[i] != '}') {
newsql.append(sql[i++]);
}
return i;
}
/**
* Generate sql for escaped functions.
*
* @param newsql destination StringBuilder
* @param functionName the escaped function name
* @param sql input SQL text (containing arguments of a function call with possible JDBC escapes)
* @param i position in the input SQL
* @param stdStrings whether standard_conforming_strings is on
   * @return the position in the input SQL where argument parsing stopped
* @throws SQLException if something goes wrong
*/
private static int escapeFunctionArguments(StringBuilder newsql, String functionName, char[] sql, int i,
boolean stdStrings)
throws SQLException {
// Maximum arity of functions in EscapedFunctions is 3
    List<CharSequence> parsedArgs = new ArrayList<>(3);
while (true) {
StringBuilder arg = new StringBuilder();
int lastPos = i;
i = parseSql(sql, i, arg, true, stdStrings);
if (i != lastPos) {
parsedArgs.add(arg);
}
if (i >= sql.length // should not happen
|| sql[i] != ',') {
break;
}
i++;
}
Method method = EscapedFunctions2.getFunction(functionName);
if (method == null) {
newsql.append(functionName);
EscapedFunctions2.appendCall(newsql, "(", ",", ")", parsedArgs);
return i;
}
try {
method.invoke(null, newsql, parsedArgs);
} catch (InvocationTargetException e) {
Throwable targetException = e.getTargetException();
if (targetException instanceof SQLException) {
throw (SQLException) targetException;
} else {
String message = targetException == null ? "no message" : targetException.getMessage();
throw new PSQLException(message, PSQLState.SYSTEM_ERROR);
}
} catch (IllegalAccessException e) {
throw new PSQLException(e.getMessage(), PSQLState.SYSTEM_ERROR);
}
return i;
}
private static final char[] QUOTE_OR_ALPHABETIC_MARKER = {'\"', '0'};
private static final char[] QUOTE_OR_ALPHABETIC_MARKER_OR_PARENTHESIS = {'\"', '0', '('};
private static final char[] SINGLE_QUOTE = {'\''};
// Static variables for parsing SQL when replaceProcessing is true.
@SuppressWarnings("ImmutableEnumChecker")
private enum SqlParseState {
IN_SQLCODE,
ESC_DATE("d", SINGLE_QUOTE, "DATE "),
ESC_TIME("t", SINGLE_QUOTE, "TIME "),
ESC_TIMESTAMP("ts", SINGLE_QUOTE, "TIMESTAMP "),
ESC_FUNCTION("fn", QUOTE_OR_ALPHABETIC_MARKER, null),
ESC_OUTERJOIN("oj", QUOTE_OR_ALPHABETIC_MARKER_OR_PARENTHESIS, null),
ESC_ESCAPECHAR("escape", SINGLE_QUOTE, "ESCAPE ");
private static final SqlParseState[] VALUES = values();
private final char[] escapeKeyword;
private final char[] allowedValues;
private final /* @Nullable */ String replacementKeyword;
SqlParseState() {
this("", new char[0], null);
}
SqlParseState(String escapeKeyword, char[] allowedValues,
/* @Nullable */ String replacementKeyword) {
this.escapeKeyword = escapeKeyword.toCharArray();
this.allowedValues = allowedValues;
this.replacementKeyword = replacementKeyword;
}
private boolean startMatches(char[] sql, int pos) {
// check for the keyword
for (char c : escapeKeyword) {
if (pos >= sql.length) {
return false;
}
char curr = sql[pos++];
if (curr != c && curr != Character.toUpperCase(c)) {
return false;
}
}
return pos < sql.length;
}
private int getMatchedPosition(char[] sql, int pos) {
// check for the keyword
if (!startMatches(sql, pos)) {
return 0;
}
int newPos = pos + escapeKeyword.length;
// check for the beginning of the value
char curr = sql[newPos];
// ignore any in-between whitespace
while (curr == ' ') {
newPos++;
if (newPos >= sql.length) {
return 0;
}
curr = sql[newPos];
}
for (char c : allowedValues) {
if (curr == c || (c == '0' && Character.isLetter(curr))) {
return newPos - pos;
}
}
return 0;
}
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/PgMessageType.java 0100664 0000000 0000000 00000004737 00000250600 026162 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2025, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.core;
/**
* PostgreSQL protocol message types
*/
public class PgMessageType {
// Frontend message types
public static final byte BIND = 'B';
public static final byte CLOSE_REQUEST = 'C';
public static final byte DESCRIBE_REQUEST = 'D';
public static final byte EXECUTE_REQUEST = 'E';
public static final byte FUNCTION_CALL_REQ = 'F';
public static final byte FLUSH_REQ = 'H';
public static final byte PARSE_REQUEST = 'P';
public static final byte QUERY_REQUEST = 'Q';
public static final byte SYNC_REQUEST = 'S';
public static final byte TERMINATE_REQUEST = 'X';
public static final byte COPY_FAIL = 'f';
public static final byte GSS_TOKEN_REQUEST = 'p';
public static final byte PASSWORD_REQUEST = 'p';
public static final byte SASL_RESPONSE = 'p';
public static final byte SASL_INITIAL_RESPONSE = 'p';
// following 2 are used for describe and close
public static final byte PORTAL = 'P';
public static final byte STATEMENT = 'S';
// Backend message types
public static final byte AUTHENTICATION_RESPONSE = 'R';
public static final byte PARAMETER_STATUS_RESPONSE = 'S';
public static final byte BACKEND_KEY_DATA_RESPONSE = 'K';
public static final byte READY_FOR_QUERY_RESPONSE = 'Z';
public static final byte ROW_DESCRIPTION_RESPONSE = 'T';
public static final byte DATA_ROW_RESPONSE = 'D';
public static final byte COMMAND_COMPLETE_RESPONSE = 'C';
public static final byte COPY_OUT_RESPONSE = 'H';
public static final byte COPY_BOTH_RESPONSE = 'W';
public static final byte COPY_IN_RESPONSE = 'G';
public static final byte NEGOTIATE_PROTOCOL_RESPONSE = 'v';
public static final byte ERROR_RESPONSE = 'E';
public static final byte EMPTY_QUERY_RESPONSE = 'I';
public static final byte ASYNCHRONOUS_NOTICE = 'A';
public static final byte NOTICE_RESPONSE = 'N';
public static final byte PARSE_COMPLETE_RESPONSE = '1';
public static final byte BIND_COMPLETE_RESPONSE = '2';
public static final byte CLOSE_COMPLETE_RESPONSE = '3';
public static final byte NO_DATA_RESPONSE = 'n';
public static final byte PORTAL_SUSPENDED_RESPONSE = 's';
public static final byte PARAMETER_DESCRIPTION_RESPONSE = 't';
public static final byte FUNCTION_CALL_RESPONSE = 'V';
// sent by both backend and client
public static final byte COPY_DONE = 'c';
public static final byte COPY_DATA = 'd';
}
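// Example (illustrative; pgStream is a hypothetical in-scope PGStream): backend message handling
// typically dispatches on the first byte of each protocol message using these constants.
//
//   int messageType = pgStream.receiveChar();
//   switch (messageType) {
//     case PgMessageType.ERROR_RESPONSE:           // 'E'
//       // read and raise the server error
//       break;
//     case PgMessageType.READY_FOR_QUERY_RESPONSE: // 'Z'
//       // the backend is ready for the next query
//       break;
//     default:
//       // other message types as documented above
//   }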
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/ProtocolVersion.java 0100664 0000000 0000000 00000003666 00000250600 026614 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2025, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.core;
import org.postgresql.util.GT;
import org.postgresql.util.PSQLException;
import org.postgresql.util.PSQLState;
import java.sql.SQLException;
/**
* Enum representing the supported PostgreSQL protocol versions.
*/
public enum ProtocolVersion {
/**
* Protocol version 3.0
*/
v3_0(3, 0),
/**
* Protocol version 3.2
*/
v3_2(3, 2);
private final int major;
private final int minor;
private static final ProtocolVersion[] values = values();
ProtocolVersion(int major, int minor) {
this.major = major;
this.minor = minor;
}
/**
   * Returns the ProtocolVersion matching the given major and minor version numbers.
   *
   * Performs a simple validation check to ensure that only supported protocol versions are used.
   * Currently, the PostgreSQL JDBC driver only supports protocol versions 3.0 and 3.2.
   *
   * @param major the major version number of the protocol
   * @param minor the minor version number of the protocol
   * @return a {@code ProtocolVersion} enum value representing the specified protocol version
   * @throws SQLException if the requested protocol version is not supported
*/
public static ProtocolVersion fromMajorMinor(int major, int minor) throws SQLException {
for (ProtocolVersion version : values) {
if (version.major == major && version.minor == minor) {
return version;
}
}
throw new PSQLException(GT.tr("Invalid version number major: {0}, minor: {1}",
major, minor), PSQLState.NOT_IMPLEMENTED);
}
/**
* Gets the major version number.
*
* @return the major version number
*/
public int getMajor() {
return major;
}
/**
* Gets the minor version number.
*
* @return the minor version number
*/
public int getMinor() {
return minor;
}
@Override
public String toString() {
return major + "." + minor;
}
}
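// Example (illustrative):
//
//   ProtocolVersion v = ProtocolVersion.fromMajorMinor(3, 0); // returns ProtocolVersion.v3_0
//   String label = v.toString();                              // "3.0"
//   ProtocolVersion.fromMajorMinor(2, 0);                     // throws PSQLException (NOT_IMPLEMENTED)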
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/Provider.java 0100664 0000000 0000000 00000000567 00000250600 025234 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2003, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.core;
/**
* Represents a provider of results.
*
 * @param <T> the type of results provided by this provider
*/
public interface Provider<T> {
/**
* Gets a result.
*
* @return a result
*/
T get();
}
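// Example (illustrative): the interface has the shape of a plain supplier, so it can be
// implemented with a lambda.
//
//   Provider<String> applicationName = () -> "my-app";
//   String name = applicationName.get();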
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/Query.java 0100664 0000000 0000000 00000007537 00000250600 024553 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2004, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
// Copyright (c) 2004, Open Cloud Limited.
package org.postgresql.core;
import org.postgresql.core.v3.SqlSerializationContext;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.util.Map;
/**
* Abstraction of a generic Query, hiding the details of any protocol-version-specific data needed
* to execute the query efficiently.
*
* Query objects should be explicitly closed when no longer needed; if resources are allocated on
* the server for this query, their cleanup is triggered by closing the Query.
*
* @author Oliver Jowett (oliver@opencloud.com)
*/
public interface Query {
/**
* Create a ParameterList suitable for storing parameters associated with this Query.
*
* If this query has no parameters, a ParameterList will be returned, but it may be a shared
* immutable object. If this query does have parameters, the returned ParameterList is a new list,
* unshared by other callers.
*
* @return a suitable ParameterList instance for this query
*/
ParameterList createParameterList();
/**
* Returns string representation of the query, substituting particular parameter values for
* parameter placeholders.
*
* Note: the method replaces the values on a best-effort basis as it might omit the replacements
* for parameters that can't be processed several times. For instance, {@link java.io.InputStream}
* can be processed only once.
*
* @param parameters a ParameterList returned by this Query's {@link #createParameterList} method,
* or {@code null} to leave the parameter placeholders unsubstituted.
* @return string representation of this query
*/
String toString(/* @Nullable */ ParameterList parameters);
/**
* Returns string representation of the query, substituting particular parameter values for
* parameter placeholders.
*
* @param parameters a ParameterList returned by this Query's {@link #createParameterList} method,
* or {@code null} to leave the parameter placeholders unsubstituted.
* @param context specifies configuration for converting the parameters to string
* @return string representation of this query
*/
String toString(/* @Nullable */ ParameterList parameters, SqlSerializationContext context);
/**
   * Returns the SQL in the format native to the database.
   * @return the SQL in the format native to the database
*/
String getNativeSql();
/**
* Returns properties of the query (sql keyword, and some other parsing info).
   * @return properties of the query (sql keyword, and some other parsing info) or null if not applicable
*/
/* @Nullable */ SqlCommand getSqlCommand();
/**
* Close this query and free any server-side resources associated with it. The resources may not
* be immediately deallocated, but closing a Query may make the deallocation more prompt.
   *
   * A closed Query should not be executed.
*/
void close();
boolean isStatementDescribed();
boolean isEmpty();
/**
* Get the number of times this Query has been batched.
   * @return number of times addBatch() has been called.
*/
int getBatchSize();
/**
* Get a map that a result set can use to find the index associated to a name.
*
* @return null if the query implementation does not support this method.
*/
  /* @Nullable */ Map<String, Integer> getResultSetColumnNameIndexMap();
/**
* Return a list of the Query objects that make up this query. If this object is already a
* SimpleQuery, returns null (avoids an extra array construction in the common case).
*
   * @return an array of single-statement queries, or null if this object is already a
* single-statement query.
*/
Query /* @Nullable */ [] getSubqueries();
}
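// Example (illustrative; the query variable is hypothetical): a typical lifecycle for a Query
// obtained from a QueryExecutor.
//
//   Query query = ...;                                  // e.g. from QueryExecutor.createSimpleQuery(sql)
//   ParameterList params = query.createParameterList(); // may be a shared immutable list if there are no parameters
//   String debugSql = query.toString(params);           // best-effort substitution of parameter values
//   query.close();                                      // triggers cleanup of any server-side resources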
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/QueryExecutor.java 0100664 0000000 0000000 00000057313 00000250600 026267 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2003, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
// Copyright (c) 2004, Open Cloud Limited.
package org.postgresql.core;
import org.postgresql.PGNotification;
import org.postgresql.copy.CopyOperation;
import org.postgresql.core.v3.TypeTransferModeRegistry;
import org.postgresql.jdbc.AutoSave;
import org.postgresql.jdbc.BatchResultHandler;
import org.postgresql.jdbc.EscapeSyntaxCallMode;
import org.postgresql.jdbc.PreferQueryMode;
import org.postgresql.util.HostSpec;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.io.Closeable;
import java.io.IOException;
import java.sql.SQLException;
import java.sql.SQLWarning;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TimeZone;
/**
* Abstracts the protocol-specific details of executing a query.
*
* Every connection has a single QueryExecutor implementation associated with it. This object
* provides:
*
 * <ul>
 * <li>factory methods for Query objects ({@link #createSimpleQuery(String)} and
 * {@link #createQuery(String, boolean, boolean, String...)})</li>
 * <li>execution methods for created Query objects (
 * {@link #execute(Query, ParameterList, ResultHandler, int, int, int)} for single queries and
 * {@link #execute(Query[], ParameterList[], BatchResultHandler, int, int, int)} for batches of queries)</li>
 * <li>a fastpath call interface ({@link #createFastpathParameters} and {@link #fastpathCall}).</li>
 * </ul>
*
* Query objects may represent a query that has parameter placeholders. To provide actual values for
* these parameters, a {@link ParameterList} object is created via a factory method (
* {@link Query#createParameterList}). The parameters are filled in by the caller and passed along
* with the query to the query execution methods. Several ParameterLists for a given query might
* exist at one time (or over time); this allows the underlying Query to be reused for several
* executions, or for batch execution of the same Query.
*
* In general, a Query created by a particular QueryExecutor may only be executed by that
* QueryExecutor, and a ParameterList created by a particular Query may only be used as parameters
* to that Query. Unpredictable things will happen if this isn't done.
*
* @author Oliver Jowett (oliver@opencloud.com)
*/
public interface QueryExecutor extends TypeTransferModeRegistry {
/**
* Flag for query execution that indicates the given Query object is unlikely to be reused.
*/
int QUERY_ONESHOT = 1;
/**
* Flag for query execution that indicates that resultset metadata isn't needed and can be safely
* omitted.
*/
int QUERY_NO_METADATA = 2;
/**
* Flag for query execution that indicates that a resultset isn't expected and the query executor
* can safely discard any rows (although the resultset should still appear to be from a
* resultset-returning query).
*/
int QUERY_NO_RESULTS = 4;
/**
* Flag for query execution that indicates a forward-fetch-capable cursor should be used if
* possible.
*/
int QUERY_FORWARD_CURSOR = 8;
/**
* Flag for query execution that indicates the automatic BEGIN on the first statement when outside
* a transaction should not be done.
*/
int QUERY_SUPPRESS_BEGIN = 16;
/**
* Flag for query execution when we don't really want to execute, we just want to get the
* parameter metadata for the statement.
*/
int QUERY_DESCRIBE_ONLY = 32;
/**
* Flag for query execution used by generated keys where we want to receive both the ResultSet and
* associated update count from the command status.
*/
int QUERY_BOTH_ROWS_AND_STATUS = 64;
/**
* Force this query to be described at each execution. This is done in pipelined batches where we
* might need to detect mismatched result types.
*/
int QUERY_FORCE_DESCRIBE_PORTAL = 512;
/**
* Flag to disable batch execution when we expect results (generated keys) from a statement.
*
* @deprecated in PgJDBC 9.4 as we now auto-size batches.
*/
@Deprecated
int QUERY_DISALLOW_BATCHING = 128;
/**
* Flag for query execution to avoid using binary transfer.
*/
int QUERY_NO_BINARY_TRANSFER = 256;
/**
* Execute the query via simple 'Q' command (not parse, bind, exec, but simple execute).
* This sends query text on each execution, however it supports sending multiple queries
* separated with ';' as a single command.
*/
int QUERY_EXECUTE_AS_SIMPLE = 1024;
int MAX_SAVE_POINTS = 1000;
/**
* Flag indicating that when beginning a transaction, it should be read only.
*/
int QUERY_READ_ONLY_HINT = 2048;
/**
* Execute a Query, passing results to a provided ResultHandler.
*
* @param query the query to execute; must be a query returned from calling
* {@link #wrap(List)} on this QueryExecutor object.
   * @param parameters the parameters for the query. Must be non-null if the query
* takes parameters. Must be a parameter object returned by
* {@link org.postgresql.core.Query#createParameterList()}.
* @param handler a ResultHandler responsible for handling results generated by this query
* @param maxRows the maximum number of rows to retrieve
* @param fetchSize if QUERY_FORWARD_CURSOR is set, the preferred number of rows to retrieve
* before suspending
* @param flags a combination of QUERY_* flags indicating how to handle the query.
* @throws SQLException if query execution fails
*/
void execute(Query query, /* @Nullable */ ParameterList parameters, ResultHandler handler, int maxRows,
int fetchSize, int flags) throws SQLException;
/**
* Execute a Query with adaptive fetch, passing results to a provided ResultHandler.
*
* @param query the query to execute; must be a query returned from calling
* {@link #wrap(List)} on this QueryExecutor object.
   * @param parameters the parameters for the query. Must be non-null if the query
* takes parameters. Must be a parameter object returned by
* {@link org.postgresql.core.Query#createParameterList()}.
* @param handler a ResultHandler responsible for handling results generated by this query
* @param maxRows the maximum number of rows to retrieve
* @param fetchSize if QUERY_FORWARD_CURSOR is set, the preferred number of rows to retrieve
* before suspending
* @param flags a combination of QUERY_* flags indicating how to handle the query.
* @param adaptiveFetch state of adaptiveFetch to use during execution
* @throws SQLException if query execution fails
*/
void execute(Query query, /* @Nullable */ ParameterList parameters, ResultHandler handler, int maxRows,
int fetchSize, int flags, boolean adaptiveFetch) throws SQLException;
/**
* Execute several Query, passing results to a provided ResultHandler.
*
* @param queries the queries to execute; each must be a query returned from calling
* {@link #wrap(List)} on this QueryExecutor object.
* @param parameterLists the parameter lists for the queries. The parameter lists correspond 1:1
   * to the queries passed in the queries array. Each must be non-null
   * if the corresponding query takes parameters, and must be a parameter
* object returned by {@link Query#createParameterList()} created by
* the corresponding query.
* @param handler a ResultHandler responsible for handling results generated by this query
* @param maxRows the maximum number of rows to retrieve
* @param fetchSize if QUERY_FORWARD_CURSOR is set, the preferred number of rows to retrieve
* before suspending
* @param flags a combination of QUERY_* flags indicating how to handle the query.
* @throws SQLException if query execution fails
*/
void execute(Query[] queries, /* @Nullable */ ParameterList[] parameterLists,
BatchResultHandler handler, int maxRows,
int fetchSize, int flags) throws SQLException;
/**
* Execute several Query with adaptive fetch, passing results to a provided ResultHandler.
*
* @param queries the queries to execute; each must be a query returned from calling
* {@link #wrap(List)} on this QueryExecutor object.
* @param parameterLists the parameter lists for the queries. The parameter lists correspond 1:1
   * to the queries passed in the queries array. Each must be non-null
   * if the corresponding query takes parameters, and must be a parameter
* object returned by {@link Query#createParameterList()} created by
* the corresponding query.
* @param handler a ResultHandler responsible for handling results generated by this query
* @param maxRows the maximum number of rows to retrieve
* @param fetchSize if QUERY_FORWARD_CURSOR is set, the preferred number of rows to retrieve
* before suspending
* @param flags a combination of QUERY_* flags indicating how to handle the query.
* @param adaptiveFetch state of adaptiveFetch to use during execution
* @throws SQLException if query execution fails
*/
void execute(Query[] queries, /* @Nullable */ ParameterList[] parameterLists,
BatchResultHandler handler, int maxRows,
int fetchSize, int flags, boolean adaptiveFetch) throws SQLException;
/**
* Fetch additional rows from a cursor.
*
* @param cursor the cursor to fetch from
* @param handler the handler to feed results to
* @param fetchSize the preferred number of rows to retrieve before suspending
* @param adaptiveFetch state of adaptiveFetch to use during fetching
* @throws SQLException if query execution fails
*/
void fetch(ResultCursor cursor, ResultHandler handler, int fetchSize, boolean adaptiveFetch) throws SQLException;
/**
* Create an unparameterized Query object suitable for execution by this QueryExecutor. The
* provided query string is not parsed for parameter placeholders ('?' characters), and the
* {@link Query#createParameterList} of the returned object will always return an empty
* ParameterList.
*
* @param sql the SQL for the query to create
* @return a new Query object
* @throws SQLException if something goes wrong
*/
Query createSimpleQuery(String sql) throws SQLException;
boolean isReWriteBatchedInsertsEnabled();
CachedQuery createQuery(String sql, boolean escapeProcessing, boolean isParameterized,
String /* @Nullable */ ... columnNames)
throws SQLException;
Object createQueryKey(String sql, boolean escapeProcessing, boolean isParameterized,
String /* @Nullable */ ... columnNames);
CachedQuery createQueryByKey(Object key) throws SQLException;
CachedQuery borrowQueryByKey(Object key) throws SQLException;
CachedQuery borrowQuery(String sql) throws SQLException;
CachedQuery borrowCallableQuery(String sql) throws SQLException;
CachedQuery borrowReturningQuery(String sql, String /* @Nullable */ [] columnNames) throws SQLException;
void releaseQuery(CachedQuery cachedQuery);
/**
* Wrap given native query into a ready for execution format.
   * @param queries list of queries in syntax native to the database
* @return query object ready for execution by this query executor
*/
  Query wrap(List<NativeQuery> queries);
/**
* Prior to attempting to retrieve notifications, we need to pull any recently received
* notifications off of the network buffers. The notification retrieval in ProtocolConnection
* cannot do this as it is prone to deadlock, so the higher level caller must be responsible which
* requires exposing this method.
*
   * @throws SQLException if an error occurs while fetching notifications
*/
void processNotifies() throws SQLException;
/**
* Prior to attempting to retrieve notifications, we need to pull any recently received
* notifications off of the network buffers. The notification retrieval in ProtocolConnection
* cannot do this as it is prone to deadlock, so the higher level caller must be responsible which
* requires exposing this method. This variant supports blocking for the given time in millis.
*
* @param timeoutMillis number of milliseconds to block for
   * @throws SQLException if an error occurs while fetching notifications
*/
void processNotifies(int timeoutMillis) throws SQLException;
//
// Fastpath interface.
//
/**
* Create a new ParameterList implementation suitable for invoking a fastpath function via
* {@link #fastpathCall}.
*
* @param count the number of parameters the fastpath call will take
* @return a ParameterList suitable for passing to {@link #fastpathCall}.
* @deprecated This API is somewhat obsolete, as one may achieve similar performance
* and greater functionality by setting up a prepared statement to define
* the function call. Then, executing the statement with binary transmission of parameters
* and results substitutes for a fast-path function call.
*/
@Deprecated
ParameterList createFastpathParameters(int count);
/**
* Invoke a backend function via the fastpath interface.
*
* @param fnid the OID of the backend function to invoke
* @param params a ParameterList returned from {@link #createFastpathParameters} containing the
* parameters to pass to the backend function
* @param suppressBegin if begin should be suppressed
   * @return the binary-format result of the fastpath call, or null if a void result
* was returned
* @throws SQLException if an error occurs while executing the fastpath call
* @deprecated This API is somewhat obsolete, as one may achieve similar performance
* and greater functionality by setting up a prepared statement to define
* the function call. Then, executing the statement with binary transmission of parameters
* and results substitutes for a fast-path function call.
*/
@Deprecated
byte /* @Nullable */ [] fastpathCall(int fnid, ParameterList params, boolean suppressBegin)
throws SQLException;
/**
* Issues a COPY FROM STDIN / COPY TO STDOUT statement and returns handler for associated
* operation. Until the copy operation completes, no other database operation may be performed.
* Implemented for protocol version 3 only.
*
* @param sql input sql
* @param suppressBegin if begin should be suppressed
* @return handler for associated operation
* @throws SQLException when initializing the given query fails
*/
CopyOperation startCopy(String sql, boolean suppressBegin) throws SQLException;
/**
* @return the version of the implementation
*/
ProtocolVersion getProtocolVersion();
/**
* Adds a single oid that should be received using binary encoding.
*
* @param oid The oid to request with binary encoding.
*/
void addBinaryReceiveOid(int oid);
/**
* Remove given oid from the list of oids for binary receive encoding.
*
* Note: the binary receive for the oid can be re-activated later.
*
* @param oid The oid to request with binary encoding.
*/
void removeBinaryReceiveOid(int oid);
/**
* Gets the oids that should be received using binary encoding.
*
* Note: this returns an unmodifiable set, and its contents might not reflect the current state.
*
* @return The oids to request with binary encoding.
* @deprecated the method returns a copy of the set, so it is not efficient. Use {@link #useBinaryForReceive(int)}
*/
@Deprecated
  Set<? extends Integer> getBinaryReceiveOids();
/**
* Sets the oids that should be received using binary encoding.
*
* @param useBinaryForOids The oids to request with binary encoding.
*/
  void setBinaryReceiveOids(Set<Integer> useBinaryForOids);
/**
* Adds a single oid that should be sent using binary encoding.
*
* @param oid The oid to send with binary encoding.
*/
void addBinarySendOid(int oid);
/**
* Remove given oid from the list of oids for binary send encoding.
*
* Note: the binary send for the oid can be re-activated later.
*
* @param oid The oid to send with binary encoding.
*/
void removeBinarySendOid(int oid);
/**
* Gets the oids that should be sent using binary encoding.
*
* Note: this returns an unmodifiable set, and its contents might not reflect the current state.
*
* @return useBinaryForOids The oids to send with binary encoding.
* @deprecated the method returns a copy of the set, so it is not efficient. Use {@link #useBinaryForSend(int)}
*/
@Deprecated
  Set<? extends Integer> getBinarySendOids();
/**
* Sets the oids that should be sent using binary encoding.
*
* @param useBinaryForOids The oids to send with binary encoding.
*/
  void setBinarySendOids(Set<Integer> useBinaryForOids);
/**
* Returns true if server uses integer instead of double for binary date and time encodings.
*
* @return the server integer_datetime setting.
*/
boolean getIntegerDateTimes();
/**
* @return the host and port this connection is connected to.
*/
HostSpec getHostSpec();
/**
* @return the user this connection authenticated as.
*/
String getUser();
/**
* @return the database this connection is connected to.
*/
String getDatabase();
/**
* Sends a query cancellation for this connection.
*
* @throws SQLException if something goes wrong.
*/
void sendQueryCancel() throws SQLException;
/**
* Return the process ID (PID) of the backend server process handling this connection.
*
* @return process ID (PID) of the backend server process handling this connection
*/
int getBackendPID();
/**
* Abort at network level without sending the Terminate message to the backend.
*/
void abort();
/**
* Close this connection cleanly.
*/
void close();
/**
* Returns an action that would close the connection cleanly.
   * The returned object should reference only the minimum subset of objects required
   * for proper resource cleanup. For instance, it should preferably not hold a strong reference to
* {@link QueryExecutor}.
* @return action that would close the connection cleanly.
*/
Closeable getCloseAction();
/**
* Check if this connection is closed.
*
* @return true iff the connection is closed.
*/
boolean isClosed();
/**
* Return the server version from the server_version GUC.
*
* Note that there's no requirement for this to be numeric or of the form x.y.z. PostgreSQL
* development releases usually have the format x.ydevel e.g. 9.4devel; betas usually x.ybetan
* e.g. 9.4beta1. The --with-extra-version configure option may add an arbitrary string to this.
*
* Don't use this string for logic, only use it when displaying the server version to the user.
* Prefer getServerVersionNum() for all logic purposes.
*
* @return the server version string from the server_version GUC
*/
String getServerVersion();
/**
* Retrieve and clear the set of asynchronous notifications pending on this connection.
*
* @return an array of notifications; if there are no notifications, an empty array is returned.
   * @throws SQLException if an error occurs while fetching notifications
*/
PGNotification[] getNotifications() throws SQLException;
/**
* Retrieve and clear the chain of warnings accumulated on this connection.
*
* @return the first SQLWarning in the chain; subsequent warnings can be found via
* SQLWarning.getNextWarning().
*/
/* @Nullable */ SQLWarning getWarnings();
/**
* Get a machine-readable server version.
*
* This returns the value of the server_version_num GUC. If no such GUC exists, it falls back on
* attempting to parse the text server version for the major version. If there's no minor version
* (e.g. a devel or beta release) then the minor version is set to zero. If the version could not
* be parsed, zero is returned.
*
* @return the server version in numeric XXYYZZ form, eg 090401, from server_version_num
*/
int getServerVersionNum();
/**
* Get the current transaction state of this connection.
*
* @return a ProtocolConnection.TRANSACTION_* constant.
*/
TransactionState getTransactionState();
/**
* Returns whether the server treats string-literals according to the SQL standard or if it uses
* traditional PostgreSQL escaping rules. Versions up to 8.1 always treated backslashes as escape
* characters in string-literals. Since 8.2, this depends on the value of the
* {@code standard_conforming_strings} server variable.
*
* @return true if the server treats string literals according to the SQL standard
*/
boolean getStandardConformingStrings();
/**
*
   * @return true if identifiers provided in the returning array will be quoted; the default is true
*/
boolean getQuoteReturningIdentifiers();
/**
* Returns backend timezone in java format.
* @return backend timezone in java format.
*/
/* @Nullable */ TimeZone getTimeZone();
/**
* @return the current encoding in use by this connection
*/
Encoding getEncoding();
/**
* Returns application_name connection property.
* @return application_name connection property
*/
String getApplicationName();
boolean isColumnSanitiserDisabled();
EscapeSyntaxCallMode getEscapeSyntaxCallMode();
PreferQueryMode getPreferQueryMode();
void setPreferQueryMode(PreferQueryMode mode);
AutoSave getAutoSave();
void setAutoSave(AutoSave autoSave);
boolean willHealOnRetry(SQLException e);
/**
   * By default, the connection resets its statement cache when a "deallocate all" or "discard all"
   * message is observed.
   * This API allows that feature to be disabled for testing purposes.
*
* @param flushCacheOnDeallocate true if statement cache should be reset when "deallocate/discard" message observed
*/
void setFlushCacheOnDeallocate(boolean flushCacheOnDeallocate);
/**
* @return the ReplicationProtocol instance for this connection.
*/
ReplicationProtocol getReplicationProtocol();
void setNetworkTimeout(int milliseconds) throws IOException;
int getNetworkTimeout() throws IOException;
// Expose parameter status to PGConnection
  Map<String, String> getParameterStatuses();
/* @Nullable */ String getParameterStatus(String parameterName);
/**
* Get fetch size computed by adaptive fetch size for given query.
*
* @param adaptiveFetch state of adaptive fetch, which should be used during retrieving
   * @param cursor Cursor used by the resultSet, containing the query; it must be castable to the
   * Portal class.
* @return fetch size computed by adaptive fetch size for given query passed inside cursor
*/
int getAdaptiveFetchSize(boolean adaptiveFetch, ResultCursor cursor);
/**
* Get state of adaptive fetch inside QueryExecutor.
*
* @return state of adaptive fetch inside QueryExecutor
*/
boolean getAdaptiveFetch();
/**
* Set state of adaptive fetch inside QueryExecutor.
*
* @param adaptiveFetch desired state of adaptive fetch
*/
void setAdaptiveFetch(boolean adaptiveFetch);
/**
* Add query to adaptive fetch cache inside QueryExecutor.
*
* @param adaptiveFetch state of adaptive fetch used during adding query
   * @param cursor Cursor used by the resultSet, containing the query; it must be castable to the
   * Portal class.
*/
void addQueryToAdaptiveFetchCache(boolean adaptiveFetch, ResultCursor cursor);
/**
* Remove query from adaptive fetch cache inside QueryExecutor
*
* @param adaptiveFetch state of adaptive fetch used during removing query
   * @param cursor Cursor used by the resultSet, containing the query; it must be castable to the
   * Portal class.
*/
void removeQueryFromAdaptiveFetchCache(boolean adaptiveFetch, ResultCursor cursor);
}
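// Example (illustrative; queryExecutor, query, params and handler are hypothetical): the QUERY_*
// flags are combined with bitwise OR and passed to execute(). For instance, a one-shot statement
// that needs no result metadata and should not trigger the implicit BEGIN could be run as:
//
//   int flags = QueryExecutor.QUERY_ONESHOT
//       | QueryExecutor.QUERY_NO_METADATA
//       | QueryExecutor.QUERY_SUPPRESS_BEGIN;
//   queryExecutor.execute(query, params, handler, 0, 0, flags); // maxRows=0 and fetchSize=0 impose no limits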
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/QueryExecutorBase.java 0100664 0000000 0000000 00000036613 00000250600 027062 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2003, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.core;
import static org.postgresql.util.internal.Nullness.castNonNull;
import org.postgresql.PGNotification;
import org.postgresql.PGProperty;
import org.postgresql.jdbc.AutoSave;
import org.postgresql.jdbc.EscapeSyntaxCallMode;
import org.postgresql.jdbc.PreferQueryMode;
import org.postgresql.jdbc.ResourceLock;
import org.postgresql.util.HostSpec;
import org.postgresql.util.LruCache;
import org.postgresql.util.PSQLException;
import org.postgresql.util.PSQLState;
import org.postgresql.util.ServerErrorMessage;
// import org.checkerframework.checker.nullness.qual.MonotonicNonNull;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.io.Closeable;
import java.io.IOException;
import java.sql.SQLException;
import java.sql.SQLWarning;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import java.util.TreeMap;
import java.util.concurrent.locks.Condition;
import java.util.logging.Level;
import java.util.logging.Logger;
public abstract class QueryExecutorBase implements QueryExecutor {
private static final Logger LOGGER = Logger.getLogger(QueryExecutorBase.class.getName());
protected final PGStream pgStream;
private final String user;
private final String database;
private final int cancelSignalTimeout;
protected ProtocolVersion protocolVersion;
private int cancelPid;
private byte /* @Nullable */[] cancelKey;
protected final QueryExecutorCloseAction closeAction;
private /* @MonotonicNonNull */ String serverVersion;
private int serverVersionNum;
private TransactionState transactionState = TransactionState.IDLE;
private final boolean reWriteBatchedInserts;
private final boolean columnSanitiserDisabled;
private final EscapeSyntaxCallMode escapeSyntaxCallMode;
private final boolean quoteReturningIdentifiers;
private PreferQueryMode preferQueryMode;
private AutoSave autoSave;
private boolean flushCacheOnDeallocate = true;
protected final boolean logServerErrorDetail;
// default value for server versions that don't report standard_conforming_strings
private boolean standardConformingStrings;
private /* @Nullable */ SQLWarning warnings;
  private final ArrayList<PGNotification> notifications = new ArrayList<>();
  private final LruCache<Object, CachedQuery> statementCache;
private final CachedQueryCreateAction cachedQueryCreateAction;
// For getParameterStatuses(), GUC_REPORT tracking
  private final TreeMap<String, String> parameterStatuses
= new TreeMap<>(String.CASE_INSENSITIVE_ORDER);
protected final ResourceLock lock = new ResourceLock();
protected final Condition lockCondition = lock.newCondition();
@SuppressWarnings({"assignment", "argument", "method.invocation"})
protected QueryExecutorBase(PGStream pgStream, int cancelSignalTimeout, Properties info) throws SQLException {
this.pgStream = pgStream;
this.protocolVersion = pgStream.getProtocolVersion();
this.user = PGProperty.USER.getOrDefault(info);
this.database = PGProperty.PG_DBNAME.getOrDefault(info);
this.cancelSignalTimeout = cancelSignalTimeout;
this.reWriteBatchedInserts = PGProperty.REWRITE_BATCHED_INSERTS.getBoolean(info);
this.columnSanitiserDisabled = PGProperty.DISABLE_COLUMN_SANITISER.getBoolean(info);
String callMode = PGProperty.ESCAPE_SYNTAX_CALL_MODE.getOrDefault(info);
this.escapeSyntaxCallMode = EscapeSyntaxCallMode.of(callMode);
this.quoteReturningIdentifiers = PGProperty.QUOTE_RETURNING_IDENTIFIERS.getBoolean(info);
String preferMode = PGProperty.PREFER_QUERY_MODE.getOrDefault(info);
this.preferQueryMode = PreferQueryMode.of(preferMode);
this.autoSave = AutoSave.of(PGProperty.AUTOSAVE.getOrDefault(info));
this.logServerErrorDetail = PGProperty.LOG_SERVER_ERROR_DETAIL.getBoolean(info);
// assignment, argument
this.cachedQueryCreateAction = new CachedQueryCreateAction(this);
statementCache = new LruCache<>(
Math.max(0, PGProperty.PREPARED_STATEMENT_CACHE_QUERIES.getInt(info)),
Math.max(0, PGProperty.PREPARED_STATEMENT_CACHE_SIZE_MIB.getInt(info) * 1024L * 1024L),
false,
cachedQueryCreateAction,
        new LruCache.EvictAction<CachedQuery>() {
@Override
public void evict(CachedQuery cachedQuery) throws SQLException {
cachedQuery.query.close();
}
});
// method.invocation
this.closeAction = createCloseAction();
}
protected QueryExecutorCloseAction createCloseAction() {
return new QueryExecutorCloseAction(pgStream);
}
/**
* Sends "terminate connection" message to the backend.
* @throws IOException in case connection termination fails
* @deprecated use {@link #getCloseAction()} instead
*/
@Deprecated
protected abstract void sendCloseMessage() throws IOException;
@Override
public void setNetworkTimeout(int milliseconds) throws IOException {
pgStream.setNetworkTimeout(milliseconds);
}
@Override
public int getNetworkTimeout() throws IOException {
return pgStream.getNetworkTimeout();
}
@Override
public HostSpec getHostSpec() {
return pgStream.getHostSpec();
}
@Override
public String getUser() {
return user;
}
@Override
public String getDatabase() {
return database;
}
  public void setBackendKeyData(int cancelPid, byte[] cancelKey) {
this.cancelPid = cancelPid;
this.cancelKey = cancelKey;
}
@Override
public int getBackendPID() {
return cancelPid;
}
@Override
public void abort() {
closeAction.abort();
}
@Override
public Closeable getCloseAction() {
return closeAction;
}
@Override
public void close() {
if (closeAction.isClosed()) {
return;
}
try {
getCloseAction().close();
} catch (IOException ioe) {
LOGGER.log(Level.FINEST, "Discarding IOException on close:", ioe);
}
}
@Override
public boolean isClosed() {
return closeAction.isClosed();
}
@Override
public void sendQueryCancel() throws SQLException {
PGStream cancelStream = null;
// Now we need to construct and send a cancel packet
try {
if (LOGGER.isLoggable(Level.FINEST)) {
LOGGER.log(Level.FINEST, " FE=> CancelRequest(pid={0},ckey={1})", new Object[]{cancelPid, cancelKey});
}
// Cancel signal is variable since protocol 3.2 so we use cancelKey.length + 12
cancelStream =
new PGStream(pgStream.getSocketFactory(), pgStream.getHostSpec(), cancelSignalTimeout, castNonNull(cancelKey).length + 12);
if (cancelSignalTimeout > 0) {
cancelStream.setNetworkTimeout(cancelSignalTimeout);
}
// send the length including self
cancelStream.sendInteger4(castNonNull(castNonNull(cancelKey)).length + 12);
cancelStream.sendInteger2(1234);
cancelStream.sendInteger2(5678);
cancelStream.sendInteger4(cancelPid);
cancelStream.send(castNonNull(cancelKey));
cancelStream.flush();
cancelStream.receiveEOF();
} catch (IOException e) {
// Safe to ignore.
LOGGER.log(Level.FINEST, "Ignoring exception on cancel request:", e);
} finally {
if (cancelStream != null) {
try {
cancelStream.close();
} catch (IOException e) {
// Ignored.
}
}
}
}
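  // For reference, the CancelRequest packet written above has the following layout (the key is
  // variable-length since protocol 3.2, hence cancelKey.length + 12):
  //
  //   int32  length    = 12 + cancelKey.length   (includes the length field itself)
  //   int16  1234
  //   int16  5678                                (together: the cancel request code)
  //   int32  cancelPid                           (backend PID from BackendKeyData)
  //   byte[] cancelKey                           (the secret key)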
public void addWarning(SQLWarning newWarning) {
try (ResourceLock ignore = lock.obtain()) {
if (warnings == null) {
warnings = newWarning;
} else {
warnings.setNextWarning(newWarning);
}
}
}
public void addNotification(PGNotification notification) {
try (ResourceLock ignore = lock.obtain()) {
notifications.add(notification);
}
}
@Override
public PGNotification[] getNotifications() throws SQLException {
try (ResourceLock ignore = lock.obtain()) {
PGNotification[] array = notifications.toArray(new PGNotification[0]);
notifications.clear();
return array;
}
}
@Override
public /* @Nullable */ SQLWarning getWarnings() {
try (ResourceLock ignore = lock.obtain()) {
SQLWarning chain = warnings;
warnings = null;
return chain;
}
}
@Override
public String getServerVersion() {
String serverVersion = this.serverVersion;
if (serverVersion == null) {
throw new IllegalStateException("serverVersion must not be null");
}
return serverVersion;
}
@Override
public int getServerVersionNum() {
if (serverVersionNum != 0) {
return serverVersionNum;
}
serverVersionNum = Utils.parseServerVersionStr(getServerVersion());
return serverVersionNum;
}
public void setServerVersion(String serverVersion) {
this.serverVersion = serverVersion;
}
public void setServerVersionNum(int serverVersionNum) {
this.serverVersionNum = serverVersionNum;
}
public void setTransactionState(TransactionState state) {
try (ResourceLock ignore = lock.obtain()) {
transactionState = state;
}
}
public void setStandardConformingStrings(boolean value) {
try (ResourceLock ignore = lock.obtain()) {
standardConformingStrings = value;
}
}
@Override
public boolean getStandardConformingStrings() {
try (ResourceLock ignore = lock.obtain()) {
return standardConformingStrings;
}
}
@Override
public boolean getQuoteReturningIdentifiers() {
return quoteReturningIdentifiers;
}
@Override
public TransactionState getTransactionState() {
try (ResourceLock ignore = lock.obtain()) {
return transactionState;
}
}
public void setEncoding(Encoding encoding) throws IOException {
pgStream.setEncoding(encoding);
}
@Override
public Encoding getEncoding() {
return pgStream.getEncoding();
}
@Override
public boolean isReWriteBatchedInsertsEnabled() {
return this.reWriteBatchedInserts;
}
@Override
public final CachedQuery borrowQuery(String sql) throws SQLException {
return statementCache.borrow(sql);
}
@Override
public final CachedQuery borrowCallableQuery(String sql) throws SQLException {
return statementCache.borrow(new CallableQueryKey(sql));
}
@Override
public final CachedQuery borrowReturningQuery(String sql, String /* @Nullable */ [] columnNames)
throws SQLException {
return statementCache.borrow(new QueryWithReturningColumnsKey(sql, true, true,
columnNames
));
}
@Override
public CachedQuery borrowQueryByKey(Object key) throws SQLException {
return statementCache.borrow(key);
}
@Override
public void releaseQuery(CachedQuery cachedQuery) {
statementCache.put(cachedQuery.key, cachedQuery);
}
@Override
public final Object createQueryKey(String sql, boolean escapeProcessing,
boolean isParameterized, String /* @Nullable */ ... columnNames) {
Object key;
if (columnNames == null || columnNames.length != 0) {
// Null means "return whatever sensible columns are" (e.g. primary key, or serial, or something like that)
key = new QueryWithReturningColumnsKey(sql, isParameterized, escapeProcessing, columnNames);
} else if (isParameterized) {
// If no generated columns requested, just use the SQL as a cache key
key = sql;
} else {
key = new BaseQueryKey(sql, false, escapeProcessing);
}
return key;
}
@Override
public CachedQuery createQueryByKey(Object key) throws SQLException {
return cachedQueryCreateAction.create(key);
}
@Override
public final CachedQuery createQuery(String sql, boolean escapeProcessing,
boolean isParameterized, String /* @Nullable */ ... columnNames)
throws SQLException {
Object key = createQueryKey(sql, escapeProcessing, isParameterized, columnNames);
// Note: cache is not reused here for two reasons:
// 1) Simplify initial implementation for simple statements
// 2) Non-prepared statements are likely to have literals, thus query reuse would not be often
return createQueryByKey(key);
}
@Override
public boolean isColumnSanitiserDisabled() {
return columnSanitiserDisabled;
}
@Override
public EscapeSyntaxCallMode getEscapeSyntaxCallMode() {
return escapeSyntaxCallMode;
}
@Override
public PreferQueryMode getPreferQueryMode() {
return preferQueryMode;
}
@Override
public void setPreferQueryMode(PreferQueryMode mode) {
preferQueryMode = mode;
}
@Override
public AutoSave getAutoSave() {
return autoSave;
}
@Override
public void setAutoSave(AutoSave autoSave) {
this.autoSave = autoSave;
}
protected boolean willHealViaReparse(SQLException e) {
if (e == null || e.getSQLState() == null) {
return false;
}
// "prepared statement \"S_2\" does not exist"
if (PSQLState.INVALID_SQL_STATEMENT_NAME.getState().equals(e.getSQLState())) {
return true;
}
if (!PSQLState.NOT_IMPLEMENTED.getState().equals(e.getSQLState())) {
return false;
}
if (!(e instanceof PSQLException)) {
return false;
}
PSQLException pe = (PSQLException) e;
ServerErrorMessage serverErrorMessage = pe.getServerErrorMessage();
if (serverErrorMessage == null) {
return false;
}
// "cached plan must not change result type"
String routine = serverErrorMessage.getRoutine();
return "RevalidateCachedQuery".equals(routine) // 9.2+
|| "RevalidateCachedPlan".equals(routine); // <= 9.1
}
@Override
public boolean willHealOnRetry(SQLException e) {
if (autoSave == AutoSave.NEVER && getTransactionState() == TransactionState.FAILED) {
// If autorollback is not activated, then every statement will fail with
// 'transaction is aborted', etc, etc
return false;
}
return willHealViaReparse(e);
}
public boolean isFlushCacheOnDeallocate() {
return flushCacheOnDeallocate;
}
@Override
public void setFlushCacheOnDeallocate(boolean flushCacheOnDeallocate) {
this.flushCacheOnDeallocate = flushCacheOnDeallocate;
}
protected boolean hasNotifications() {
return !notifications.isEmpty();
}
@Override
  public final Map<String, String> getParameterStatuses() {
return Collections.unmodifiableMap(parameterStatuses);
}
@Override
public final /* @Nullable */ String getParameterStatus(String parameterName) {
return parameterStatuses.get(parameterName);
}
/**
* Update the parameter status map in response to a new ParameterStatus
* wire protocol message.
*
* The server sends ParameterStatus messages when GUC_REPORT settings are
* initially assigned and whenever they change.
*
* A future version may invoke a client-defined listener class at this point,
* so this should be the only access path.
*
* Keys are case-insensitive and case-preserving.
*
* The server doesn't provide a way to report deletion of a reportable
* parameter so we don't expose one here.
*
* @param parameterName case-insensitive case-preserving name of parameter to create or update
* @param parameterStatus new value of parameter
* @see org.postgresql.PGConnection#getParameterStatuses
* @see org.postgresql.PGConnection#getParameterStatus
*/
protected void onParameterStatus(String parameterName, String parameterStatus) {
if (parameterName == null || "".equals(parameterName)) {
throw new IllegalStateException("attempt to set GUC_REPORT parameter with null or empty-string name");
}
parameterStatuses.put(parameterName, parameterStatus);
}
}
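// Example (illustrative; queryExecutor is a hypothetical instance of a QueryExecutorBase
// subclass): GUC_REPORT parameters collected via onParameterStatus can be read back through the
// public API.
//
//   String encoding = queryExecutor.getParameterStatus("client_encoding"); // keys are case-insensitive
//   Map<String, String> all = queryExecutor.getParameterStatuses();        // unmodifiable snapshot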
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/QueryExecutorCloseAction.java 0100664 0000000 0000000 00000006421 00000250600 030405 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2023, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.core;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.io.Closeable;
import java.io.IOException;
import java.util.concurrent.atomic.AtomicReferenceFieldUpdater;
import java.util.logging.Level;
import java.util.logging.Logger;
/**
* The action performs connection cleanup, so it is properly terminated from the backend
* point of view.
* Implementation note: it should keep only the minimum number of object references
* to reduce heap usage in case the user abandons connection without closing it first.
*/
public class QueryExecutorCloseAction implements Closeable {
private static final Logger LOGGER = Logger.getLogger(QueryExecutorBase.class.getName());
@SuppressWarnings("RedundantCast")
// Cast is needed for checkerframework to accept the code
  private static final AtomicReferenceFieldUpdater<QueryExecutorCloseAction, /* @Nullable */ PGStream> PG_STREAM_UPDATER =
      AtomicReferenceFieldUpdater.newUpdater(
          QueryExecutorCloseAction.class, (Class</* @Nullable */ PGStream>) PGStream.class, "pgStream");
private volatile /* @Nullable */ PGStream pgStream;
public QueryExecutorCloseAction(PGStream pgStream) {
this.pgStream = pgStream;
}
public boolean isClosed() {
PGStream pgStream = this.pgStream;
return pgStream == null || pgStream.isClosed();
}
public void abort() {
PGStream pgStream = this.pgStream;
if (pgStream == null || !PG_STREAM_UPDATER.compareAndSet(this, pgStream, null)) {
// The connection has already been closed
return;
}
try {
LOGGER.log(Level.FINEST, " FE=> close socket");
pgStream.getSocket().close();
} catch (IOException e) {
// ignore
}
}
@Override
public void close() throws IOException {
LOGGER.log(Level.FINEST, " FE=> Terminate");
PGStream pgStream = this.pgStream;
if (pgStream == null || !PG_STREAM_UPDATER.compareAndSet(this, pgStream, null)) {
// The connection has already been closed
return;
}
sendCloseMessage(pgStream);
// Technically speaking, this check should not be needed,
// however org.postgresql.test.jdbc2.ConnectionTest.testPGStreamSettings
// closes pgStream reflectively, so here's an extra check to prevent failures
// when getNetworkTimeout is called on a closed stream
if (pgStream.isClosed()) {
return;
}
pgStream.flush();
pgStream.close();
}
public void sendCloseMessage(PGStream pgStream) throws IOException {
// Technically speaking, this check should not be needed,
// however org.postgresql.test.jdbc2.ConnectionTest.testPGStreamSettings
// closes pgStream reflectively, so here's an extra check to prevent failures
// when getNetworkTimeout is called on a closed stream
if (pgStream.isClosed()) {
return;
}
// Prevent blocking the thread for too long
    // The connection will be discarded anyway, so there's not much sense in waiting long
int timeout = pgStream.getNetworkTimeout();
if (timeout == 0 || timeout > 1000) {
pgStream.setNetworkTimeout(1000);
}
pgStream.sendChar(PgMessageType.TERMINATE_REQUEST);
pgStream.sendInteger4(4);
}
}
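/*
 * Editor's note (descriptive, derived from the code above): close() performs the graceful
 * shutdown - it sends the Terminate message with the network timeout capped at one second,
 * then flushes and closes the stream - while abort() simply drops the socket. Both paths race
 * on a single compareAndSet of the pgStream field, so whichever runs first wins and the other
 * becomes a no-op.
 */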
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/QueryWithReturningColumnsKey.java 0100664 0000000 0000000 00000004656 00000250600 031316 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2003, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.core;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.util.Arrays;
/**
 * Cache key for a query that has some returning columns.
* {@code columnNames} should contain non-quoted column names.
* The parser will quote them automatically.
*
* There's a special case of {@code columnNames == new String[]{"*"}} that means all columns
* should be returned. {@link Parser} is aware of that and does not quote {@code *}
*/
class QueryWithReturningColumnsKey extends BaseQueryKey {
public final String[] columnNames;
private int size; // query length cannot exceed MAX_INT
QueryWithReturningColumnsKey(String sql, boolean isParameterized, boolean escapeProcessing,
String /* @Nullable */ [] columnNames) {
super(sql, isParameterized, escapeProcessing);
if (columnNames == null) {
// TODO: teach parser to fetch key columns somehow when no column names were given
columnNames = new String[]{"*"};
}
this.columnNames = columnNames;
}
@Override
public long getSize() {
int size = this.size;
if (size != 0) {
return size;
}
size = (int) super.getSize();
if (columnNames != null) {
size += 16; // array itself
for (String columnName: columnNames) {
size += columnName.length() * 2; // 2 bytes per char, revise with Java 9's compact strings
}
}
this.size = size;
return size;
}
@Override
public String toString() {
return "QueryWithReturningColumnsKey{"
+ "sql='" + sql + '\''
+ ", isParameterized=" + isParameterized
+ ", escapeProcessing=" + escapeProcessing
+ ", columnNames=" + Arrays.toString(columnNames)
+ '}';
}
@Override
public boolean equals(/* @Nullable */ Object o) {
if (this == o) {
return true;
}
if (o == null || getClass() != o.getClass()) {
return false;
}
if (!super.equals(o)) {
return false;
}
QueryWithReturningColumnsKey that = (QueryWithReturningColumnsKey) o;
// Probably incorrect - comparing Object[] arrays with Arrays.equals
return Arrays.equals(columnNames, that.columnNames);
}
@Override
public int hashCode() {
int result = super.hashCode();
result = 31 * result + Arrays.hashCode(columnNames);
return result;
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/ReplicationProtocol.java 0100664 0000000 0000000 00000002436 00000250600 027432 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2016, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.core;
import org.postgresql.replication.PGReplicationStream;
import org.postgresql.replication.fluent.logical.LogicalReplicationOptions;
import org.postgresql.replication.fluent.physical.PhysicalReplicationOptions;
import java.sql.SQLException;
/**
 * Abstracts the protocol-specific details of physical and logical replication.
 *
 * Each connection opened with replication options is associated with its own ReplicationProtocol
 * instance.
*/
public interface ReplicationProtocol {
/**
* Starts logical replication.
* @param options not null options for logical replication stream
   * @return not null stream instance from which WAL records decoded by the output plugin can be
   *     fetched
* @throws SQLException on error
*/
PGReplicationStream startLogical(LogicalReplicationOptions options) throws SQLException;
/**
* Starts physical replication.
* @param options not null options for physical replication stream
   * @return not null stream instance from which WAL records can be fetched
* @throws SQLException on error
*/
PGReplicationStream startPhysical(PhysicalReplicationOptions options) throws SQLException;
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/ResultCursor.java 0100664 0000000 0000000 00000001323 00000250600 026105 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2004, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
// Copyright (c) 2004, Open Cloud Limited.
package org.postgresql.core;
/**
* Abstraction of a cursor over a returned resultset. This is an opaque interface that only provides
* a way to close the cursor; all other operations are done by passing a ResultCursor to
* QueryExecutor methods.
*
* @author Oliver Jowett (oliver@opencloud.com)
*/
public interface ResultCursor {
/**
* Close this cursor. This may not immediately free underlying resources but may make it happen
* more promptly. Closed cursors should not be passed to QueryExecutor methods.
*/
void close();
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/ResultHandler.java 0100664 0000000 0000000 00000007334 00000250600 026215 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2004, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
// Copyright (c) 2004, Open Cloud Limited.
package org.postgresql.core;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.sql.SQLException;
import java.sql.SQLWarning;
import java.util.List;
/**
* Callback interface for passing query results from the protocol-specific layer to the
* protocol-independent JDBC implementation code.
*
* In general, a single query execution will consist of a number of calls to handleResultRows,
* handleCommandStatus, handleWarning, and handleError, followed by a single call to
* handleCompletion when query execution is complete. If the caller wants to throw SQLException,
* this can be done in handleCompletion.
*
* Each executed query ends with a call to handleResultRows, handleCommandStatus, or handleError. If
* an error occurs, subsequent queries won't generate callbacks.
*
* @author Oliver Jowett (oliver@opencloud.com)
*/
public interface ResultHandler {
/**
* Called when result rows are received from a query.
*
* @param fromQuery the underlying query that generated these results; this may not be very
* specific (e.g. it may be a query that includes multiple statements).
   * @param fields column metadata for the resultset; might be null if
   *        Query.QUERY_NO_METADATA was specified.
   * @param tuples the actual data
   * @param cursor a cursor to use to fetch additional data; null if no further results
   *        are present.
*/
  void handleResultRows(Query fromQuery, Field[] fields, List<Tuple> tuples,
/* @Nullable */ ResultCursor cursor);
/**
* Called when a query that did not return a resultset completes.
*
* @param status the command status string (e.g. "SELECT") returned by the backend
* @param updateCount the number of rows affected by an INSERT, UPDATE, DELETE, FETCH, or MOVE
* command; -1 if not available.
* @param insertOID for a single-row INSERT query, the OID of the newly inserted row; 0 if not
* available.
*/
void handleCommandStatus(String status, long updateCount, long insertOID);
/**
* Called when a warning is emitted.
*
* @param warning the warning that occurred.
*/
void handleWarning(SQLWarning warning);
/**
* Called when an error occurs. Subsequent queries are abandoned; in general the only calls
* between a handleError call and a subsequent handleCompletion call are handleError or
* handleWarning.
*
* @param error the error that occurred
*/
void handleError(SQLException error);
/**
* Called before a QueryExecutor method returns. This method may throw a SQLException if desired;
* if it does, the QueryExecutor method will propagate that exception to the original caller.
*
* @throws SQLException if the handler wishes the original method to throw an exception.
*/
void handleCompletion() throws SQLException;
/**
   * Callback for batch statements. In case a batch statement is executed in autocommit==true mode,
   * the executor might commit as it sees best, so the result handler should track which
* statements are executed successfully and which are not.
*/
void secureProgress();
/**
* Returns the first encountered exception. The rest are chained via {@link SQLException#setNextException(SQLException)}
* @return the first encountered exception
*/
/* @Nullable */ SQLException getException();
/**
* Returns the first encountered warning. The rest are chained via {@link SQLException#setNextException(SQLException)}
* @return the first encountered warning
*/
/* @Nullable */ SQLWarning getWarning();
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/ResultHandlerBase.java 0100664 0000000 0000000 00000004455 00000250600 027011 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2004, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
// Copyright (c) 2004, Open Cloud Limited.
package org.postgresql.core;
import static org.postgresql.util.internal.Nullness.castNonNull;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.sql.SQLException;
import java.sql.SQLWarning;
import java.util.List;
/**
* Empty implementation of {@link ResultHandler} interface.
* {@link SQLException#setNextException(SQLException)} has {@code O(N)} complexity,
* so this class tracks the last exception object to speedup {@code setNextException}.
*/
public class ResultHandlerBase implements ResultHandler {
// Last exception is tracked to avoid O(N) SQLException#setNextException just in case there
// will be lots of exceptions (e.g. all batch rows fail with constraint violation or so)
private /* @Nullable */ SQLException firstException;
private /* @Nullable */ SQLException lastException;
private /* @Nullable */ SQLWarning firstWarning;
private /* @Nullable */ SQLWarning lastWarning;
@Override
  public void handleResultRows(Query fromQuery, Field[] fields, List<Tuple> tuples,
/* @Nullable */ ResultCursor cursor) {
}
@Override
public void handleCommandStatus(String status, long updateCount, long insertOID) {
}
@Override
public void secureProgress() {
}
@Override
public void handleWarning(SQLWarning warning) {
if (firstWarning == null) {
firstWarning = lastWarning = warning;
return;
}
SQLWarning lastWarning = castNonNull(this.lastWarning);
lastWarning.setNextException(warning);
this.lastWarning = warning;
}
@Override
public void handleError(SQLException error) {
if (firstException == null) {
firstException = lastException = error;
return;
}
castNonNull(lastException).setNextException(error);
this.lastException = error;
}
@Override
public void handleCompletion() throws SQLException {
SQLException firstException = this.firstException;
if (firstException != null) {
throw firstException;
}
}
@Override
public /* @Nullable */ SQLException getException() {
return firstException;
}
@Override
public /* @Nullable */ SQLWarning getWarning() {
return firstWarning;
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/ResultHandlerDelegate.java 0100664 0000000 0000000 00000004003 00000250600 027636 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2003, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.core;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.sql.SQLException;
import java.sql.SQLWarning;
import java.util.List;
/**
* Internal to the driver class, please do not use in the application.
*
* The class simplifies creation of ResultHandler delegates: it provides default implementation
* for the interface methods
*/
public class ResultHandlerDelegate implements ResultHandler {
private final /* @Nullable */ ResultHandler delegate;
public ResultHandlerDelegate(/* @Nullable */ ResultHandler delegate) {
this.delegate = delegate;
}
@Override
  public void handleResultRows(Query fromQuery, Field[] fields, List<Tuple> tuples,
/* @Nullable */ ResultCursor cursor) {
if (delegate != null) {
delegate.handleResultRows(fromQuery, fields, tuples, cursor);
}
}
@Override
public void handleCommandStatus(String status, long updateCount, long insertOID) {
if (delegate != null) {
delegate.handleCommandStatus(status, updateCount, insertOID);
}
}
@Override
public void handleWarning(SQLWarning warning) {
if (delegate != null) {
delegate.handleWarning(warning);
}
}
@Override
public void handleError(SQLException error) {
if (delegate != null) {
delegate.handleError(error);
}
}
@Override
public void handleCompletion() throws SQLException {
if (delegate != null) {
delegate.handleCompletion();
}
}
@Override
public void secureProgress() {
if (delegate != null) {
delegate.secureProgress();
}
}
@Override
public /* @Nullable */ SQLException getException() {
if (delegate != null) {
return delegate.getException();
}
return null;
}
@Override
public /* @Nullable */ SQLWarning getWarning() {
if (delegate != null) {
return delegate.getWarning();
}
return null;
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/ServerVersion.java 0100664 0000000 0000000 00000013251 00000250600 026250 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2003, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.core;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.text.NumberFormat;
import java.text.ParsePosition;
/**
* Enumeration for PostgreSQL versions.
*/
public enum ServerVersion implements Version {
INVALID("0.0.0"),
// @Deprecated
v8_2("8.2.0"),
// @Deprecated
v8_3("8.3.0"),
// @Deprecated
v8_4("8.4.0"),
// @Deprecated
v9_0("9.0.0"),
v9_1("9.1.0"),
v9_2("9.2.0"),
v9_3("9.3.0"),
v9_4("9.4.0"),
v9_5("9.5.0"),
v9_6("9.6.0"),
v10("10"),
v11("11"),
v12("12"),
v13("13"),
v14("14"),
v15("15"),
v16("16"),
v17("17"),
v18("18")
;
private final int version;
ServerVersion(String version) {
this.version = parseServerVersionStr(version);
}
/**
* Get a machine-readable version number.
*
* @return the version in numeric XXYYZZ form, e.g. 90401 for 9.4.1
*/
@Override
public int getVersionNum() {
return version;
}
@Override
public int getMajorVersionNumber() {
return version / 10000;
}
/**
   * Attempt to parse the server version string into an XXYYZZ form version number wrapped in a
   * {@link Version}.
*
* If the specified version cannot be parsed, the {@link Version#getVersionNum()} will return 0.
*
   * @param version server version string, e.g. "9.4.1", or a pre-parsed numeric form such as "090401"
* @return a {@link Version} representing the specified version string.
*/
public static Version from(/* @Nullable */ String version) {
final int versionNum = parseServerVersionStr(version);
return new Version() {
@Override
public int getVersionNum() {
return versionNum;
}
@Override
public int getMajorVersionNumber() {
return versionNum / 10000;
}
@Override
public boolean equals(/* @Nullable */ Object obj) {
if (obj instanceof Version) {
return this.getVersionNum() == ((Version) obj).getVersionNum();
}
return false;
}
@Override
public int hashCode() {
return getVersionNum();
}
@Override
public String toString() {
return Integer.toString(versionNum);
}
};
}
/**
* Attempt to parse the server version string into an XXYYZZ form version number.
*
* Returns 0 if the version could not be parsed.
*
* Returns minor version 0 if the minor version could not be determined, e.g. devel or beta
* releases.
*
* If a single major part like 90400 is passed, it's assumed to be a pre-parsed version and
* returned verbatim. (Anything equal to or greater than 10000 is presumed to be this form).
*
* The yy or zz version parts may be larger than 99. A NumberFormatException is thrown if a
* version part is out of range.
*
   * @param serverVersion server version string to parse, e.g. "9.4.1", or a pre-parsed form like "90401"
* @return server version in number form
*/
static int parseServerVersionStr(/* @Nullable */ String serverVersion) throws NumberFormatException {
if (serverVersion == null) {
return 0;
}
NumberFormat numformat = NumberFormat.getIntegerInstance();
numformat.setGroupingUsed(false);
ParsePosition parsepos = new ParsePosition(0);
int[] parts = new int[3];
int versionParts;
for (versionParts = 0; versionParts < 3; versionParts++) {
Number part = (Number) numformat.parseObject(serverVersion, parsepos);
if (part == null) {
break;
}
parts[versionParts] = part.intValue();
if (parsepos.getIndex() == serverVersion.length()
|| serverVersion.charAt(parsepos.getIndex()) != '.') {
break;
}
// Skip .
parsepos.setIndex(parsepos.getIndex() + 1);
}
versionParts++;
if (parts[0] >= 10000) {
/*
* PostgreSQL version 1000? I don't think so. We're seeing a version like 90401; return it
* verbatim, but only if there's nothing else in the version. If there is, treat it as a parse
* error.
*/
if (parsepos.getIndex() == serverVersion.length() && versionParts == 1) {
return parts[0];
} else {
throw new NumberFormatException(
"First major-version part equal to or greater than 10000 in invalid version string: "
+ serverVersion);
}
}
/* #667 - Allow for versions with greater than 3 parts.
      For versions with more than 3 parts, still return 3 parts (4th part ignored for now
      as no functionality is dependent on the 4th part).
Allows for future versions of the server to utilize more than 3 part version numbers
without upgrading the jdbc driver */
if (versionParts >= 3) {
if (parts[1] > 99) {
throw new NumberFormatException(
"Unsupported second part of major version > 99 in invalid version string: "
+ serverVersion);
}
if (parts[2] > 99) {
throw new NumberFormatException(
"Unsupported second part of minor version > 99 in invalid version string: "
+ serverVersion);
}
return (parts[0] * 100 + parts[1]) * 100 + parts[2];
}
if (versionParts == 2) {
if (parts[0] >= 10) {
return parts[0] * 100 * 100 + parts[1];
}
if (parts[1] > 99) {
throw new NumberFormatException(
"Unsupported second part of major version > 99 in invalid version string: "
+ serverVersion);
}
return (parts[0] * 100 + parts[1]) * 100;
}
if (versionParts == 1) {
if (parts[0] >= 10) {
return parts[0] * 100 * 100;
}
}
return 0; /* unknown */
}
}
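/*
 * Editor's illustration of parseServerVersionStr (the values follow directly from the parsing
 * rules above; this is not an official reference table):
 *
 *   parseServerVersionStr("9.4.1")  -> 90401    // (9 * 100 + 4) * 100 + 1
 *   parseServerVersionStr("9.6")    -> 90600    // missing minor part defaults to 0
 *   parseServerVersionStr("10.2")   -> 100002   // from PostgreSQL 10 on, the second part fills the ZZ slot
 *   parseServerVersionStr("14")     -> 140000
 *   parseServerVersionStr("90401")  -> 90401    // values >= 10000 are treated as already parsed
 *   parseServerVersionStr(null)     -> 0        // unparseable input yields 0
 */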
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/SetupQueryRunner.java 0100664 0000000 0000000 00000004063 00000250600 026755 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2003, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
// Copyright (c) 2004, Open Cloud Limited.
package org.postgresql.core;
import org.postgresql.util.GT;
import org.postgresql.util.PSQLException;
import org.postgresql.util.PSQLState;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.sql.SQLException;
import java.sql.SQLWarning;
import java.util.List;
/**
* Poor man's Statement & ResultSet, used for initial queries while we're still initializing the
* system.
*/
public class SetupQueryRunner {
private static class SimpleResultHandler extends ResultHandlerBase {
    private /* @Nullable */ List<Tuple> tuples;
    /* @Nullable */ List<Tuple> getResults() {
return tuples;
}
@Override
    public void handleResultRows(Query fromQuery, Field[] fields, List<Tuple> tuples,
/* @Nullable */ ResultCursor cursor) {
this.tuples = tuples;
}
@Override
public void handleWarning(SQLWarning warning) {
// We ignore warnings. We assume we know what we're
// doing in the setup queries.
}
}
public static /* @Nullable */ Tuple run(QueryExecutor executor, String queryString,
boolean wantResults) throws SQLException {
Query query = executor.createSimpleQuery(queryString);
SimpleResultHandler handler = new SimpleResultHandler();
int flags = QueryExecutor.QUERY_ONESHOT | QueryExecutor.QUERY_SUPPRESS_BEGIN
| QueryExecutor.QUERY_EXECUTE_AS_SIMPLE;
if (!wantResults) {
flags |= QueryExecutor.QUERY_NO_RESULTS | QueryExecutor.QUERY_NO_METADATA;
}
try {
executor.execute(query, null, handler, 0, 0, flags);
} finally {
query.close();
}
if (!wantResults) {
return null;
}
    List<Tuple> tuples = handler.getResults();
if (tuples == null || tuples.size() != 1) {
throw new PSQLException(GT.tr("An unexpected result was returned by a query."),
PSQLState.CONNECTION_UNABLE_TO_CONNECT);
}
return tuples.get(0);
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/SocketFactoryFactory.java 0100664 0000000 0000000 00000004670 00000250600 027551 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2003, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.core;
import org.postgresql.PGProperty;
import org.postgresql.ssl.LibPQFactory;
import org.postgresql.util.GT;
import org.postgresql.util.ObjectFactory;
import org.postgresql.util.PSQLException;
import org.postgresql.util.PSQLState;
import java.util.Properties;
import javax.net.SocketFactory;
import javax.net.ssl.SSLSocketFactory;
/**
* Instantiates {@link SocketFactory} based on the {@link PGProperty#SOCKET_FACTORY}.
*/
public class SocketFactoryFactory {
/**
* Instantiates {@link SocketFactory} based on the {@link PGProperty#SOCKET_FACTORY}.
*
* @param info connection properties
* @return socket factory
* @throws PSQLException if something goes wrong
*/
public static SocketFactory getSocketFactory(Properties info) throws PSQLException {
// Socket factory
String socketFactoryClassName = PGProperty.SOCKET_FACTORY.getOrDefault(info);
if (socketFactoryClassName == null) {
return SocketFactory.getDefault();
}
try {
return ObjectFactory.instantiate(SocketFactory.class, socketFactoryClassName, info, true,
PGProperty.SOCKET_FACTORY_ARG.getOrDefault(info));
} catch (Exception e) {
throw new PSQLException(
GT.tr("The SocketFactory class provided {0} could not be instantiated.",
socketFactoryClassName),
PSQLState.CONNECTION_FAILURE, e);
}
}
/**
* Instantiates {@link SSLSocketFactory} based on the {@link PGProperty#SSL_FACTORY}.
*
* @param info connection properties
* @return SSL socket factory
* @throws PSQLException if something goes wrong
*/
public static SSLSocketFactory getSslSocketFactory(Properties info) throws PSQLException {
String classname = PGProperty.SSL_FACTORY.getOrDefault(info);
if (classname == null
|| "org.postgresql.ssl.jdbc4.LibPQFactory".equals(classname)
|| "org.postgresql.ssl.LibPQFactory".equals(classname)) {
return new LibPQFactory(info);
}
try {
return ObjectFactory.instantiate(SSLSocketFactory.class, classname, info, true,
PGProperty.SSL_FACTORY_ARG.getOrDefault(info));
} catch (Exception e) {
throw new PSQLException(
GT.tr("The SSLSocketFactory class provided {0} could not be instantiated.", classname),
PSQLState.CONNECTION_FAILURE, e);
}
}
}
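/*
 * Editor's usage sketch (illustrative, not part of the driver): the factory class is selected
 * purely through connection properties. The custom class name and argument below are
 * hypothetical placeholders.
 *
 *   Properties info = new Properties();
 *   PGProperty.SOCKET_FACTORY.set(info, "com.example.TunnelSocketFactory");
 *   PGProperty.SOCKET_FACTORY_ARG.set(info, "tunnel-host:2222");
 *   SocketFactory factory = SocketFactoryFactory.getSocketFactory(info);
 *
 * When no property is set, getSocketFactory() falls back to SocketFactory.getDefault() and
 * getSslSocketFactory() falls back to the bundled LibPQFactory.
 */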
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/SqlCommand.java 0100664 0000000 0000000 00000005425 00000250600 025476 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2003, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.core;
import static org.postgresql.core.SqlCommandType.INSERT;
import static org.postgresql.core.SqlCommandType.SELECT;
import static org.postgresql.core.SqlCommandType.WITH;
/**
* Data Modification Language inspection support.
*
* @author Jeremy Whiting jwhiting@redhat.com
* @author Christopher Deckers (chrriis@gmail.com)
*
*/
public class SqlCommand {
public static final SqlCommand BLANK = SqlCommand.createStatementTypeInfo(SqlCommandType.BLANK);
public boolean isBatchedReWriteCompatible() {
return valuesBraceOpenPosition >= 0;
}
public int getBatchRewriteValuesBraceOpenPosition() {
return valuesBraceOpenPosition;
}
public int getBatchRewriteValuesBraceClosePosition() {
return valuesBraceClosePosition;
}
public SqlCommandType getType() {
return commandType;
}
public boolean isReturningKeywordPresent() {
return parsedSQLhasRETURNINGKeyword;
}
public boolean returnsRows() {
return parsedSQLhasRETURNINGKeyword || commandType == SELECT || commandType == WITH;
}
public static SqlCommand createStatementTypeInfo(SqlCommandType type,
boolean isBatchedReWritePropertyConfigured,
int valuesBraceOpenPosition, int valuesBraceClosePosition, boolean isRETURNINGkeywordPresent,
int priorQueryCount) {
return new SqlCommand(type, isBatchedReWritePropertyConfigured,
valuesBraceOpenPosition, valuesBraceClosePosition, isRETURNINGkeywordPresent,
priorQueryCount);
}
public static SqlCommand createStatementTypeInfo(SqlCommandType type) {
return new SqlCommand(type, false, -1, -1, false, 0);
}
public static SqlCommand createStatementTypeInfo(SqlCommandType type,
boolean isRETURNINGkeywordPresent) {
return new SqlCommand(type, false, -1, -1, isRETURNINGkeywordPresent, 0);
}
private SqlCommand(SqlCommandType type, boolean isBatchedReWriteConfigured,
int valuesBraceOpenPosition, int valuesBraceClosePosition, boolean isPresent,
int priorQueryCount) {
commandType = type;
parsedSQLhasRETURNINGKeyword = isPresent;
boolean batchedReWriteCompatible = (type == INSERT) && isBatchedReWriteConfigured
&& valuesBraceOpenPosition >= 0 && valuesBraceClosePosition > valuesBraceOpenPosition
&& !isPresent && priorQueryCount == 0;
this.valuesBraceOpenPosition = batchedReWriteCompatible ? valuesBraceOpenPosition : -1;
this.valuesBraceClosePosition = batchedReWriteCompatible ? valuesBraceClosePosition : -1;
}
private final SqlCommandType commandType;
private final boolean parsedSQLhasRETURNINGKeyword;
private final int valuesBraceOpenPosition;
private final int valuesBraceClosePosition;
}
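/*
 * Editor's illustration of the compatibility rules encoded in the private constructor above: a
 * command is batch re-write compatible only when it is an INSERT with a well-formed VALUES(...)
 * block, re-writing is enabled, no RETURNING keyword is present and there is no prior query in
 * the statement. For example (assuming re-writing is enabled):
 *
 *   INSERT INTO tab (a, b) VALUES (?, ?)               -- compatible, brace positions recorded
 *   INSERT INTO tab (a, b) VALUES (?, ?) RETURNING id  -- not compatible (RETURNING present)
 *   UPDATE tab SET a = ?                               -- not compatible (not an INSERT)
 *
 * For incompatible commands both brace positions are reset to -1, so
 * isBatchedReWriteCompatible() returns false.
 */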
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/SqlCommandType.java 0100664 0000000 0000000 00000000737 00000250600 026341 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2003, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.core;
/**
* Type information inspection support.
* @author Jeremy Whiting jwhiting@redhat.com
*
*/
public enum SqlCommandType {
/**
* Use BLANK for empty sql queries or when parsing the sql string is not
* necessary.
*/
BLANK,
INSERT,
UPDATE,
DELETE,
MOVE,
SELECT,
WITH,
CREATE,
ALTER
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/TransactionState.java 0100664 0000000 0000000 00000000335 00000250600 026721 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2003, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.core;
public enum TransactionState {
IDLE,
OPEN,
FAILED
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/Tuple.java 0100664 0000000 0000000 00000005105 00000250600 024524 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2020, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.core;
// import org.checkerframework.checker.index.qual.NonNegative;
// import org.checkerframework.checker.nullness.qual.Nullable;
// import org.checkerframework.dataflow.qual.Pure;
/**
* Class representing a row in a {@link java.sql.ResultSet}.
*/
public class Tuple {
private final boolean forUpdate;
final byte[] /* @Nullable */ [] data;
/**
* Construct an empty tuple. Used in updatable result sets.
* @param length the number of fields in the tuple.
*/
public Tuple(int length) {
this(new byte[length][], true);
}
/**
* Construct a populated tuple. Used when returning results.
* @param data the tuple data
*/
public Tuple(byte[] /* @Nullable */ [] data) {
this(data, false);
}
private Tuple(byte[] /* @Nullable */ [] data, boolean forUpdate) {
this.data = data;
this.forUpdate = forUpdate;
}
/**
* Number of fields in the tuple
* @return number of fields
*/
public /* @NonNegative */ int fieldCount() {
return data.length;
}
/**
* Total length in bytes of the tuple data.
* @return the number of bytes in this tuple
*/
public /* @NonNegative */ int length() {
int length = 0;
for (byte[] field : data) {
if (field != null) {
length += field.length;
}
}
return length;
}
/**
* Get the data for the given field
* @param index 0-based field position in the tuple
* @return byte array of the data
*/
/* @Pure */
public byte /* @Nullable */ [] get(/* @NonNegative */ int index) {
return data[index];
}
/**
* Create a copy of the tuple for updating.
* @return a copy of the tuple that allows updates
*/
public Tuple updateableCopy() {
return copy(true);
}
/**
* Create a read-only copy of the tuple
* @return a copy of the tuple that does not allow updates
*/
public Tuple readOnlyCopy() {
return copy(false);
}
private Tuple copy(boolean forUpdate) {
byte[][] dataCopy = new byte[data.length][];
System.arraycopy(data, 0, dataCopy, 0, data.length);
return new Tuple(dataCopy, forUpdate);
}
/**
* Set the given field to the given data.
* @param index 0-based field position
* @param fieldData the data to set
*/
public void set(/* @NonNegative */ int index, byte /* @Nullable */ [] fieldData) {
if (!forUpdate) {
throw new IllegalArgumentException("Attempted to write to readonly tuple");
}
data[index] = fieldData;
}
}
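/*
 * Editor's usage sketch (illustrative): rows arrive as read-only tuples; updatable result sets
 * take an updateable copy before writing a field. UTF_8 below stands for
 * java.nio.charset.StandardCharsets.UTF_8.
 *
 *   Tuple row = new Tuple(new byte[][] {"42".getBytes(UTF_8), null});  // read-only
 *   Tuple editable = row.updateableCopy();
 *   editable.set(1, "hello".getBytes(UTF_8));  // allowed
 *   row.set(1, "hello".getBytes(UTF_8));       // throws IllegalArgumentException
 */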
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/TypeInfo.java 0100664 0000000 0000000 00000012033 00000250600 025166 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2008, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.core;
import org.postgresql.util.PGobject;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.sql.SQLException;
import java.util.Iterator;
public interface TypeInfo {
void addCoreType(String pgTypeName, Integer oid, Integer sqlType, String javaClass,
Integer arrayOid);
  void addDataType(String type, Class<? extends PGobject> klass) throws SQLException;
/**
* Look up the SQL typecode for a given type oid.
*
* @param oid the type's OID
* @return the SQL type code (a constant from {@link java.sql.Types}) for the type
* @throws SQLException if an error occurs when retrieving sql type
*/
int getSQLType(int oid) throws SQLException;
/**
* Look up the SQL typecode for a given postgresql type name.
*
* @param pgTypeName the server type name to look up
* @return the SQL type code (a constant from {@link java.sql.Types}) for the type
* @throws SQLException if an error occurs when retrieving sql type
*/
int getSQLType(String pgTypeName) throws SQLException;
int getJavaArrayType(String className) throws SQLException;
/**
* Look up the oid for a given postgresql type name. This is the inverse of
* {@link #getPGType(int)}.
*
* @param pgTypeName the server type name to look up
* @return the type's OID, or 0 if unknown
* @throws SQLException if an error occurs when retrieving PG type
*/
int getPGType(String pgTypeName) throws SQLException;
/**
* Look up the postgresql type name for a given oid. This is the inverse of
* {@link #getPGType(String)}.
*
* @param oid the type's OID
* @return the server type name for that OID or null if unknown
* @throws SQLException if an error occurs when retrieving PG type
*/
/* @Nullable */ String getPGType(int oid) throws SQLException;
/**
* Look up the oid of an array's base type given the array's type oid.
*
* @param oid the array type's OID
* @return the base type's OID, or 0 if unknown
* @throws SQLException if an error occurs when retrieving array element
*/
int getPGArrayElement(int oid) throws SQLException;
/**
* Determine the oid of the given base postgresql type's array type.
*
   * @param elementTypeName the base type's name
* @return the array type's OID, or 0 if unknown
* @throws SQLException if an error occurs when retrieving array type
*/
int getPGArrayType(String elementTypeName) throws SQLException;
/**
* Determine the delimiter for the elements of the given array type oid.
*
* @param oid the array type's OID
* @return the base type's array type delimiter
* @throws SQLException if an error occurs when retrieving array delimiter
*/
char getArrayDelimiter(int oid) throws SQLException;
  Iterator<String> getPGTypeNamesWithSQLTypes();
  Iterator<Integer> getPGTypeOidsWithSQLTypes();
  /* @Nullable */ Class<? extends PGobject> getPGobject(String type);
String getJavaClass(int oid) throws SQLException;
/* @Nullable */ String getTypeForAlias(String alias);
int getPrecision(int oid, int typmod);
int getScale(int oid, int typmod);
boolean isCaseSensitive(int oid);
boolean isSigned(int oid);
int getDisplaySize(int oid, int typmod);
int getMaximumPrecision(int oid);
boolean requiresQuoting(int oid) throws SQLException;
/**
* Returns true if particular sqlType requires quoting.
* This method is used internally by the driver, so it might disappear without notice.
*
* @param sqlType sql type as in java.sql.Types
* @return true if the type requires quoting
* @throws SQLException if something goes wrong
*/
boolean requiresQuotingSqlType(int sqlType) throws SQLException;
/**
* Java Integers are signed 32-bit integers, but oids are unsigned 32-bit integers.
* We therefore read them as positive long values and then force them into signed integers
* (wrapping around into negative values when required) or we'd be unable to correctly
* handle the upper half of the oid space.
*
* This function handles the mapping of uint32-values in the long to java integers, and
* throws for values that are out of range.
*
* @param oid the oid as a long.
* @return the (internal) signed integer representation of the (unsigned) oid.
* @throws SQLException if the long has a value outside of the range representable by uint32
*/
int longOidToInt(long oid) throws SQLException;
/**
* Java Integers are signed 32-bit integers, but oids are unsigned 32-bit integers.
* We must therefore first map the (internal) integer representation to a positive long
* value before sending it to postgresql, or we would be unable to correctly handle the
* upper half of the oid space because these negative values are disallowed as OID values.
*
* @param oid the (signed) integer oid to convert into a long.
* @return the non-negative value of this oid, stored as a java long.
*/
long intOidToLong(int oid);
}
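/*
 * Editor's worked example of the unsigned-oid contract documented above. The values follow from
 * two's-complement wrapping of uint32 into a signed int; they do not depend on a particular
 * TypeInfo implementation:
 *
 *   longOidToInt(3_000_000_000L)  ->  -1_294_967_296   // upper half of the oid space wraps negative
 *   longOidToInt(4_294_967_295L)  ->  -1
 *   intOidToLong(-1)              ->  4_294_967_295L   // back to the unsigned value
 *   longOidToInt(4_294_967_296L)  ->  throws SQLException (outside the uint32 range)
 */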
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/Utils.java 0100664 0000000 0000000 00000014772 00000250600 024545 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2004, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
// Copyright (c) 2004, Open Cloud Limited.
package org.postgresql.core;
import org.postgresql.util.GT;
import org.postgresql.util.PGbytea;
import org.postgresql.util.PSQLException;
import org.postgresql.util.PSQLState;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.io.IOException;
import java.sql.SQLException;
/**
* Collection of utilities used by the protocol-level code.
*/
public class Utils {
/**
* Turn a bytearray into a printable form, representing each byte in hex.
*
* @param data the bytearray to stringize
* @return a hex-encoded printable representation of {@code data}
*/
public static String toHexString(byte[] data) {
StringBuilder sb = new StringBuilder(data.length * 2);
PGbytea.appendHexString(sb, data, 0, data.length);
return sb.toString();
}
/**
* Escape the given literal {@code value} and append it to the string builder {@code sbuf}. If
* {@code sbuf} is {@code null}, a new StringBuilder will be returned. The argument
* {@code standardConformingStrings} defines whether the backend expects standard-conforming
* string literals or allows backslash escape sequences.
*
* @param sbuf the string builder to append to; or {@code null}
* @param value the string value
* @param standardConformingStrings if standard conforming strings should be used
* @return the sbuf argument; or a new string builder for sbuf == null
* @throws SQLException if the string contains a {@code \0} character
*/
public static StringBuilder escapeLiteral(/* @Nullable */ StringBuilder sbuf, String value,
boolean standardConformingStrings) throws SQLException {
if (sbuf == null) {
sbuf = new StringBuilder((value.length() + 10) / 10 * 11); // Add 10% for escaping.
}
doAppendEscapedLiteral(sbuf, value, standardConformingStrings);
return sbuf;
}
/**
* Common part for {@link #escapeLiteral(StringBuilder, String, boolean)}.
*
* @param sbuf Either StringBuffer or StringBuilder as we do not expect any IOException to be
* thrown
* @param value value to append
* @param standardConformingStrings if standard conforming strings should be used
*/
private static void doAppendEscapedLiteral(Appendable sbuf, String value,
boolean standardConformingStrings) throws SQLException {
try {
if (standardConformingStrings) {
// With standard_conforming_strings on, escape only single-quotes.
for (int i = 0; i < value.length(); i++) {
char ch = value.charAt(i);
if (ch == '\0') {
throw new PSQLException(GT.tr("Zero bytes may not occur in string parameters."),
PSQLState.INVALID_PARAMETER_VALUE);
}
if (ch == '\'') {
sbuf.append('\'');
}
sbuf.append(ch);
}
} else {
// With standard_conforming_string off, escape backslashes and
// single-quotes, but still escape single-quotes by doubling, to
// avoid a security hazard if the reported value of
// standard_conforming_strings is incorrect, or an error if
// backslash_quote is off.
for (int i = 0; i < value.length(); i++) {
char ch = value.charAt(i);
if (ch == '\0') {
throw new PSQLException(GT.tr("Zero bytes may not occur in string parameters."),
PSQLState.INVALID_PARAMETER_VALUE);
}
if (ch == '\\' || ch == '\'') {
sbuf.append(ch);
}
sbuf.append(ch);
}
}
} catch (IOException e) {
throw new PSQLException(GT.tr("No IOException expected from StringBuffer or StringBuilder"),
PSQLState.UNEXPECTED_ERROR, e);
}
}
/**
* Escape the given identifier {@code value} and append it to the string builder {@code sbuf}.
* If {@code sbuf} is {@code null}, a new StringBuilder will be returned. This method is
   * different from {@code escapeLiteral} in that it includes the quoting required for the identifier
* while {@link #escapeLiteral(StringBuilder, String, boolean)} does not.
*
* @param sbuf the string builder to append to; or {@code null}
* @param value the string value
* @return the sbuf argument; or a new string builder for sbuf == null
* @throws SQLException if the string contains a {@code \0} character
*/
public static StringBuilder escapeIdentifier(/* @Nullable */ StringBuilder sbuf, String value)
throws SQLException {
if (sbuf == null) {
sbuf = new StringBuilder(2 + (value.length() + 10) / 10 * 11); // Add 10% for escaping.
}
doAppendEscapedIdentifier(sbuf, value);
return sbuf;
}
/**
* Common part for appendEscapedIdentifier.
*
* @param sbuf Either StringBuffer or StringBuilder as we do not expect any IOException to be
* thrown.
* @param value value to append
*/
private static void doAppendEscapedIdentifier(Appendable sbuf, String value) throws SQLException {
try {
sbuf.append('"');
for (int i = 0; i < value.length(); i++) {
char ch = value.charAt(i);
if (ch == '\0') {
throw new PSQLException(GT.tr("Zero bytes may not occur in identifiers."),
PSQLState.INVALID_PARAMETER_VALUE);
}
if (ch == '"') {
sbuf.append(ch);
}
sbuf.append(ch);
}
sbuf.append('"');
} catch (IOException e) {
throw new PSQLException(GT.tr("No IOException expected from StringBuffer or StringBuilder"),
PSQLState.UNEXPECTED_ERROR, e);
}
}
/**
* Attempt to parse the server version string into an XXYYZZ form version number.
*
* Returns 0 if the version could not be parsed.
*
* Returns minor version 0 if the minor version could not be determined, e.g. devel or beta
* releases.
*
* If a single major part like 90400 is passed, it's assumed to be a pre-parsed version and
* returned verbatim. (Anything equal to or greater than 10000 is presumed to be this form).
*
* The yy or zz version parts may be larger than 99. A NumberFormatException is thrown if a
* version part is out of range.
*
   * @param serverVersion server version string to parse, e.g. "9.4.1", or a pre-parsed form like "90401"
* @return server version in number form
*/
public static int parseServerVersionStr(/* @Nullable */ String serverVersion) throws NumberFormatException {
return ServerVersion.parseServerVersionStr(serverVersion);
}
}
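/*
 * Editor's illustrative examples for the escaping helpers above (results follow from the code;
 * note that escapeLiteral does not add the surrounding single quotes, while escapeIdentifier
 * does add the surrounding double quotes):
 *
 *   Utils.escapeLiteral(null, "O'Reilly", true).toString()   ->  O''Reilly
 *   Utils.escapeIdentifier(null, "my\"table").toString()     ->  "my""table"
 */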
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/Version.java 0100664 0000000 0000000 00000000573 00000250600 025064 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2003, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.core;
public interface Version {
/**
* Get a machine-readable version number.
*
* @return the version in numeric XXYYZZ form, e.g. 90401 for 9.4.1
*/
int getVersionNum();
int getMajorVersionNumber();
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/VisibleBufferedInputStream.java 0100664 0000000 0000000 00000023162 00000250600 030672 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2006, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.core;
import org.postgresql.util.ByteConverter;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;
import java.net.SocketTimeoutException;
/**
* A faster version of BufferedInputStream. Does no synchronisation and allows direct access to the
* used byte[] buffer.
*
* @author Mikko Tiihonen
*/
public class VisibleBufferedInputStream extends InputStream {
/**
* If a direct read to byte array is called that would require a smaller read from the wrapped
* stream that MINIMUM_READ then first fill the buffer and serve the bytes from there. Larger
* reads are directly done to the provided byte array.
*/
private static final int MINIMUM_READ = 1024;
/**
   * The span size, in bytes, used when scanning for the zero byte that terminates a C string.
*/
private static final int STRING_SCAN_SPAN = 1024;
/**
* The wrapped input stream.
*/
private final InputStream wrapped;
/**
* The buffer.
*/
private byte[] buffer;
/**
* Current read position in the buffer.
*/
private int index;
/**
* How far is the buffer filled with valid data.
*/
private int endIndex;
/**
* socket timeout has been requested
*/
private boolean timeoutRequested;
/**
* Creates a new buffer around the given stream.
*
* @param in The stream to buffer.
* @param bufferSize The initial size of the buffer.
*/
public VisibleBufferedInputStream(InputStream in, int bufferSize) {
wrapped = in;
buffer = new byte[bufferSize < MINIMUM_READ ? MINIMUM_READ : bufferSize];
}
/**
* {@inheritDoc}
*/
@Override
public int read() throws IOException {
if (ensureBytes(1)) {
return buffer[index++] & 0xFF;
}
return -1;
}
/**
* Reads an int2 value from the underlying stream as an unsigned integer (0..65535).
* @return int2 in the range of 0..65535
   * @throws IOException if an I/O error occurs.
*/
public int readInt2() throws IOException {
if (ensureBytes(2)) {
int res = ByteConverter.int2(buffer, index) & 0xffff;
index += 2;
return res;
}
throw new EOFException("End of stream reached while trying to read integer2");
}
/**
* Reads an int4 value from the underlying stream.
* @return int4 value from the underlying stream
   * @throws IOException if an I/O error occurs.
*/
public int readInt4() throws IOException {
if (ensureBytes(4)) {
int res = ByteConverter.int4(buffer, index);
index += 4;
return res;
}
throw new EOFException("End of stream reached while trying to read integer4");
}
/**
* Reads a byte from the buffer without advancing the index pointer.
*
* @return byte from the buffer without advancing the index pointer
* @throws IOException if something wrong happens
*/
public int peek() throws IOException {
if (ensureBytes(1)) {
return buffer[index] & 0xFF;
}
return -1;
}
/**
* Reads byte from the buffer without any checks. This method never reads from the underlying
* stream. Before calling this method the {@link #ensureBytes} method must have been called.
*
* @return The next byte from the buffer.
* @throws ArrayIndexOutOfBoundsException If ensureBytes was not called to make sure the buffer
* contains the byte.
*/
public byte readRaw() {
return buffer[index++];
}
/**
* Ensures that the buffer contains at least n bytes. This method invalidates the buffer and index
* fields.
*
* @param n The amount of bytes to ensure exists in buffer
* @return true if required bytes are available and false if EOF
* @throws IOException If reading of the wrapped stream failed.
*/
public boolean ensureBytes(int n) throws IOException {
return ensureBytes(n, true);
}
/**
* Ensures that the buffer contains at least n bytes. This method invalidates the buffer and index
* fields.
*
* @param n The amount of bytes to ensure exists in buffer
* @param block whether or not to block the IO
* @return true if required bytes are available and false if EOF or the parameter block was false and socket timeout occurred.
* @throws IOException If reading of the wrapped stream failed.
*/
public boolean ensureBytes(int n, boolean block) throws IOException {
int required = n - endIndex + index;
while (required > 0) {
if (!readMore(required, block)) {
return false;
}
required = n - endIndex + index;
}
return true;
}
/**
* Reads more bytes into the buffer.
*
   * @param wanted How many bytes should at least be read.
* @return True if at least some bytes were read.
* @throws IOException If reading of the wrapped stream failed.
*/
private boolean readMore(int wanted, boolean block) throws IOException {
if (endIndex == index) {
index = 0;
endIndex = 0;
}
int canFit = buffer.length - endIndex;
if (canFit < wanted) {
// would the wanted bytes fit if we compacted the buffer
// and still leave some slack
if (index + canFit > wanted + MINIMUM_READ) {
compact();
} else {
doubleBuffer();
}
canFit = buffer.length - endIndex;
}
int read = 0;
try {
read = wrapped.read(buffer, endIndex, canFit);
if (!block && read == 0) {
return false;
}
} catch (SocketTimeoutException e) {
if (!block) {
return false;
}
if (timeoutRequested) {
throw e;
}
}
if (read < 0) {
return false;
}
endIndex += read;
return true;
}
/**
* Doubles the size of the buffer.
*/
private void doubleBuffer() {
byte[] buf = new byte[buffer.length * 2];
moveBufferTo(buf);
buffer = buf;
}
/**
* Compacts the unread bytes of the buffer to the beginning of the buffer.
*/
private void compact() {
moveBufferTo(buffer);
}
/**
* Moves bytes from the buffer to the beginning of the destination buffer. Also sets the index and
* endIndex variables.
*
* @param dest The destination buffer.
*/
private void moveBufferTo(byte[] dest) {
int size = endIndex - index;
System.arraycopy(buffer, index, dest, 0, size);
index = 0;
endIndex = size;
}
/**
* {@inheritDoc}
*/
@Override
public int read(byte[] to, int off, int len) throws IOException {
if ((off | len | (off + len) | (to.length - (off + len))) < 0) {
throw new IndexOutOfBoundsException();
} else if (len == 0) {
return 0;
}
// if the read would go to wrapped stream, but would result
// in a small read then try read to the buffer instead
int avail = endIndex - index;
if (len - avail < MINIMUM_READ) {
ensureBytes(len);
avail = endIndex - index;
}
// first copy from buffer
if (avail > 0) {
if (len <= avail) {
System.arraycopy(buffer, index, to, off, len);
index += len;
return len;
}
System.arraycopy(buffer, index, to, off, avail);
len -= avail;
off += avail;
}
int read = avail;
// good place to reset index because the buffer is fully drained
index = 0;
endIndex = 0;
// then directly from wrapped stream
do {
int r;
try {
r = wrapped.read(to, off, len);
} catch (SocketTimeoutException e) {
if (read == 0 && timeoutRequested) {
throw e;
}
return read;
}
if (r <= 0) {
return read == 0 ? r : read;
}
read += r;
off += r;
len -= r;
} while (len > 0);
return read;
}
/**
* {@inheritDoc}
*/
@Override
public long skip(long n) throws IOException {
int avail = endIndex - index;
if (avail >= n) {
// Cast to int is safe here since the number of available bytes within the buffer
// always fits within int
index += (int) n;
return n;
}
n -= avail;
index = 0;
endIndex = 0;
return avail + wrapped.skip(n);
}
/**
* {@inheritDoc}
*/
@Override
public int available() throws IOException {
int avail = endIndex - index;
return avail > 0 ? avail : wrapped.available();
}
/**
* {@inheritDoc}
*/
@Override
public void close() throws IOException {
wrapped.close();
}
/**
   * Returns a direct handle to the underlying buffer. Use {@link #ensureBytes} to prefill the
   * required bytes into the buffer and {@link #getIndex} to fetch the current position within the buffer.
*
* @return The underlying buffer.
*/
public byte[] getBuffer() {
return buffer;
}
/**
* Returns the current read position in the buffer.
*
* @return the current read position in the buffer.
*/
public int getIndex() {
return index;
}
/**
* Scans the length of the next null terminated string (C-style string) from the stream.
*
* @return The length of the next null terminated string.
* @throws IOException If reading of stream fails.
* @throws EOFException If the stream did not contain any null terminators.
*/
public int scanCStringLength() throws IOException {
int pos = index;
while (true) {
while (pos < endIndex) {
if (buffer[pos++] == '\0') {
return pos - index;
}
}
if (!readMore(STRING_SCAN_SPAN, true)) {
throw new EOFException();
}
pos = index;
}
}
public void setTimeoutRequested(boolean timeoutRequested) {
this.timeoutRequested = timeoutRequested;
}
/**
* Returns the underlying stream.
* @return the underlying stream
*/
public InputStream getWrapped() {
return wrapped;
}
}
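/*
 * Editor's usage sketch (illustrative; "socketIn" is a hypothetical connected InputStream):
 * protocol code calls ensureBytes(n) first and then parses directly from the exposed buffer or
 * via the readInt2/readInt4 helpers, avoiding extra copies.
 *
 *   VisibleBufferedInputStream in = new VisibleBufferedInputStream(socketIn, 8192);
 *   if (in.ensureBytes(5)) {
 *     int messageTag = in.read();        // one byte, e.g. a protocol message type
 *     int messageLength = in.readInt4(); // next four bytes
 *   }
 */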
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/v3/ 0040775 0000000 0000000 00000000000 00000250600 023122 5 ustar 00 0000000 0000000 postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/v3/AuthenticationPluginManager.java 0100664 0000000 0000000 00000012433 00000250600 031416 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2021, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.core.v3;
import org.postgresql.PGProperty;
import org.postgresql.plugin.AuthenticationPlugin;
import org.postgresql.plugin.AuthenticationRequestType;
import org.postgresql.util.GT;
import org.postgresql.util.ObjectFactory;
import org.postgresql.util.PSQLException;
import org.postgresql.util.PSQLState;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.Properties;
import java.util.logging.Level;
import java.util.logging.Logger;
class AuthenticationPluginManager {
private static final Logger LOGGER = Logger.getLogger(AuthenticationPluginManager.class.getName());
@FunctionalInterface
  public interface PasswordAction<T, R> {
R apply(T password) throws PSQLException, IOException;
}
private AuthenticationPluginManager() {
}
/**
* If a password is requested by the server during connection initiation, this
* method will be invoked to supply the password. This method will only be
* invoked if the server actually requests a password, e.g. trust authentication
* will skip it entirely.
*
   * The caller provides an action method that will be invoked with the {@code char[]}
* password. After completion, for security reasons the {@code char[]} array will be
* wiped by filling it with zeroes. Callers must not rely on being able to read
* the password {@code char[]} after the action has completed.
*
* @param type The authentication type that is being requested
* @param info The connection properties for the connection
* @param action The action to invoke with the password
* @throws PSQLException Throws a PSQLException if the plugin class cannot be instantiated
* @throws IOException Bubbles up any thrown IOException from the provided action
*/
  public static <T> T withPassword(AuthenticationRequestType type, Properties info,
      PasswordAction<char /* @Nullable */ [], T> action) throws PSQLException, IOException {
char[] password = null;
String authPluginClassName = PGProperty.AUTHENTICATION_PLUGIN_CLASS_NAME.getOrDefault(info);
if (authPluginClassName == null || "".equals(authPluginClassName)) {
// Default auth plugin simply pulls password directly from connection properties
String passwordText = PGProperty.PASSWORD.getOrDefault(info);
if (passwordText != null) {
password = passwordText.toCharArray();
}
} else {
AuthenticationPlugin authPlugin;
try {
authPlugin = ObjectFactory.instantiate(AuthenticationPlugin.class, authPluginClassName, info,
false, null);
} catch (Exception ex) {
String msg = GT.tr("Unable to load Authentication Plugin {0}", authPluginClassName);
LOGGER.log(Level.FINE, msg, ex);
throw new PSQLException(msg, PSQLState.INVALID_PARAMETER_VALUE, ex);
}
password = authPlugin.getPassword(type);
}
try {
return action.apply(password);
} finally {
if (password != null) {
Arrays.fill(password, (char) 0);
}
}
}
/**
   * Helper that wraps {@link #withPassword(AuthenticationRequestType, Properties, PasswordAction)},
   * checks that the password is not null, and encodes it as a byte array. Used by internal code
   * paths that require an encoded password
* that may be an empty string, but not null.
*
* The caller provides a callback method that will be invoked with the {@code byte[]}
* encoded password. After completion, for security reasons the {@code byte[]} array will be
* wiped by filling it with zeroes. Callers must not rely on being able to read
* the password {@code byte[]} after the callback has completed.
* @param type The authentication type that is being requested
* @param info The connection properties for the connection
* @param action The action to invoke with the encoded password
* @throws PSQLException Throws a PSQLException if the plugin class cannot be instantiated or if the retrieved password is null.
* @throws IOException Bubbles up any thrown IOException from the provided callback
*/
  public static <T> T withEncodedPassword(AuthenticationRequestType type, Properties info,
      PasswordAction<byte[], T> action) throws PSQLException, IOException {
// Checkerframework infers `nullable byte[]` for the return type for unknown reason
@SuppressWarnings("RedundantTypeArguments")
    byte [] encodedPassword = AuthenticationPluginManager.<byte[]>withPassword(type, info, password -> {
if (password == null) {
throw new PSQLException(
GT.tr("The server requested password-based authentication, but no password was provided by plugin {0}",
PGProperty.AUTHENTICATION_PLUGIN_CLASS_NAME.getOrDefault(info)),
PSQLState.CONNECTION_REJECTED);
}
ByteBuffer buf = StandardCharsets.UTF_8.encode(CharBuffer.wrap(password));
byte[] bytes = new byte[buf.limit()];
buf.get(bytes);
return bytes;
});
try {
return action.apply(encodedPassword);
} finally {
Arrays.fill(encodedPassword, (byte) 0);
}
}
}
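/*
 * Editor's usage sketch of the callback pattern above (illustrative; the real call sites live in
 * this package's connection factory). The password bytes are only valid inside the action and
 * are zeroed afterwards, so anything needed later must be computed inside the callback.
 * "computeDigest" and "salt" are hypothetical placeholders; the request type is one of the
 * AuthenticationRequestType constants.
 *
 *   byte[] digest = AuthenticationPluginManager.withEncodedPassword(
 *       AuthenticationRequestType.MD5_PASSWORD, info,
 *       encodedPassword -> computeDigest(encodedPassword, salt));
 */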
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/v3/BatchedQuery.java 0100664 0000000 0000000 00000015054 00000250600 026347 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2003, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.core.v3;
import org.postgresql.core.NativeQuery;
import org.postgresql.core.ParameterList;
// import org.checkerframework.checker.nullness.qual.Nullable;
/**
 * Purpose of this object is to support batched query rewrite behaviour. It is responsible for
 * tracking the batch size and for cleaning up the query fragments after the batch execution
 * is complete. Intended to be used to wrap a Query that is present in the batchStatements
* collection.
*
* @author Jeremy Whiting jwhiting@redhat.com
* @author Christopher Deckers (chrriis@gmail.com)
*
*/
public class BatchedQuery extends SimpleQuery {
private /* @Nullable */ String sql;
private final int valuesBraceOpenPosition;
private final int valuesBraceClosePosition;
private final int batchSize;
private BatchedQuery /* @Nullable */ [] blocks;
public BatchedQuery(NativeQuery query, TypeTransferModeRegistry transferModeRegistry,
int valuesBraceOpenPosition,
int valuesBraceClosePosition, boolean sanitiserDisabled) {
super(query, transferModeRegistry, sanitiserDisabled);
this.valuesBraceOpenPosition = valuesBraceOpenPosition;
this.valuesBraceClosePosition = valuesBraceClosePosition;
this.batchSize = 1;
}
private BatchedQuery(BatchedQuery src, int batchSize) {
super(src);
this.valuesBraceOpenPosition = src.valuesBraceOpenPosition;
this.valuesBraceClosePosition = src.valuesBraceClosePosition;
this.batchSize = batchSize;
}
public BatchedQuery deriveForMultiBatch(int valueBlock) {
if (getBatchSize() != 1) {
throw new IllegalStateException("Only the original decorator can be derived.");
}
if (valueBlock == 1) {
return this;
}
int index = Integer.numberOfTrailingZeros(valueBlock) - 1;
if (valueBlock > 128 || valueBlock != (1 << (index + 1))) {
throw new IllegalArgumentException(
"Expected value block should be a power of 2 smaller or equal to 128. Actual block is "
+ valueBlock);
}
if (blocks == null) {
blocks = new BatchedQuery[7];
}
BatchedQuery bq = blocks[index];
if (bq == null) {
bq = new BatchedQuery(this, valueBlock);
blocks[index] = bq;
}
return bq;
}
@Override
public int getBatchSize() {
return batchSize;
}
/**
   * Method to return the sql based on the number of batches, skipping the initial
   * batch.
*/
@Override
public String getNativeSql() {
if (sql != null) {
return sql;
}
sql = buildNativeSql(null, DefaultSqlSerializationContext.STDSTR_IDEMPOTENT);
return sql;
}
private String buildNativeSql(/* @Nullable */ ParameterList params, SqlSerializationContext context) {
String sql = null;
// dynamically build sql with parameters for batches
String nativeSql = super.getNativeSql();
int batchSize = getBatchSize();
if (batchSize < 2) {
sql = nativeSql;
return sql;
}
if (nativeSql == null) {
sql = "";
return sql;
}
int valuesBlockCharCount = 0;
// Split the values section around every dynamic parameter.
int[] bindPositions = getNativeQuery().bindPositions;
int[] chunkStart = new int[1 + bindPositions.length];
int[] chunkEnd = new int[1 + bindPositions.length];
chunkStart[0] = valuesBraceOpenPosition;
if (bindPositions.length == 0) {
valuesBlockCharCount = valuesBraceClosePosition - valuesBraceOpenPosition + 1;
chunkEnd[0] = valuesBraceClosePosition + 1;
} else {
chunkEnd[0] = bindPositions[0];
// valuesBlockCharCount += chunks[0].length;
valuesBlockCharCount += chunkEnd[0] - chunkStart[0];
for (int i = 0; i < bindPositions.length; i++) {
int startIndex = bindPositions[i] + 2;
int endIndex =
i < bindPositions.length - 1 ? bindPositions[i + 1] : valuesBraceClosePosition + 1;
for (; startIndex < endIndex; startIndex++) {
if (!Character.isDigit(nativeSql.charAt(startIndex))) {
break;
}
}
chunkStart[i + 1] = startIndex;
chunkEnd[i + 1] = endIndex;
// valuesBlockCharCount += chunks[i + 1].length;
valuesBlockCharCount += chunkEnd[i + 1] - chunkStart[i + 1];
}
}
int length = nativeSql.length();
//valuesBraceOpenPosition + valuesBlockCharCount;
length += NativeQuery.calculateBindLength(bindPositions.length * batchSize);
length -= NativeQuery.calculateBindLength(bindPositions.length);
length += (valuesBlockCharCount + 1 /*comma*/) * (batchSize - 1 /* initial sql */);
StringBuilder s = new StringBuilder(length);
// Add query until end of values parameter block.
int pos;
if (bindPositions.length > 0 && params == null) {
// Add the first values (...) clause, it would be values($1,..., $n), and it matches with
// the values clause of a simple non-rewritten SQL
s.append(nativeSql, 0, valuesBraceClosePosition + 1);
pos = bindPositions.length + 1;
} else {
pos = 1;
batchSize++; // do not use super.toString(params) as it does not work if query ends with --
// We need to carefully add (...),(...), and we do not want to get (...) --, (...)
// s.append(super.toString(params));
s.append(nativeSql, 0, valuesBraceOpenPosition);
}
for (int i = 2; i <= batchSize; i++) {
if (i > 2 || pos != 1) {
// For "has binds" the first valuds
s.append(',');
}
s.append(nativeSql, chunkStart[0], chunkEnd[0]);
for (int j = 1; j < chunkStart.length; j++) {
if (params == null) {
NativeQuery.appendBindName(s, pos++);
} else {
s.append(params.toString(pos++, context));
}
s.append(nativeSql, chunkStart[j], chunkEnd[j]);
}
}
// Add trailing content: final query is like original with multi values.
// This could contain "--" comments, so it is important to add them at end.
s.append(nativeSql, valuesBraceClosePosition + 1, nativeSql.length());
sql = s.toString();
// Predict length only when building sql with $1, $2, ... (that is no specific params given)
assert params != null || s.length() == length
: "Predicted length != actual: " + length + " !=" + s.length();
return sql;
}
@Override
public String toString(/* @Nullable */ ParameterList params, SqlSerializationContext context) {
if (getBatchSize() < 2) {
return super.toString(params, context);
}
return buildNativeSql(params, context);
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/v3/ChannelBindingOption.java 0100664 0000000 0000000 00000002172 00000250600 030020 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2024, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.core.v3;
import org.postgresql.PGProperty;
import org.postgresql.util.GT;
import org.postgresql.util.PSQLException;
import org.postgresql.util.PSQLState;
import org.postgresql.util.internal.Nullness;
import java.util.Properties;
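/**
* Parsed representation of the {@code channelBinding} connection property
* ({@link PGProperty#CHANNEL_BINDING}); see {@link #of(Properties)} for the accepted values.
*/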
enum ChannelBindingOption {
/**
* Prevents the use of channel binding.
*/
DISABLE,
/**
* Means that the client will choose channel binding if available.
*/
PREFER,
/**
* Means that the connection must employ channel binding.
*/
REQUIRE;
public static ChannelBindingOption of(Properties info) throws PSQLException {
String option = Nullness.castNonNull(PGProperty.CHANNEL_BINDING.getOrDefault(info));
switch (option) {
case "disable":
return DISABLE;
case "prefer":
return PREFER;
case "require":
return REQUIRE;
default:
throw new PSQLException(GT.tr("Invalid channelBinding value: {0}", option),
PSQLState.CONNECTION_UNABLE_TO_CONNECT);
}
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/v3/CompositeParameterList.java 0100664 0000000 0000000 00000014730 00000250600 030426 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2004, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
// Copyright (c) 2004, Open Cloud Limited.
package org.postgresql.core.v3;
import org.postgresql.core.ParameterList;
import org.postgresql.util.ByteStreamWriter;
import org.postgresql.util.GT;
import org.postgresql.util.PSQLException;
import org.postgresql.util.PSQLState;
// import org.checkerframework.checker.index.qual.NonNegative;
// import org.checkerframework.checker.index.qual.Positive;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.io.InputStream;
import java.sql.SQLException;
/**
* Parameter list for V3 query strings that contain multiple statements. We delegate to one
* SimpleParameterList per statement, and translate parameter indexes as needed.
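*
* <p>Illustrative example (values not taken from the driver sources): for the compound query
* {@code SELECT ?; SELECT ?, ?} the offsets array is {@code {0, 1}}, so global parameter
* index 3 resolves to parameter 2 of the second statement's SimpleParameterList.</p>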
*
* @author Oliver Jowett (oliver@opencloud.com)
*/
class CompositeParameterList implements V3ParameterList {
CompositeParameterList(SimpleParameterList[] subparams, int[] offsets) {
this.subparams = subparams;
this.offsets = offsets;
this.total = offsets[offsets.length - 1] + subparams[offsets.length - 1].getInParameterCount();
}
private int findSubParam(/* @Positive */ int index) throws SQLException {
if (index < 1 || index > total) {
throw new PSQLException(
GT.tr("The column index is out of range: {0}, number of columns: {1}.", index, total),
PSQLState.INVALID_PARAMETER_VALUE);
}
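// offsets[i] holds the number of parameters that precede statement i, so the first entry
// (scanning from the last statement backwards) that is smaller than the 1-based index
// identifies the statement owning that parameter.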
for (int i = offsets.length - 1; i >= 0; i--) {
if (offsets[i] < index) {
return i;
}
}
throw new IllegalArgumentException("I am confused; can't find a subparam for index " + index);
}
@Override
public void registerOutParameter(/* @Positive */ int index, int sqlType) {
}
public int getDirection(int i) {
return 0;
}
@Override
public /* @NonNegative */ int getParameterCount() {
return total;
}
@Override
public /* @NonNegative */ int getInParameterCount() {
return total;
}
@Override
public /* @NonNegative */ int getOutParameterCount() {
return 0;
}
@Override
public int[] getTypeOIDs() {
int[] oids = new int[total];
for (int i = 0; i < offsets.length; i++) {
int[] subOids = subparams[i].getTypeOIDs();
System.arraycopy(subOids, 0, oids, offsets[i], subOids.length);
}
return oids;
}
@Override
public void setIntParameter(/* @Positive */ int index, int value) throws SQLException {
int sub = findSubParam(index);
subparams[sub].setIntParameter(index - offsets[sub], value);
}
@Override
public void setLiteralParameter(/* @Positive */ int index, String value, int oid) throws SQLException {
int sub = findSubParam(index);
subparams[sub].setStringParameter(index - offsets[sub], value, oid);
}
@Override
public void setStringParameter(/* @Positive */ int index, String value, int oid) throws SQLException {
int sub = findSubParam(index);
subparams[sub].setStringParameter(index - offsets[sub], value, oid);
}
@Override
public void setBinaryParameter(/* @Positive */ int index, byte[] value, int oid) throws SQLException {
int sub = findSubParam(index);
subparams[sub].setBinaryParameter(index - offsets[sub], value, oid);
}
@Override
public void setBytea(/* @Positive */ int index, byte[] data, int offset, /* @NonNegative */ int length) throws SQLException {
int sub = findSubParam(index);
subparams[sub].setBytea(index - offsets[sub], data, offset, length);
}
@Override
public void setBytea(/* @Positive */ int index, InputStream stream, /* @NonNegative */ int length) throws SQLException {
int sub = findSubParam(index);
subparams[sub].setBytea(index - offsets[sub], stream, length);
}
@Override
public void setBytea(/* @Positive */ int index, InputStream stream) throws SQLException {
int sub = findSubParam(index);
subparams[sub].setBytea(index - offsets[sub], stream);
}
@Override
public void setBytea(/* @Positive */ int index, ByteStreamWriter writer) throws SQLException {
int sub = findSubParam(index);
subparams[sub].setBytea(index - offsets[sub], writer);
}
@Override
public void setText(/* @Positive */ int index, InputStream stream) throws SQLException {
int sub = findSubParam(index);
subparams[sub].setText(index - offsets[sub], stream);
}
@Override
public void setNull(/* @Positive */ int index, int oid) throws SQLException {
int sub = findSubParam(index);
subparams[sub].setNull(index - offsets[sub], oid);
}
@Override
public String toString(/* @Positive */ int index, boolean standardConformingStrings) {
return toString(index, SqlSerializationContext.of(standardConformingStrings, true));
}
@Override
public String toString(/* @Positive */ int index, SqlSerializationContext context) {
try {
int sub = findSubParam(index);
return subparams[sub].toString(index - offsets[sub], context);
} catch (SQLException e) {
throw new IllegalStateException(e.getMessage());
}
}
@Override
public ParameterList copy() {
SimpleParameterList[] copySub = new SimpleParameterList[subparams.length];
for (int sub = 0; sub < subparams.length; sub++) {
copySub[sub] = (SimpleParameterList) subparams[sub].copy();
}
return new CompositeParameterList(copySub, offsets);
}
@Override
public void clear() {
for (SimpleParameterList subparam : subparams) {
subparam.clear();
}
}
@Override
public SimpleParameterList /* @Nullable */ [] getSubparams() {
return subparams;
}
@Override
public void checkAllParametersSet() throws SQLException {
for (SimpleParameterList subparam : subparams) {
subparam.checkAllParametersSet();
}
}
@Override
public byte /* @Nullable */ [][] getEncoding() {
return null; // unsupported
}
@Override
public byte /* @Nullable */ [] getFlags() {
return null; // unsupported
}
@Override
public int /* @Nullable */ [] getParamTypes() {
return null; // unsupported
}
@Override
public /* @Nullable */ Object /* @Nullable */ [] getValues() {
return null; // unsupported
}
@Override
public void appendAll(ParameterList list) throws SQLException {
// no-op, unsupported
}
@Override
public void convertFunctionOutParameters() {
for (SimpleParameterList subparam : subparams) {
subparam.convertFunctionOutParameters();
}
}
private final /* @Positive */ int total;
private final SimpleParameterList[] subparams;
private final int[] offsets;
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/v3/CompositeQuery.java 0100664 0000000 0000000 00000006260 00000250600 026756 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2004, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
// Copyright (c) 2004, Open Cloud Limited.
package org.postgresql.core.v3;
import org.postgresql.core.ParameterList;
import org.postgresql.core.Query;
import org.postgresql.core.SqlCommand;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.util.Map;
/**
* V3 Query implementation for queries that involve multiple statements. We split it up into one
* SimpleQuery per statement, and wrap the corresponding per-statement SimpleParameterList objects
* in a CompositeParameterList.
*
* @author Oliver Jowett (oliver@opencloud.com)
*/
class CompositeQuery implements Query {
CompositeQuery(SimpleQuery[] subqueries, int[] offsets) {
this.subqueries = subqueries;
this.offsets = offsets;
}
@Override
public ParameterList createParameterList() {
SimpleParameterList[] subparams = new SimpleParameterList[subqueries.length];
for (int i = 0; i < subqueries.length; i++) {
subparams[i] = (SimpleParameterList) subqueries[i].createParameterList();
}
return new CompositeParameterList(subparams, offsets);
}
@Override
public String toString(/* @Nullable */ ParameterList parameters) {
return toString(parameters, DefaultSqlSerializationContext.STDSTR_IDEMPOTENT);
}
@Override
public String toString(/* @Nullable */ ParameterList parameters, SqlSerializationContext context) {
SimpleParameterList[] subparams =
parameters == null ? null : ((V3ParameterList) parameters).getSubparams();
StringBuilder sbuf = new StringBuilder(
subqueries[0].toString(subparams == null ? null : subparams[0], context));
for (int i = 1; i < subqueries.length; i++) {
sbuf.append(';');
sbuf.append(subqueries[i].toString(subparams == null ? null : subparams[i], context));
}
return sbuf.toString();
}
@Override
public String getNativeSql() {
StringBuilder sbuf = new StringBuilder(subqueries[0].getNativeSql());
for (int i = 1; i < subqueries.length; i++) {
sbuf.append(';');
sbuf.append(subqueries[i].getNativeSql());
}
return sbuf.toString();
}
@Override
public /* @Nullable */ SqlCommand getSqlCommand() {
return null;
}
@Override
public String toString() {
return toString(null);
}
@Override
public void close() {
for (SimpleQuery subquery : subqueries) {
subquery.close();
}
}
@Override
public Query[] getSubqueries() {
return subqueries;
}
@Override
public boolean isStatementDescribed() {
for (SimpleQuery subquery : subqueries) {
if (!subquery.isStatementDescribed()) {
return false;
}
}
return true;
}
@Override
public boolean isEmpty() {
for (SimpleQuery subquery : subqueries) {
if (!subquery.isEmpty()) {
return false;
}
}
return true;
}
@Override
public int getBatchSize() {
return 0; // no-op, unsupported
}
@Override
public /* @Nullable */ Map<String, Integer> getResultSetColumnNameIndexMap() {
return null; // unsupported
}
private final SimpleQuery[] subqueries;
private final int[] offsets;
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/v3/ConnectionFactoryImpl.java 0100664 0000000 0000000 00000124272 00000250600 030243 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2003, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
// Copyright (c) 2004, Open Cloud Limited.
package org.postgresql.core.v3;
import static org.postgresql.util.internal.Nullness.castNonNull;
import org.postgresql.PGProperty;
import org.postgresql.core.ConnectionFactory;
import org.postgresql.core.PGStream;
import org.postgresql.core.PgMessageType;
import org.postgresql.core.ProtocolVersion;
import org.postgresql.core.QueryExecutor;
import org.postgresql.core.ServerVersion;
import org.postgresql.core.SetupQueryRunner;
import org.postgresql.core.SocketFactoryFactory;
import org.postgresql.core.Tuple;
import org.postgresql.core.Utils;
import org.postgresql.core.Version;
import org.postgresql.gss.MakeGSS;
import org.postgresql.hostchooser.CandidateHost;
import org.postgresql.hostchooser.GlobalHostStatusTracker;
import org.postgresql.hostchooser.HostChooser;
import org.postgresql.hostchooser.HostChooserFactory;
import org.postgresql.hostchooser.HostRequirement;
import org.postgresql.hostchooser.HostStatus;
import org.postgresql.jdbc.GSSEncMode;
import org.postgresql.jdbc.SslMode;
import org.postgresql.jdbc.SslNegotiation;
import org.postgresql.plugin.AuthenticationRequestType;
import org.postgresql.ssl.MakeSSL;
import org.postgresql.sspi.ISSPIClient;
import org.postgresql.util.GT;
import org.postgresql.util.HostSpec;
import org.postgresql.util.MD5Digest;
import org.postgresql.util.PSQLException;
import org.postgresql.util.PSQLState;
import org.postgresql.util.ServerErrorMessage;
import org.postgresql.util.internal.Nullness;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.io.IOException;
import java.net.ConnectException;
import java.nio.charset.StandardCharsets;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.TimeZone;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;
import javax.net.SocketFactory;
/**
* ConnectionFactory implementation for version 3 (7.4+) connections.
*
* @author Oliver Jowett (oliver@opencloud.com), based on the previous implementation
*/
public class ConnectionFactoryImpl extends ConnectionFactory {
private static class StartupParam {
private final String key;
private final String value;
StartupParam(String key, String value) {
this.key = key;
this.value = value;
}
@Override
public String toString() {
return this.key + "=" + this.value;
}
public byte[] getEncodedKey() {
return this.key.getBytes(StandardCharsets.UTF_8);
}
public byte[] getEncodedValue() {
return this.value.getBytes(StandardCharsets.UTF_8);
}
}
private static final Logger LOGGER = Logger.getLogger(ConnectionFactoryImpl.class.getName());
private static final int AUTH_REQ_OK = 0;
@SuppressWarnings("unused")
private static final int AUTH_REQ_KRB4 = 1;
@SuppressWarnings("unused")
private static final int AUTH_REQ_KRB5 = 2;
private static final int AUTH_REQ_PASSWORD = 3;
@SuppressWarnings("unused")
private static final int AUTH_REQ_CRYPT = 4;
private static final int AUTH_REQ_MD5 = 5;
@SuppressWarnings("unused")
private static final int AUTH_REQ_SCM = 6;
private static final int AUTH_REQ_GSS = 7;
private static final int AUTH_REQ_GSS_CONTINUE = 8;
private static final int AUTH_REQ_SSPI = 9;
private static final int AUTH_REQ_SASL = 10;
private static final int AUTH_REQ_SASL_CONTINUE = 11;
private static final int AUTH_REQ_SASL_FINAL = 12;
private static final String IN_HOT_STANDBY = "in_hot_standby";
private static ISSPIClient createSSPI(PGStream pgStream,
/* @Nullable */ String spnServiceClass,
boolean enableNegotiate) {
try {
@SuppressWarnings("unchecked")
Class<ISSPIClient> c = (Class<ISSPIClient>) Class.forName("org.postgresql.sspi.SSPIClient");
return c.getDeclaredConstructor(PGStream.class, String.class, boolean.class)
.newInstance(pgStream, spnServiceClass, enableNegotiate);
} catch (Exception e) {
// This caught quite a lot of exceptions, but until Java 7 there is no ReflectiveOperationException
throw new IllegalStateException("Unable to load org.postgresql.sspi.SSPIClient."
+ " Please check that SSPIClient is included in your pgjdbc distribution.", e);
}
}
private PGStream tryConnect(Properties info, SocketFactory socketFactory, HostSpec hostSpec,
SslMode sslMode, GSSEncMode gssEncMode)
throws SQLException, IOException {
int connectTimeout = PGProperty.CONNECT_TIMEOUT.getInt(info) * 1000;
String user = PGProperty.USER.getOrDefault(info);
String database = PGProperty.PG_DBNAME.getOrDefault(info);
SslNegotiation sslNegotiation = SslNegotiation.of(Nullness.castNonNull(PGProperty.SSL_NEGOTIATION.getOrDefault(info)));
if (user == null) {
throw new PSQLException(GT.tr("User cannot be null"), PSQLState.INVALID_NAME);
}
if (database == null) {
throw new PSQLException(GT.tr("Database cannot be null"), PSQLState.INVALID_NAME);
}
int maxSendBufferSize = PGProperty.MAX_SEND_BUFFER_SIZE.getInt(info);
PGStream newStream = new PGStream(socketFactory, hostSpec, connectTimeout, maxSendBufferSize);
try {
// Set the socket timeout if the "socketTimeout" property has been set.
int socketTimeout = PGProperty.SOCKET_TIMEOUT.getInt(info);
if (socketTimeout > 0) {
newStream.setNetworkTimeout(socketTimeout * 1000);
}
String maxResultBuffer = PGProperty.MAX_RESULT_BUFFER.getOrDefault(info);
newStream.setMaxResultBuffer(maxResultBuffer);
// Enable TCP keep-alive probe if required.
boolean requireTCPKeepAlive = PGProperty.TCP_KEEP_ALIVE.getBoolean(info);
newStream.getSocket().setKeepAlive(requireTCPKeepAlive);
// Enable TCP no delay if required
boolean requireTCPNoDelay = PGProperty.TCP_NO_DELAY.getBoolean(info);
newStream.getSocket().setTcpNoDelay(requireTCPNoDelay);
// Try to set SO_SNDBUF and SO_RECVBUF socket options, if requested.
// If receiveBufferSize and send_buffer_size are set to a value greater
// than 0, adjust. -1 means use the system default, 0 is ignored since not
// supported.
// Set SO_RECVBUF read buffer size
int receiveBufferSize = PGProperty.RECEIVE_BUFFER_SIZE.getInt(info);
if (receiveBufferSize > -1) {
// value of 0 not a valid buffer size value
if (receiveBufferSize > 0) {
newStream.getSocket().setReceiveBufferSize(receiveBufferSize);
} else {
LOGGER.log(Level.WARNING, "Ignore invalid value for receiveBufferSize: {0}",
receiveBufferSize);
}
}
// Set SO_SNDBUF write buffer size
int sendBufferSize = PGProperty.SEND_BUFFER_SIZE.getInt(info);
if (sendBufferSize > -1) {
if (sendBufferSize > 0) {
newStream.getSocket().setSendBufferSize(sendBufferSize);
} else {
LOGGER.log(Level.WARNING, "Ignore invalid value for sendBufferSize: {0}", sendBufferSize);
}
}
if (LOGGER.isLoggable(Level.FINE)) {
LOGGER.log(Level.FINE, "Receive Buffer Size is {0}",
newStream.getSocket().getReceiveBufferSize());
LOGGER.log(Level.FINE, "Send Buffer Size is {0}",
newStream.getSocket().getSendBufferSize());
}
if (sslNegotiation != SslNegotiation.DIRECT) {
newStream =
enableGSSEncrypted(newStream, gssEncMode, hostSpec.getHost(), info, connectTimeout);
}
// if we have a security context then gss negotiation succeeded. Do not attempt SSL
// negotiation
if (!newStream.isGssEncrypted()) {
// Construct and send an SSL startup packet if requested.
newStream = enableSSL(newStream, sslMode, info, connectTimeout);
}
// Make sure to set network timeout again, in case the stream changed due to GSS or SSL
if (socketTimeout > 0) {
newStream.setNetworkTimeout(socketTimeout * 1000);
}
List<StartupParam> paramList = getParametersForStartup(user, database, info);
String protocolVersion = PGProperty.PROTOCOL_VERSION.getOrDefault(info);
int protocolMajor = 3;
int protocolMinor = 0;
if (protocolVersion != null) {
int decimal = protocolVersion.indexOf('.');
if (decimal == -1) {
// No decimal point: the value is just the major version; the minor defaults to 0
protocolMajor = Integer.parseInt(protocolVersion);
protocolMinor = 0;
} else {
protocolMinor = Integer.parseInt(protocolVersion.substring(decimal + 1));
protocolMajor = Integer.parseInt(protocolVersion.substring(0,decimal));
}
}
sendStartupPacket(newStream, ProtocolVersion.fromMajorMinor(protocolMajor,protocolMinor), paramList);
// Do authentication (until AuthenticationOk).
doAuthentication(newStream, hostSpec.getHost(), user, info);
return newStream;
} catch (Exception e) {
closeStream(newStream);
throw e;
}
}
@Override
public QueryExecutor openConnectionImpl(HostSpec[] hostSpecs, Properties info) throws SQLException {
SslMode sslMode = SslMode.of(info);
GSSEncMode gssEncMode = GSSEncMode.of(info);
HostRequirement targetServerType;
String targetServerTypeStr = castNonNull(PGProperty.TARGET_SERVER_TYPE.getOrDefault(info));
try {
targetServerType = HostRequirement.getTargetServerType(targetServerTypeStr);
} catch (IllegalArgumentException ex) {
throw new PSQLException(
GT.tr("Invalid targetServerType value: {0}", targetServerTypeStr),
PSQLState.CONNECTION_UNABLE_TO_CONNECT);
}
SocketFactory socketFactory = SocketFactoryFactory.getSocketFactory(info);
HostChooser hostChooser =
HostChooserFactory.createHostChooser(hostSpecs, targetServerType, info);
Iterator<CandidateHost> hostIter = hostChooser.iterator();
Map<HostSpec, HostStatus> knownStates = new HashMap<>();
while (hostIter.hasNext()) {
CandidateHost candidateHost = hostIter.next();
HostSpec hostSpec = candidateHost.hostSpec;
LOGGER.log(Level.FINE, "Trying to establish a protocol version 3 connection to {0}", hostSpec);
// Note: per-connect-attempt status map is used here instead of GlobalHostStatusTracker
// for the case when "no good hosts" match (e.g. all the hosts are known as "connectfail")
// In that case, the system tries to connect to each host in order, thus it should not look into
// GlobalHostStatusTracker
HostStatus knownStatus = knownStates.get(hostSpec);
if (knownStatus != null && !candidateHost.targetServerType.allowConnectingTo(knownStatus)) {
if (LOGGER.isLoggable(Level.FINER)) {
LOGGER.log(Level.FINER, "Known status of host {0} is {1}, and required status was {2}. Will try next host",
new Object[]{hostSpec, knownStatus, candidateHost.targetServerType});
}
continue;
}
//
// Establish a connection.
//
PGStream newStream = null;
try {
try {
newStream = tryConnect(info, socketFactory, hostSpec, sslMode, gssEncMode);
} catch (SQLException e) {
if (sslMode == SslMode.PREFER
&& PSQLState.INVALID_AUTHORIZATION_SPECIFICATION.getState().equals(e.getSQLState())) {
// Try non-SSL connection to cover case like "non-ssl only db"
// Note: PREFER allows loss of encryption, so no significant harm is made
Throwable ex = null;
try {
newStream =
tryConnect(info, socketFactory, hostSpec, SslMode.DISABLE, gssEncMode);
LOGGER.log(Level.FINE, "Downgraded to non-encrypted connection for host {0}",
hostSpec);
} catch (SQLException | IOException ee) {
ex = ee;
}
if (ex != null) {
log(Level.FINE, "sslMode==PREFER, however non-SSL connection failed as well", ex);
// non-SSL failed as well, so re-throw original exception
// Add non-SSL exception as suppressed
e.addSuppressed(ex);
throw e;
}
} else if (sslMode == SslMode.ALLOW
&& PSQLState.INVALID_AUTHORIZATION_SPECIFICATION.getState().equals(e.getSQLState())) {
// Try using SSL
Throwable ex = null;
try {
newStream =
tryConnect(info, socketFactory, hostSpec, SslMode.REQUIRE, gssEncMode);
LOGGER.log(Level.FINE, "Upgraded to encrypted connection for host {0}",
hostSpec);
} catch (SQLException ee) {
ex = ee;
} catch (IOException ee) {
ex = ee; // Can't use multi-catch in Java 6 :(
}
if (ex != null) {
log(Level.FINE, "sslMode==ALLOW, however SSL connection failed as well", ex);
// non-SSL failed as well, so re-throw original exception
// Add SSL exception as suppressed
e.addSuppressed(ex);
throw e;
}
} else {
throw e;
}
}
int cancelSignalTimeout = PGProperty.CANCEL_SIGNAL_TIMEOUT.getInt(info) * 1000;
// CheckerFramework can't infer newStream is non-nullable
castNonNull(newStream);
// Do final startup.
QueryExecutor queryExecutor = new QueryExecutorImpl(newStream, cancelSignalTimeout, info);
// Check Primary or Secondary
HostStatus hostStatus = HostStatus.ConnectOK;
if (candidateHost.targetServerType != HostRequirement.any) {
hostStatus = isPrimary(queryExecutor) ? HostStatus.Primary : HostStatus.Secondary;
}
GlobalHostStatusTracker.reportHostStatus(hostSpec, hostStatus);
knownStates.put(hostSpec, hostStatus);
if (!candidateHost.targetServerType.allowConnectingTo(hostStatus)) {
queryExecutor.close();
continue;
}
runInitialQueries(queryExecutor, info);
// And we're done.
return queryExecutor;
} catch (ConnectException cex) {
// Added by Peter Mount
// ConnectException is thrown when the connection cannot be made.
// we trap this and return a more meaningful message for the end user
GlobalHostStatusTracker.reportHostStatus(hostSpec, HostStatus.ConnectFail);
knownStates.put(hostSpec, HostStatus.ConnectFail);
if (hostIter.hasNext()) {
log(Level.FINE, "ConnectException occurred while connecting to {0}", cex, hostSpec);
// still more addresses to try
continue;
}
throw new PSQLException(GT.tr(
"Connection to {0} refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.",
hostSpec), PSQLState.CONNECTION_UNABLE_TO_CONNECT, cex);
} catch (IOException ioe) {
closeStream(newStream);
GlobalHostStatusTracker.reportHostStatus(hostSpec, HostStatus.ConnectFail);
knownStates.put(hostSpec, HostStatus.ConnectFail);
if (hostIter.hasNext()) {
log(Level.FINE, "IOException occurred while connecting to {0}", ioe, hostSpec);
// still more addresses to try
continue;
}
throw new PSQLException(GT.tr("The connection attempt failed."),
PSQLState.CONNECTION_UNABLE_TO_CONNECT, ioe);
} catch (SQLException se) {
closeStream(newStream);
GlobalHostStatusTracker.reportHostStatus(hostSpec, HostStatus.ConnectFail);
knownStates.put(hostSpec, HostStatus.ConnectFail);
if (hostIter.hasNext()) {
log(Level.FINE, "SQLException occurred while connecting to {0}", se, hostSpec);
// still more addresses to try
continue;
}
throw se;
}
}
throw new PSQLException(GT
.tr("Could not find a server with specified targetServerType: {0}", targetServerType),
PSQLState.CONNECTION_UNABLE_TO_CONNECT);
}
private static List<StartupParam> getParametersForStartup(String user, String database, Properties info) {
List<StartupParam> paramList = new ArrayList<>();
paramList.add(new StartupParam("user", user));
paramList.add(new StartupParam("database", database));
paramList.add(new StartupParam("client_encoding", "UTF8"));
paramList.add(new StartupParam("DateStyle", "ISO"));
paramList.add(new StartupParam("TimeZone", createPostgresTimeZone()));
Version assumeVersion = ServerVersion.from(PGProperty.ASSUME_MIN_SERVER_VERSION.getOrDefault(info));
// assumeMinServerVersion implies a minimum, not an exact version, so we will set the decimal
// digits in runInitialQueries when we know the exact version, if needed.
// application name is important to set as early as possible for connection logging, we set it immediately
// if we can assume the minimum version supports doing so
String appName = PGProperty.APPLICATION_NAME.getOrDefault(info);
if ( appName != null && assumeVersion.getVersionNum() >= ServerVersion.v9_0.getVersionNum() ) {
paramList.add(new StartupParam("application_name", appName));
}
// probably no need to make sure the assumeVersion is 9.4 or greater. The user really wants replication.
String replication = PGProperty.REPLICATION.getOrDefault(info);
if (replication != null && assumeVersion.getVersionNum() >= ServerVersion.v9_4.getVersionNum()) {
paramList.add(new StartupParam("replication", replication));
}
String currentSchema = PGProperty.CURRENT_SCHEMA.getOrDefault(info);
if (currentSchema != null) {
paramList.add(new StartupParam("search_path", currentSchema));
}
String options = PGProperty.OPTIONS.getOrDefault(info);
if (options != null) {
paramList.add(new StartupParam("options", options));
}
return paramList;
}
private static void log(Level level, String msg, Throwable thrown, Object... params) {
if (!LOGGER.isLoggable(level)) {
return;
}
LogRecord rec = new LogRecord(level, msg);
// Set the loggerName of the LogRecord with the current logger
rec.setLoggerName(LOGGER.getName());
rec.setParameters(params);
rec.setThrown(thrown);
LOGGER.log(rec);
}
/**
* Convert a Java time zone to a postgres time zone. All others stay the same except that GMT+nn
* changes to GMT-nn and vice versa.
* If you provide GMT+/-nn, postgres uses POSIX rules, which use a positive sign for west of Greenwich,
* whereas Java uses ISO rules, where the positive sign means east of Greenwich.
* To make matters more interesting, postgres will always report the zone in ISO form.
*
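* <p>For example (illustrative; the exact string depends on the JVM's default zone ID),
* a default zone of {@code GMT+05:00} is sent to the server as {@code GMT-05:00}.</p>
*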
* @return The current JVM time zone in postgresql format.
*/
private static String createPostgresTimeZone() {
String tz = TimeZone.getDefault().getID();
if (tz.length() <= 3 || !tz.startsWith("GMT")) {
return tz;
}
char sign = tz.charAt(3);
String start;
switch (sign) {
case '+':
start = "GMT-";
break;
case '-':
start = "GMT+";
break;
default:
// unknown type
return tz;
}
return start + tz.substring(4);
}
private static PGStream enableGSSEncrypted(PGStream pgStream, GSSEncMode gssEncMode, String host, Properties info,
int connectTimeout)
throws IOException, PSQLException {
if ( gssEncMode == GSSEncMode.DISABLE ) {
return pgStream;
}
if (gssEncMode == GSSEncMode.ALLOW ) {
// start with plain text and let the server request it
return pgStream;
}
/*
At this point gssEncMode is either PREFER or REQUIRE.
libpq checks whether there is a ticket in the cache before asking
the server whether it supports encrypted GSS connections.
Since the user has specifically asked for either prefer or require, we can
assume they want it.
*/
/*
let's see if the server will allow a GSS encrypted connection
*/
String user = PGProperty.USER.getOrDefault(info);
if (user == null) {
throw new PSQLException("GSSAPI encryption required but was impossible user is null", PSQLState.CONNECTION_REJECTED);
}
// attempt to acquire a GSS encrypted connection
LOGGER.log(Level.FINEST, " FE=> GSSENCRequest");
int gssTimeout = PGProperty.SSL_RESPONSE_TIMEOUT.getInt(info);
int currentTimeout = pgStream.getNetworkTimeout();
// if the current timeout is less than gssTimeout then
// use the smaller timeout. We could do something tricky
// here to not set it in that case, but this is pretty readable
if (currentTimeout > 0 && currentTimeout < gssTimeout) {
gssTimeout = currentTimeout;
}
pgStream.setNetworkTimeout(gssTimeout);
// Send GSSEncryption request packet
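// The GSSENCRequest code 80877104 is sent as two 16-bit halves (1234, 5680),
// preceded by the 4-byte message length of 8.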
pgStream.sendInteger4(8);
pgStream.sendInteger2(1234);
pgStream.sendInteger2(5680);
pgStream.flush();
// Now get the response from the backend, one of N, E, S.
int beresp = pgStream.receiveChar();
pgStream.setNetworkTimeout(currentTimeout);
switch (beresp) {
case 'E':
LOGGER.log(Level.FINEST, " <=BE GSSEncrypted Error");
// Server doesn't even know about the SSL handshake protocol
if (gssEncMode.requireEncryption()) {
throw new PSQLException(GT.tr("The server does not support GSS Encoding."),
PSQLState.CONNECTION_REJECTED);
}
// We have to reconnect to continue.
pgStream.close();
int maxSendBufferSize = PGProperty.MAX_SEND_BUFFER_SIZE.getInt(info);
return new PGStream(pgStream.getSocketFactory(), pgStream.getHostSpec(), connectTimeout,
maxSendBufferSize);
case 'N':
LOGGER.log(Level.FINEST, " <=BE GSSEncrypted Refused");
// Server does not support gss encryption
if (gssEncMode.requireEncryption()) {
throw new PSQLException(GT.tr("The server does not support GSS Encryption."),
PSQLState.CONNECTION_REJECTED);
}
return pgStream;
case 'G':
LOGGER.log(Level.FINEST, " <=BE GSSEncryptedOk");
try {
AuthenticationPluginManager.withPassword(AuthenticationRequestType.GSS, info, password -> {
MakeGSS.authenticate(true, pgStream, host, user, password,
PGProperty.JAAS_APPLICATION_NAME.getOrDefault(info),
PGProperty.KERBEROS_SERVER_NAME.getOrDefault(info), false, // TODO: fix this
PGProperty.JAAS_LOGIN.getBoolean(info),
PGProperty.GSS_USE_DEFAULT_CREDS.getBoolean(info),
PGProperty.LOG_SERVER_ERROR_DETAIL.getBoolean(info));
return void.class;
});
return pgStream;
} catch (PSQLException ex) {
// allow the connection to proceed
if (gssEncMode == GSSEncMode.PREFER) {
// we have to reconnect to continue
return new PGStream(pgStream, connectTimeout);
}
}
// fallthrough
default:
throw new PSQLException(GT.tr("An error occurred while setting up the GSS Encoded connection."),
PSQLState.PROTOCOL_VIOLATION);
}
}
private static PGStream enableSSL(PGStream pgStream, SslMode sslMode, Properties info,
int connectTimeout)
throws IOException, PSQLException {
if (sslMode == SslMode.DISABLE) {
return pgStream;
}
if (sslMode == SslMode.ALLOW) {
// Allow ==> start with plaintext, use encryption if required by server
return pgStream;
}
SslNegotiation sslNegotiation = SslNegotiation.of(Nullness.castNonNull(PGProperty.SSL_NEGOTIATION.getOrDefault(info)));
LOGGER.log(Level.FINEST, () -> String.format(" FE=> SSLRequest %s", sslNegotiation.value()));
int sslTimeout = PGProperty.SSL_RESPONSE_TIMEOUT.getInt(info);
int currentTimeout = pgStream.getNetworkTimeout();
// if the current timeout is less than sslTimeout then
// use the smaller timeout. We could do something tricky
// here to not set it in that case but this is pretty readable
if (currentTimeout > 0 && currentTimeout < sslTimeout) {
sslTimeout = currentTimeout;
}
pgStream.setNetworkTimeout(sslTimeout);
if (sslNegotiation == SslNegotiation.DIRECT) {
MakeSSL.convert(pgStream, info);
return pgStream;
}
// Send SSL request packet
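// The SSLRequest code 80877103 is sent as two 16-bit halves (1234, 5679),
// preceded by the 4-byte message length of 8.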
pgStream.sendInteger4(8);
pgStream.sendInteger2(1234);
pgStream.sendInteger2(5679);
pgStream.flush();
// Now get the response from the backend, one of N, E, S.
int beresp = pgStream.receiveChar();
pgStream.setNetworkTimeout(currentTimeout);
switch (beresp) {
case 'E':
LOGGER.log(Level.FINEST, " <=BE SSLError");
// Server doesn't even know about the SSL handshake protocol
if (sslMode.requireEncryption()) {
throw new PSQLException(GT.tr("The server does not support SSL."),
PSQLState.CONNECTION_REJECTED);
}
// We have to reconnect to continue.
return new PGStream(pgStream, connectTimeout);
case 'N':
LOGGER.log(Level.FINEST, " <=BE SSLRefused");
// Server does not support ssl
if (sslMode.requireEncryption()) {
throw new PSQLException(GT.tr("The server does not support SSL."),
PSQLState.CONNECTION_REJECTED);
}
return pgStream;
case 'S':
LOGGER.log(Level.FINEST, " <=BE SSLOk");
// Server supports ssl
MakeSSL.convert(pgStream, info);
return pgStream;
default:
throw new PSQLException(GT.tr("An error occurred while setting up the SSL connection."),
PSQLState.PROTOCOL_VIOLATION);
}
}
private static void sendStartupPacket(PGStream pgStream, ProtocolVersion protocolVersion, List<StartupParam> params)
throws SQLException, IOException {
if (LOGGER.isLoggable(Level.FINEST)) {
StringBuilder details = new StringBuilder();
for (int i = 0; i < params.size(); i++) {
if (i != 0) {
details.append(", ");
}
details.append(params.get(i).toString());
}
LOGGER.log(Level.FINEST, " FE=> StartupPacket({0})", details);
}
// Precalculate message length and encode params.
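// Startup message layout: int32 length (including itself), int32 protocol version,
// then key\0value\0 pairs, terminated by a single trailing \0 byte.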
int length = 4 + 4;
byte[][] encodedParams = new byte[params.size() * 2][];
for (int i = 0; i < params.size(); i++) {
encodedParams[i * 2] = params.get(i).getEncodedKey();
encodedParams[i * 2 + 1] = params.get(i).getEncodedValue();
length += encodedParams[i * 2].length + 1 + encodedParams[i * 2 + 1].length + 1;
}
length += 1; // Terminating \0
// Send the startup message.
pgStream.sendInteger4(length);
pgStream.sendInteger2(protocolVersion.getMajor()); // protocol major
pgStream.sendInteger2(protocolVersion.getMinor()); // protocol minor
for (byte[] encodedParam : encodedParams) {
pgStream.send(encodedParam);
pgStream.sendChar(0);
}
pgStream.sendChar(0);
pgStream.setProtocolVersion(protocolVersion);
pgStream.flush();
}
private static void doAuthentication(PGStream pgStream, String host, String user, Properties info) throws IOException, SQLException {
// Now get the response from the backend, either an error message
// or an authentication request
/* SSPI negotiation state, if used */
ISSPIClient sspiClient = null;
/* SCRAM authentication state, if used */
ScramAuthenticator scramAuthenticator = null;
// TODO: figure out how to deal with new protocols
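// The negotiated protocol version is tracked as (major << 16) | minor, so 3 << 16 means 3.0.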
int protocol = 3 << 16;
try {
authloop: while (true) {
int beresp = pgStream.receiveChar();
switch (beresp) {
case PgMessageType.NEGOTIATE_PROTOCOL_RESPONSE: // Negotiate Protocol Version
// read the length and ignore it.
pgStream.receiveInteger4();
protocol = pgStream.receiveInteger4();
int numOptionsNotRecognized = pgStream.receiveInteger4();
if (numOptionsNotRecognized > 0) {
// do not connect and throw an error
String errorMessage = "Protocol error, received invalid options: ";
for (int i = 0; i < numOptionsNotRecognized; i++) {
errorMessage += (i > 0 ? ", " : "") + pgStream.receiveString();
}
LOGGER.log(Level.FINEST, errorMessage);
throw new PSQLException(errorMessage, PSQLState.PROTOCOL_VIOLATION);
}
int major = protocol >> 16 & 0xff;
int minor = protocol & 0xff;
pgStream.setProtocolVersion( ProtocolVersion.fromMajorMinor(major, minor));
break;
case PgMessageType.ERROR_RESPONSE:
// An error occurred, so pass the error message to the
// user.
//
// The most common one to be thrown here is:
// "User authentication failed"
//
int elen = pgStream.receiveInteger4();
ServerErrorMessage errorMsg =
new ServerErrorMessage(pgStream.receiveErrorString(elen - 4));
LOGGER.log(Level.FINEST, " <=BE ErrorMessage({0})", errorMsg);
throw new PSQLException(errorMsg, PGProperty.LOG_SERVER_ERROR_DETAIL.getBoolean(info));
case PgMessageType.AUTHENTICATION_RESPONSE:
// Authentication request.
// Get the message length
int msgLen = pgStream.receiveInteger4();
// Get the type of request
int areq = pgStream.receiveInteger4();
// Process the request.
switch (areq) {
case AUTH_REQ_MD5: {
byte[] md5Salt = pgStream.receive(4);
if (LOGGER.isLoggable(Level.FINEST)) {
LOGGER.log(Level.FINEST, " <=BE AuthenticationReqMD5(salt={0})", Utils.toHexString(md5Salt));
}
byte[] digest = AuthenticationPluginManager.withEncodedPassword(
AuthenticationRequestType.MD5_PASSWORD, info,
encodedPassword -> MD5Digest.encode(user.getBytes(StandardCharsets.UTF_8),
encodedPassword, md5Salt)
);
if (LOGGER.isLoggable(Level.FINEST)) {
LOGGER.log(Level.FINEST, " FE=> Password(md5digest={0})", new String(digest, StandardCharsets.US_ASCII));
}
try {
pgStream.sendChar(PgMessageType.PASSWORD_REQUEST);
pgStream.sendInteger4(4 + digest.length + 1);
pgStream.send(digest);
} finally {
Arrays.fill(digest, (byte) 0);
}
pgStream.sendChar(0);
pgStream.flush();
break;
}
case AUTH_REQ_PASSWORD: {
LOGGER.log(Level.FINEST, "<=BE AuthenticationReqPassword");
LOGGER.log(Level.FINEST, " FE=> Password(password=)");
AuthenticationPluginManager.withEncodedPassword(AuthenticationRequestType.CLEARTEXT_PASSWORD, info, encodedPassword -> {
pgStream.sendChar(PgMessageType.PASSWORD_REQUEST);
pgStream.sendInteger4(4 + encodedPassword.length + 1);
pgStream.send(encodedPassword);
return void.class;
});
pgStream.sendChar(0);
pgStream.flush();
break;
}
case AUTH_REQ_GSS:
case AUTH_REQ_SSPI:
/*
* Use GSSAPI if requested on all platforms, via JSSE.
*
* For SSPI auth requests, if we're on Windows attempt native SSPI authentication if
* available, and if not disabled by setting a kerberosServerName. On other
* platforms, attempt JSSE GSSAPI negotiation with the SSPI server.
*
* Note that this is slightly different to libpq, which uses SSPI for GSSAPI where
* supported. We prefer to use the existing Java JSSE Kerberos support rather than
* going to native (via JNA) calls where possible, so that JSSE system properties
* etc continue to work normally.
*
* Note that while SSPI is often Kerberos-based there's no guarantee it will be; it
* may be NTLM or anything else. If the client responds to an SSPI request via
* GSSAPI and the other end isn't using Kerberos for SSPI then authentication will
* fail.
*/
final String gsslib = PGProperty.GSS_LIB.getOrDefault(info);
final boolean usespnego = PGProperty.USE_SPNEGO.getBoolean(info);
boolean useSSPI = false;
/*
* Use SSPI if we're in auto mode on windows and have a request for SSPI auth, or if
* it's forced. Otherwise use gssapi. If the user has specified a Kerberos server
* name we'll always use JSSE GSSAPI.
*/
if ("gssapi".equals(gsslib)) {
LOGGER.log(Level.FINE, "Using JSSE GSSAPI, param gsslib=gssapi");
} else if (areq == AUTH_REQ_GSS && !"sspi".equals(gsslib)) {
LOGGER.log(Level.FINE,
"Using JSSE GSSAPI, gssapi requested by server and gsslib=sspi not forced");
} else {
/* Determine if SSPI is supported by the client */
sspiClient = createSSPI(pgStream, PGProperty.SSPI_SERVICE_CLASS.getOrDefault(info),
/* Use negotiation for SSPI, or if explicitly requested for GSS */
areq == AUTH_REQ_SSPI || (areq == AUTH_REQ_GSS && usespnego));
useSSPI = sspiClient.isSSPISupported();
LOGGER.log(Level.FINE, "SSPI support detected: {0}", useSSPI);
if (!useSSPI) {
/* No need to dispose() if no SSPI used */
sspiClient = null;
if ("sspi".equals(gsslib)) {
throw new PSQLException(
"SSPI forced with gsslib=sspi, but SSPI not available; set loglevel=2 for details",
PSQLState.CONNECTION_UNABLE_TO_CONNECT);
}
}
if (LOGGER.isLoggable(Level.FINE)) {
LOGGER.log(Level.FINE, "Using SSPI: {0}, gsslib={1} and SSPI support detected", new Object[]{useSSPI, gsslib});
}
}
if (useSSPI) {
/* SSPI requested and detected as available */
castNonNull(sspiClient).startSSPI();
} else {
/* Use JGSS's GSSAPI for this request */
AuthenticationPluginManager.withPassword(AuthenticationRequestType.GSS, info, password -> {
MakeGSS.authenticate(false, pgStream, host, user, password,
PGProperty.JAAS_APPLICATION_NAME.getOrDefault(info),
PGProperty.KERBEROS_SERVER_NAME.getOrDefault(info), usespnego,
PGProperty.JAAS_LOGIN.getBoolean(info),
PGProperty.GSS_USE_DEFAULT_CREDS.getBoolean(info),
PGProperty.LOG_SERVER_ERROR_DETAIL.getBoolean(info));
return void.class;
});
}
break;
case AUTH_REQ_GSS_CONTINUE:
/*
* Only called for SSPI, as GSS is handled by an inner loop in MakeGSS.
*/
castNonNull(sspiClient).continueSSPI(msgLen - 8);
break;
case AUTH_REQ_SASL:
scramAuthenticator = AuthenticationPluginManager.withPassword(AuthenticationRequestType.SASL, info, password -> {
if (password == null) {
throw new PSQLException(
GT.tr(
"The server requested SCRAM-based authentication, but no password was provided."),
PSQLState.CONNECTION_REJECTED);
}
if (password.length == 0) {
throw new PSQLException(
GT.tr(
"The server requested SCRAM-based authentication, but the password is an empty string."),
PSQLState.CONNECTION_REJECTED);
}
return new ScramAuthenticator(password, pgStream, info);
});
scramAuthenticator.handleAuthenticationSASL();
break;
case AUTH_REQ_SASL_CONTINUE:
castNonNull(scramAuthenticator).handleAuthenticationSASLContinue(msgLen - 4 - 4);
break;
case AUTH_REQ_SASL_FINAL:
castNonNull(scramAuthenticator).handleAuthenticationSASLFinal(msgLen - 4 - 4);
break;
case AUTH_REQ_OK:
/* Cleanup after successful authentication */
LOGGER.log(Level.FINEST, " <=BE AuthenticationOk");
break authloop; // We're done.
default:
LOGGER.log(Level.FINEST, " <=BE AuthenticationReq (unsupported type {0})", areq);
throw new PSQLException(GT.tr(
"The authentication type {0} is not supported. Check that you have configured the pg_hba.conf file to include the client''s IP address or subnet, and that it is using an authentication scheme supported by the driver.",
areq), PSQLState.CONNECTION_REJECTED);
}
break;
default:
throw new PSQLException(GT.tr("Protocol error. Session setup failed."),
PSQLState.PROTOCOL_VIOLATION);
}
}
} finally {
/* Cleanup after successful or failed authentication attempts */
if (sspiClient != null) {
try {
sspiClient.dispose();
} catch (RuntimeException ex) {
LOGGER.log(Level.FINE, "Unexpected error during SSPI context disposal", ex);
}
}
}
}
private static void runInitialQueries(QueryExecutor queryExecutor, Properties info)
throws SQLException {
// The version we assumed the server would be prior to connecting, to determine what we have already sent
Version assumeVersion = ServerVersion.from(PGProperty.ASSUME_MIN_SERVER_VERSION.getOrDefault(info));
// The actual version we connected to
final int dbVersion = queryExecutor.getServerVersionNum();
StringBuilder sb = new StringBuilder();
// Only need to send the application name if it's defined and wasn't already sent as a startup parameter
String appName = PGProperty.APPLICATION_NAME.getOrDefault(info);
boolean sendApplicationName = appName != null
&& assumeVersion.getVersionNum() < ServerVersion.v9_0.getVersionNum()
&& dbVersion >= ServerVersion.v9_0.getVersionNum();
boolean sendExtraFloatDigits = dbVersion < ServerVersion.v12.getVersionNum();
if ( sendApplicationName || sendExtraFloatDigits ) {
if ( sendExtraFloatDigits ) {
if (dbVersion < ServerVersion.v9_0.getVersionNum()) {
// server version < 9 so 8.x or less
sb.append("SET extra_float_digits = 2");
} else {
// server version < 12 so 9.0 - 11.x
sb.append("SET extra_float_digits = 3");
}
}
if ( sendApplicationName ) {
// we could check the length of sb, but this should be faster
if (sendExtraFloatDigits) {
sb.append(';');
}
sb.append("SET application_name = '");
Utils.escapeLiteral(sb, Nullness.castNonNull(appName), queryExecutor.getStandardConformingStrings());
sb.append("'");
}
SetupQueryRunner.run(queryExecutor, sb.toString(), false);
}
}
/**
* Since PG14 there is a GUC_REPORT ParamStatus {@code in_hot_standby} which is set to "on"
* when the server is in archive recovery or standby mode. In the driver's lingo such a server is called
* {@link org.postgresql.hostchooser.HostRequirement#secondary}.
* Previously {@code transaction_read_only} was used as a workable substitute.
* However, {@code transaction_read_only} could have been manually overridden on the primary server
* by a database user, leading to false positives: i.e. the server is effectively read-only but
* technically is "primary" (not in a recovery/standby mode).
*
* <p>This method checks whether the {@code in_hot_standby} GUC was reported by the server
* during the initial connection:</p>
*
* <ul>
* <li>If {@code in_hot_standby} was reported and the value was "on", the server is a replica
* and the database is read-only by definition, so false is returned.</li>
* <li>If {@code in_hot_standby} was reported and the value was "off",
* the server is indeed primary, but the database may be in
* read-only mode nevertheless. We proceed to conservatively run {@code show transaction_read_only},
* since users may not be expecting a read-only connection for {@code targetServerType=primary}.</li>
* <li>If {@code in_hot_standby} has not been reported, we fall back to the pre-v14 behavior.</li>
* </ul>
*
* <p>Do not confuse the {@code hot_standby} and {@code in_hot_standby} ParamStatuses.</p>
*
* @see GUC_REPORT documentation
* @see Hot standby documentation
* @see in_hot_standby patch thread v10
* @see in_hot_standby patch thread v14
*
*/
private static boolean isPrimary(QueryExecutor queryExecutor) throws SQLException, IOException {
String inHotStandby = queryExecutor.getParameterStatus(IN_HOT_STANDBY);
if ("on".equalsIgnoreCase(inHotStandby)) {
return false;
}
Tuple results = SetupQueryRunner.run(queryExecutor, "show transaction_read_only", true);
Tuple nonNullResults = castNonNull(results);
String queriedTransactionReadonly = queryExecutor.getEncoding().decode(castNonNull(nonNullResults.get(0)));
return "off".equalsIgnoreCase(queriedTransactionReadonly);
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/v3/CopyDualImpl.java 0100664 0000000 0000000 00000003067 00000250600 026332 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2016, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.core.v3;
import org.postgresql.copy.CopyDual;
import org.postgresql.util.ByteStreamWriter;
import org.postgresql.util.PSQLException;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.sql.SQLException;
import java.util.ArrayDeque;
import java.util.Queue;
public class CopyDualImpl extends CopyOperationImpl implements CopyDual {
private final Queue<byte[]> received = new ArrayDeque<>();
@Override
public void writeToCopy(byte[] data, int off, int siz) throws SQLException {
getQueryExecutor().writeToCopy(this, data, off, siz);
}
@Override
public void writeToCopy(ByteStreamWriter from) throws SQLException {
getQueryExecutor().writeToCopy(this, from);
}
@Override
public void flushCopy() throws SQLException {
getQueryExecutor().flushCopy(this);
}
@Override
public long endCopy() throws SQLException {
return getQueryExecutor().endCopy(this);
}
@Override
public byte /* @Nullable */ [] readFromCopy() throws SQLException {
return readFromCopy(true);
}
@Override
public byte /* @Nullable */ [] readFromCopy(boolean block) throws SQLException {
if (received.isEmpty()) {
getQueryExecutor().readFromCopy(this, block);
}
return received.poll();
}
@Override
public void handleCommandStatus(String status) throws PSQLException {
}
@Override
protected void handleCopydata(byte[] data) {
received.add(data);
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/v3/CopyInImpl.java 0100664 0000000 0000000 00000004516 00000250600 026013 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2009, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.core.v3;
import org.postgresql.copy.CopyIn;
import org.postgresql.util.ByteStreamWriter;
import org.postgresql.util.GT;
import org.postgresql.util.PSQLException;
import org.postgresql.util.PSQLState;
import java.sql.SQLException;
/**
* COPY FROM STDIN operation.
*
* Anticipated flow:
*
* CopyManager.copyIn() -> QueryExecutor.startCopy() - sends given query to server
* -> processCopyResults(): - receives CopyInResponse from server - creates new CopyInImpl
* -> initCopy(): - receives copy metadata from server -> CopyInImpl.init() -> lock()
* connection for this operation - if query fails an exception is thrown - if query returns wrong
* CopyOperation, copyIn() cancels it before throwing exception <- return: new CopyInImpl holding
* lock on connection
*
* repeat CopyIn.writeToCopy() for all data -> CopyInImpl.writeToCopy()
* -> QueryExecutorImpl.writeToCopy() - sends given data -> processCopyResults() - parameterized
* not to block, just peek for new messages from server - on ErrorResponse, waits until protocol is
* restored and unlocks connection
*
* CopyIn.endCopy() -> CopyInImpl.endCopy()
* -> QueryExecutorImpl.endCopy() - sends CopyDone - processCopyResults() - on CommandComplete
* -> CopyOperationImpl.handleCommandComplete() - sets updatedRowCount when applicable - on
* ReadyForQuery unlock() connection for use by other operations <- return:
* CopyInImpl.getUpdatedRowCount()
*/
public class CopyInImpl extends CopyOperationImpl implements CopyIn {
@Override
public void writeToCopy(byte[] data, int off, int siz) throws SQLException {
getQueryExecutor().writeToCopy(this, data, off, siz);
}
@Override
public void writeToCopy(ByteStreamWriter from) throws SQLException {
getQueryExecutor().writeToCopy(this, from);
}
@Override
public void flushCopy() throws SQLException {
getQueryExecutor().flushCopy(this);
}
@Override
public long endCopy() throws SQLException {
return getQueryExecutor().endCopy(this);
}
@Override
protected void handleCopydata(byte[] data) throws PSQLException {
throw new PSQLException(GT.tr("CopyIn copy direction can't receive data"),
PSQLState.PROTOCOL_VIOLATION);
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/v3/CopyOperationImpl.java 0100664 0000000 0000000 00000004057 00000250600 027405 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2009, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.core.v3;
import static org.postgresql.util.internal.Nullness.castNonNull;
import org.postgresql.copy.CopyOperation;
import org.postgresql.util.GT;
import org.postgresql.util.PSQLException;
import org.postgresql.util.PSQLState;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.sql.SQLException;
public abstract class CopyOperationImpl implements CopyOperation {
/* @Nullable */ QueryExecutorImpl queryExecutor;
int rowFormat;
int /* @Nullable */ [] fieldFormats;
long handledRowCount = -1;
void init(QueryExecutorImpl q, int fmt, int[] fmts) {
queryExecutor = q;
rowFormat = fmt;
fieldFormats = fmts;
}
protected QueryExecutorImpl getQueryExecutor() {
return castNonNull(queryExecutor);
}
@Override
public void cancelCopy() throws SQLException {
castNonNull(queryExecutor).cancelCopy(this);
}
@Override
public int getFieldCount() {
return castNonNull(fieldFormats).length;
}
@Override
public int getFieldFormat(int field) {
return castNonNull(fieldFormats)[field];
}
@Override
public int getFormat() {
return rowFormat;
}
@Override
public boolean isActive() {
return castNonNull(queryExecutor).hasLockOn(this);
}
public void handleCommandStatus(String status) throws PSQLException {
if (status.startsWith("COPY")) {
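// CommandComplete for COPY reports a tag of the form "COPY <rows>" (e.g. "COPY 42"),
// so the handled row count is the token after the last space.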
int i = status.lastIndexOf(' ');
handledRowCount = i > 3 ? Long.parseLong(status.substring(i + 1)) : -1;
} else {
throw new PSQLException(GT.tr("CommandComplete expected COPY but got: " + status),
PSQLState.COMMUNICATION_ERROR);
}
}
/**
* Consume received copy data.
*
* @param data data that was receive by copy protocol
* @throws PSQLException if some internal problem occurs
*/
protected abstract void handleCopydata(byte[] data) throws PSQLException;
@Override
public long getHandledRowCount() {
return handledRowCount;
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/v3/CopyOutImpl.java 0100664 0000000 0000000 00000003430 00000250600 026206 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2009, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.core.v3;
import org.postgresql.copy.CopyOut;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.sql.SQLException;
/**
* Anticipated flow of a COPY TO STDOUT operation:
*
* CopyManager.copyOut() -> QueryExecutor.startCopy() - sends given query to server
* -> processCopyResults(): - receives CopyOutResponse from Server - creates new CopyOutImpl
* -> initCopy(): - receives copy metadata from server -> CopyOutImpl.init() -> lock()
* connection for this operation - if query fails an exception is thrown - if query returns wrong
* CopyOperation, copyOut() cancels it before throwing exception <- returned: new CopyOutImpl
* holding lock on connection
*
* repeat CopyOut.readFromCopy() until null
* -> CopyOutImpl.readFromCopy() -> QueryExecutorImpl.readFromCopy() -> processCopyResults() -
* on copydata row from server -> CopyOutImpl.handleCopydata() stores reference to byte array - on
* CopyDone, CommandComplete, ReadyForQuery -> unlock() connection for use by other operations
*
* <- returned: byte array of data received from server or null at end.
*/
public class CopyOutImpl extends CopyOperationImpl implements CopyOut {
private byte /* @Nullable */ [] currentDataRow;
@Override
public byte /* @Nullable */ [] readFromCopy() throws SQLException {
return readFromCopy(true);
}
@Override
public byte /* @Nullable */ [] readFromCopy(boolean block) throws SQLException {
currentDataRow = null;
getQueryExecutor().readFromCopy(this, block);
return currentDataRow;
}
@Override
protected void handleCopydata(byte[] data) {
currentDataRow = data;
}
}
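// A minimal usage sketch of the flow described above (illustrative, not part of this file;
// assumes a live java.sql.Connection named conn backed by this driver; CopyManager and CopyOut
// live in org.postgresql.copy):
//
//   CopyManager cm = conn.unwrap(org.postgresql.PGConnection.class).getCopyAPI();
//   CopyOut out = cm.copyOut("COPY mytable TO STDOUT");
//   byte[] row;
//   while ((row = out.readFromCopy()) != null) {
//     // each call returns one CopyData payload; null is returned once the copy has ended
//   }
//   long rows = out.getHandledRowCount();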
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/v3/DefaultSqlSerializationContext.java 0100664 0000000 0000000 00000003774 00000250600 032144 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2025, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.core.v3;
/**
* Provides a default implementation for {@link SqlSerializationContext}.
*/
enum DefaultSqlSerializationContext implements SqlSerializationContext {
/**
* Render SQL in a repeatable way (avoid consuming {@link java.io.InputStream} sources),
* use standard_conforming_strings=yes string literals.
   * This option is useful for {@code toString()} implementations as it does not induce side effects.
*/
STDSTR_IDEMPOTENT(true, true),
/**
   * Render SQL with all the parameters substituted, including {@link java.io.InputStream} sources.
* Use standard_conforming_strings=yes for string literals.
   * This option is useful for rendering executable SQL.
*/
STDSTR_NONIDEMPOTENT(true, false),
  // Auxiliary options; standard_conforming_strings has defaulted to on since PostgreSQL 9.1
/**
* Render SQL in a repeatable way (avoid consuming {@link java.io.InputStream} sources),
* use standard_conforming_strings=no string literals.
* The entry is for completeness only as standard_conforming_strings=no should probably be avoided.
*/
NONSTDSTR_IDEMPOTENT(false, true),
/**
   * Render SQL with all the parameters substituted, including {@link java.io.InputStream} sources.
* Use standard_conforming_strings=no for string literals.
* The entry is for completeness only as standard_conforming_strings=no should probably be avoided.
*/
NONSTDSTR_NONIDEMPOTENT(false, false),
;
private final boolean standardConformingStrings;
private final boolean idempotent;
DefaultSqlSerializationContext(boolean standardConformingStrings, boolean idempotent) {
this.standardConformingStrings = standardConformingStrings;
this.idempotent = idempotent;
}
@Override
public boolean getStandardConformingStrings() {
return standardConformingStrings;
}
@Override
public boolean getIdempotent() {
return idempotent;
}
}
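// Illustration of the standardConformingStrings flag (assumed rendering, not a quote of driver
// output): with standard_conforming_strings=yes a value containing a backslash can be rendered as
// the plain literal 'C:\tmp', whereas with standard_conforming_strings=no the server treats
// backslashes in ordinary literals as escapes, so the backslash has to be doubled ('C:\\tmp').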
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/v3/DescribeRequest.java 0100664 0000000 0000000 00000001475 00000250600 027062 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2015, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.core.v3;
// import org.checkerframework.checker.nullness.qual.Nullable;
/**
* Information for "pending describe queue".
*
* @see QueryExecutorImpl#pendingDescribeStatementQueue
*/
class DescribeRequest {
public final SimpleQuery query;
public final SimpleParameterList parameterList;
public final boolean describeOnly;
public final /* @Nullable */ String statementName;
DescribeRequest(SimpleQuery query, SimpleParameterList parameterList,
boolean describeOnly, /* @Nullable */ String statementName) {
this.query = query;
this.parameterList = parameterList;
this.describeOnly = describeOnly;
this.statementName = statementName;
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/v3/ExecuteRequest.java 0100664 0000000 0000000 00000001201 00000250600 026727 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2015, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.core.v3;
// import org.checkerframework.checker.nullness.qual.Nullable;
/**
* Information for "pending execute queue".
*
* @see QueryExecutorImpl#pendingExecuteQueue
*/
class ExecuteRequest {
public final SimpleQuery query;
public final /* @Nullable */ Portal portal;
public final boolean asSimple;
ExecuteRequest(SimpleQuery query, /* @Nullable */ Portal portal, boolean asSimple) {
this.query = query;
this.portal = portal;
this.asSimple = asSimple;
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/v3/Portal.java 0100664 0000000 0000000 00000003574 00000250600 025234 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2004, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
// Copyright (c) 2004, Open Cloud Limited.
package org.postgresql.core.v3;
import org.postgresql.core.ResultCursor;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.lang.ref.PhantomReference;
import java.nio.charset.StandardCharsets;
/**
* V3 ResultCursor implementation in terms of backend Portals. This holds the state of a single
* Portal. We use a PhantomReference managed by our caller to handle resource cleanup.
*
* @author Oliver Jowett (oliver@opencloud.com)
*/
class Portal implements ResultCursor {
Portal(/* @Nullable */ SimpleQuery query, String portalName) {
this.query = query;
this.portalName = portalName;
this.encodedName = portalName.getBytes(StandardCharsets.UTF_8);
}
@Override
public void close() {
    PhantomReference<?> cleanupRef = this.cleanupRef;
if (cleanupRef != null) {
cleanupRef.clear();
cleanupRef.enqueue();
this.cleanupRef = null;
}
}
String getPortalName() {
return portalName;
}
byte[] getEncodedPortalName() {
return encodedName;
}
/* @Nullable */ SimpleQuery getQuery() {
return query;
}
  void setCleanupRef(PhantomReference<?> cleanupRef) {
this.cleanupRef = cleanupRef;
}
@Override
public String toString() {
return portalName;
}
// Holding on to a reference to the generating query has
// the nice side-effect that while this Portal is referenced,
// so is the SimpleQuery, so the underlying statement won't
// be closed while the portal is open (the backend closes
// all open portals when the statement is closed)
private final /* @Nullable */ SimpleQuery query;
private final String portalName;
private final byte[] encodedName;
  private /* @Nullable */ PhantomReference<?> cleanupRef;
}
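// Cleanup note (descriptive, based on how this class is used elsewhere in the package): the caller
// wraps each open Portal in a PhantomReference registered with a ReferenceQueue, so a portal that
// is dropped without close() can still be closed on the backend later when
// QueryExecutorImpl.processDeadPortals() drains that queue before sending new queries.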
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/v3/QueryExecutorImpl.java 0100664 0000000 0000000 00000336724 00000250600 027447 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2003, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
// Copyright (c) 2004, Open Cloud Limited.
package org.postgresql.core.v3;
import static org.postgresql.util.internal.Nullness.castNonNull;
import org.postgresql.PGProperty;
import org.postgresql.copy.CopyIn;
import org.postgresql.copy.CopyOperation;
import org.postgresql.copy.CopyOut;
import org.postgresql.core.CommandCompleteParser;
import org.postgresql.core.Encoding;
import org.postgresql.core.EncodingPredictor;
import org.postgresql.core.Field;
import org.postgresql.core.NativeQuery;
import org.postgresql.core.Notification;
import org.postgresql.core.Oid;
import org.postgresql.core.PGBindException;
import org.postgresql.core.PGStream;
import org.postgresql.core.ParameterList;
import org.postgresql.core.Parser;
import org.postgresql.core.PgMessageType;
import org.postgresql.core.ProtocolVersion;
import org.postgresql.core.Query;
import org.postgresql.core.QueryExecutor;
import org.postgresql.core.QueryExecutorBase;
import org.postgresql.core.ReplicationProtocol;
import org.postgresql.core.ResultCursor;
import org.postgresql.core.ResultHandler;
import org.postgresql.core.ResultHandlerBase;
import org.postgresql.core.ResultHandlerDelegate;
import org.postgresql.core.SqlCommand;
import org.postgresql.core.SqlCommandType;
import org.postgresql.core.TransactionState;
import org.postgresql.core.Tuple;
import org.postgresql.core.v3.adaptivefetch.AdaptiveFetchCache;
import org.postgresql.core.v3.replication.V3ReplicationProtocol;
import org.postgresql.jdbc.AutoSave;
import org.postgresql.jdbc.BatchResultHandler;
import org.postgresql.jdbc.ResourceLock;
import org.postgresql.jdbc.TimestampUtils;
import org.postgresql.util.ByteStreamWriter;
import org.postgresql.util.GT;
import org.postgresql.util.PSQLException;
import org.postgresql.util.PSQLState;
import org.postgresql.util.PSQLWarning;
import org.postgresql.util.ServerErrorMessage;
import org.postgresql.util.internal.IntSet;
import org.postgresql.util.internal.SourceStreamIOException;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.io.IOException;
import java.lang.ref.PhantomReference;
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.net.Socket;
import java.net.SocketException;
import java.net.SocketTimeoutException;
import java.nio.charset.StandardCharsets;
import java.sql.SQLException;
import java.sql.SQLWarning;
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Locale;
import java.util.Objects;
import java.util.Properties;
import java.util.Set;
import java.util.TimeZone;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.logging.Level;
import java.util.logging.Logger;
/**
* QueryExecutor implementation for the V3 protocol.
*/
public class QueryExecutorImpl extends QueryExecutorBase {
private static final Logger LOGGER = Logger.getLogger(QueryExecutorImpl.class.getName());
private static final Field[] NO_FIELDS = new Field[0];
static {
//canonicalize commonly seen strings to reduce memory and speed comparisons
Encoding.canonicalize("application_name");
Encoding.canonicalize("client_encoding");
Encoding.canonicalize("DateStyle");
Encoding.canonicalize("integer_datetimes");
Encoding.canonicalize("off");
Encoding.canonicalize("on");
Encoding.canonicalize("server_encoding");
Encoding.canonicalize("server_version");
Encoding.canonicalize("server_version_num");
Encoding.canonicalize("standard_conforming_strings");
Encoding.canonicalize("TimeZone");
Encoding.canonicalize("UTF8");
Encoding.canonicalize("UTF-8");
Encoding.canonicalize("in_hot_standby");
}
/**
* TimeZone of the current connection (TimeZone backend parameter).
*/
private /* @Nullable */ TimeZone timeZone;
/**
* application_name connection property.
*/
private /* @Nullable */ String applicationName;
/**
* True if server uses integers for date and time fields. False if server uses double.
*/
private boolean integerDateTimes;
/**
* Bit set that has a bit set for each oid which should be received using binary format.
*/
private final IntSet useBinaryReceiveForOids = new IntSet();
/**
* Bit set that has a bit set for each oid which should be sent using binary format.
*/
private final IntSet useBinarySendForOids = new IntSet();
/**
   * This is a fake query object so processResults can distinguish the "ReadyForQuery" message
   * that follows a Sync from the one that follows a simple execute (aka 'Q').
*/
@SuppressWarnings("method.invocation")
private final SimpleQuery sync = (SimpleQuery) createQuery("SYNC", false, true).query;
private short deallocateEpoch;
/**
   * This caches the latest observed {@code set search_path} query so that resetting the prepared
   * statement cache can be skipped when the same {@code set search_path} value is set repeatedly.
*/
private /* @Nullable */ String lastSetSearchPathQuery;
/**
* The exception that caused the last transaction to fail.
*/
private /* @Nullable */ SQLException transactionFailCause;
private final ReplicationProtocol replicationProtocol;
/**
* {@code CommandComplete(B)} messages are quite common, so we reuse instance to parse those
*/
private final CommandCompleteParser commandCompleteParser = new CommandCompleteParser();
private final AdaptiveFetchCache adaptiveFetchCache;
@SuppressWarnings({"assignment", "argument",
"method.invocation"})
public QueryExecutorImpl(PGStream pgStream,
int cancelSignalTimeout, Properties info) throws SQLException, IOException {
super(pgStream, cancelSignalTimeout, info);
long maxResultBuffer = pgStream.getMaxResultBuffer();
this.adaptiveFetchCache = new AdaptiveFetchCache(maxResultBuffer, info);
this.allowEncodingChanges = PGProperty.ALLOW_ENCODING_CHANGES.getBoolean(info);
this.cleanupSavePoints = PGProperty.CLEANUP_SAVEPOINTS.getBoolean(info);
// assignment, argument
this.replicationProtocol = new V3ReplicationProtocol(this, pgStream);
readStartupMessages();
}
@Override
public ProtocolVersion getProtocolVersion() {
return protocolVersion;
}
/**
* Supplement to synchronization of public methods on current QueryExecutor.
*
   * Necessary for keeping the connection intact between calls to public methods that share state,
   * such as the COPY subprotocol. waitOnLock() must be called at the beginning of each connection
   * access point.
*
* Public methods sharing that state must then be synchronized among themselves. Normal method
* synchronization typically suffices for that.
*
* See notes on related methods as well as currentCopy() below.
*/
private /* @Nullable */ Object lockedFor;
/**
* Obtain lock over this connection for given object, blocking to wait if necessary.
*
* @param obtainer object that gets the lock. Normally current thread.
* @throws PSQLException when already holding the lock or getting interrupted.
*/
private void lock(Object obtainer) throws PSQLException {
if (lockedFor == obtainer) {
throw new PSQLException(GT.tr("Tried to obtain lock while already holding it"),
PSQLState.OBJECT_NOT_IN_STATE);
}
waitOnLock();
lockedFor = obtainer;
}
/**
* Release lock on this connection presumably held by given object.
*
* @param holder object that holds the lock. Normally current thread.
* @throws PSQLException when this thread does not hold the lock
*/
private void unlock(Object holder) throws PSQLException {
if (lockedFor != holder) {
throw new PSQLException(GT.tr("Tried to break lock on database connection"),
PSQLState.OBJECT_NOT_IN_STATE);
}
lockedFor = null;
lockCondition.signal();
}
/**
* Wait until our lock is released. Execution of a single synchronized method can then continue
* without further ado. Must be called at beginning of each synchronized public method.
*/
private void waitOnLock() throws PSQLException {
while (lockedFor != null) {
try {
lockCondition.await();
} catch (InterruptedException ie) {
Thread.currentThread().interrupt();
throw new PSQLException(
GT.tr("Interrupted while waiting to obtain lock on database connection"),
PSQLState.OBJECT_NOT_IN_STATE, ie);
}
}
}
/**
* @param holder object assumed to hold the lock
* @return whether given object actually holds the lock
*/
boolean hasLockOn(/* @Nullable */ Object holder) {
try (ResourceLock ignore = lock.obtain()) {
return lockedFor == holder;
}
}
/**
* @param holder object assumed to hold the lock
* @return whether given object actually holds the lock
*/
private boolean hasLock(/* @Nullable */ Object holder) {
return lockedFor == holder;
}
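  // Typical lifecycle of this lock in the COPY subprotocol (descriptive summary of the code
  // below): initCopy() calls lock(op) once the server announces a CopyIn/CopyOut/CopyBoth
  // response; writeToCopy()/readFromCopy()/endCopy() check hasLock(op) before touching the stream;
  // and processCopyResults() calls unlock(op) when ReadyForQuery arrives, freeing the connection
  // for other operations.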
//
// Query parsing
//
@Override
public Query createSimpleQuery(String sql) throws SQLException {
    List<NativeQuery> queries = Parser.parseJdbcSql(sql,
getStandardConformingStrings(), false, true,
isReWriteBatchedInsertsEnabled(), getQuoteReturningIdentifiers());
return wrap(queries);
}
@Override
  public Query wrap(List<NativeQuery> queries) {
if (queries.isEmpty()) {
// Empty query
return emptyQuery;
}
if (queries.size() == 1) {
NativeQuery firstQuery = queries.get(0);
if (isReWriteBatchedInsertsEnabled()
&& firstQuery.getCommand().isBatchedReWriteCompatible()) {
int valuesBraceOpenPosition =
firstQuery.getCommand().getBatchRewriteValuesBraceOpenPosition();
int valuesBraceClosePosition =
firstQuery.getCommand().getBatchRewriteValuesBraceClosePosition();
return new BatchedQuery(firstQuery, this, valuesBraceOpenPosition,
valuesBraceClosePosition, isColumnSanitiserDisabled());
} else {
return new SimpleQuery(firstQuery, this, isColumnSanitiserDisabled());
}
}
// Multiple statements.
SimpleQuery[] subqueries = new SimpleQuery[queries.size()];
int[] offsets = new int[subqueries.length];
int offset = 0;
for (int i = 0; i < queries.size(); i++) {
NativeQuery nativeQuery = queries.get(i);
offsets[i] = offset;
subqueries[i] = new SimpleQuery(nativeQuery, this, isColumnSanitiserDisabled());
offset += nativeQuery.bindPositions.length;
}
return new CompositeQuery(subqueries, offsets);
}
//
// Query execution
//
private int updateQueryMode(int flags) {
switch (getPreferQueryMode()) {
case SIMPLE:
return flags | QUERY_EXECUTE_AS_SIMPLE;
case EXTENDED:
return flags & ~QUERY_EXECUTE_AS_SIMPLE;
default:
return flags;
}
}
@Override
public void execute(Query query, /* @Nullable */ ParameterList parameters,
ResultHandler handler,
int maxRows, int fetchSize, int flags) throws SQLException {
execute(query, parameters, handler, maxRows, fetchSize, flags, false);
}
@Override
public void execute(Query query, /* @Nullable */ ParameterList parameters,
ResultHandler handler,
int maxRows, int fetchSize, int flags, boolean adaptiveFetch) throws SQLException {
try (ResourceLock ignore = lock.obtain()) {
waitOnLock();
if (LOGGER.isLoggable(Level.FINEST)) {
LOGGER.log(Level.FINEST, " simple execute, handler={0}, maxRows={1}, fetchSize={2}, flags={3}",
new Object[]{handler, maxRows, fetchSize, flags});
}
if (parameters == null) {
parameters = SimpleQuery.NO_PARAMETERS;
}
flags = updateQueryMode(flags);
boolean describeOnly = (QUERY_DESCRIBE_ONLY & flags) != 0;
((V3ParameterList) parameters).convertFunctionOutParameters();
// Check parameters are all set..
if (!describeOnly) {
((V3ParameterList) parameters).checkAllParametersSet();
}
boolean autosave = false;
try {
try {
handler = sendQueryPreamble(handler, flags);
autosave = sendAutomaticSavepoint(query, flags);
sendQuery(query, (V3ParameterList) parameters, maxRows, fetchSize, flags,
handler, null, adaptiveFetch);
if ((flags & QueryExecutor.QUERY_EXECUTE_AS_SIMPLE) != 0) {
// Sync message is not required for 'Q' execution as 'Q' ends with ReadyForQuery message
// on its own
} else {
sendSync();
}
processResults(handler, flags, adaptiveFetch);
estimatedReceiveBufferBytes = 0;
} catch (PGBindException se) {
// There are three causes of this error, an
// invalid total Bind message length, a
// BinaryStream that cannot provide the amount
// of data claimed by the length argument, and
// a BinaryStream that throws an Exception
// when reading.
//
// We simply do not send the Execute message
// so we can just continue on as if nothing
// has happened. Perhaps we need to
// introduce an error here to force the
// caller to rollback if there is a
// transaction in progress?
//
sendSync();
processResults(handler, flags, adaptiveFetch);
estimatedReceiveBufferBytes = 0;
handler
.handleError(new PSQLException(GT.tr("Unable to bind parameter values for statement."),
PSQLState.INVALID_PARAMETER_VALUE, se.getIOException()));
}
} catch (IOException e) {
abort();
handler.handleError(
new PSQLException(GT.tr("An I/O error occurred while sending to the backend."),
PSQLState.CONNECTION_FAILURE, e));
}
try {
handler.handleCompletion();
if (cleanupSavePoints) {
releaseSavePoint(autosave);
}
} catch (SQLException e) {
rollbackIfRequired(autosave, e);
}
}
}
private boolean sendAutomaticSavepoint(Query query, int flags) throws IOException {
if (((flags & QueryExecutor.QUERY_SUPPRESS_BEGIN) == 0
|| getTransactionState() == TransactionState.OPEN)
&& query != restoreToAutoSave
&& !"COMMIT".equalsIgnoreCase(query.getNativeSql())
&& getAutoSave() != AutoSave.NEVER
// If query has no resulting fields, it cannot fail with 'cached plan must not change result type'
// thus no need to set a savepoint before such query
&& (getAutoSave() == AutoSave.ALWAYS
// If CompositeQuery is observed, just assume it might fail and set the savepoint
|| !(query instanceof SimpleQuery)
|| ((SimpleQuery) query).getFields() != null)) {
/*
create a different SAVEPOINT the first time so that all subsequent SAVEPOINTS can be released
easily. There have been reports of server resources running out if there are too many
SAVEPOINTS.
*/
sendOneQuery(autoSaveQuery, SimpleQuery.NO_PARAMETERS, 1, 0,
QUERY_NO_RESULTS | QUERY_NO_METADATA
// PostgreSQL does not support bind, exec, simple, sync message flow,
// so we force autosavepoint to use simple if the main query is using simple
| QUERY_EXECUTE_AS_SIMPLE);
return true;
}
return false;
}
private void releaseSavePoint(boolean autosave) throws SQLException {
if ( autosave
&& getAutoSave() == AutoSave.ALWAYS
&& getTransactionState() == TransactionState.OPEN) {
try {
sendOneQuery(releaseAutoSave, SimpleQuery.NO_PARAMETERS, 1, 0,
QUERY_NO_RESULTS | QUERY_NO_METADATA
| QUERY_EXECUTE_AS_SIMPLE);
} catch (IOException ex) {
throw new PSQLException(GT.tr("Error releasing savepoint"), PSQLState.IO_ERROR);
}
}
}
private void rollbackIfRequired(boolean autosave, SQLException e) throws SQLException {
if (autosave
&& getTransactionState() == TransactionState.FAILED
&& (getAutoSave() == AutoSave.ALWAYS || willHealOnRetry(e))) {
try {
// ROLLBACK and AUTOSAVE are executed as simple always to overcome "statement no longer exists S_xx"
execute(restoreToAutoSave, SimpleQuery.NO_PARAMETERS, new ResultHandlerDelegate(null),
1, 0, QUERY_NO_RESULTS | QUERY_NO_METADATA | QUERY_EXECUTE_AS_SIMPLE);
} catch (SQLException e2) {
// That's O(N), sorry
e.setNextException(e2);
}
}
throw e;
}
// Deadlock avoidance:
//
// It's possible for the send and receive streams to get "deadlocked" against each other since
// we do not have a separate thread. The scenario is this: we have two streams:
//
// driver -> TCP buffering -> server
// server -> TCP buffering -> driver
//
// The server behaviour is roughly:
// while true:
// read message
// execute message
// write results
//
// If the server -> driver stream has a full buffer, the write will block.
// If the driver is still writing when this happens, and the driver -> server
// stream also fills up, we deadlock: the driver is blocked on write() waiting
// for the server to read some more data, and the server is blocked on write()
// waiting for the driver to read some more data.
//
// To avoid this, we guess at how much response data we can request from the
// server before the server -> driver stream's buffer is full (MAX_BUFFERED_RECV_BYTES).
// This is the point where the server blocks on write and stops reading data. If we
// reach this point, we force a Sync message and read pending data from the server
// until ReadyForQuery, then go back to writing more queries unless we saw an error.
//
// This is not 100% reliable -- it's only done in the batch-query case and only
// at a reasonably high level (per query, not per message), and it's only an estimate
// -- so it might break. To do it correctly in all cases would seem to require a
// separate send or receive thread as we can only do the Sync-and-read-results
// operation at particular points, and also as we don't really know how much data
// the server is sending.
//
// Our message size estimation is coarse, and disregards asynchronous
// notifications, warnings/info/debug messages, etc, so the response size may be
// quite different from the 250 bytes assumed here even for queries that don't
// return data.
//
// See github issue #194 and #195 .
//
// Assume 64k server->client buffering, which is extremely conservative. A typical
// system will have 200kb or more of buffers for its receive buffers, and the sending
  // system will typically have the same on the send side, giving us 400kb or so to work
// with. (We could check Java's receive buffer size, but prefer to assume a very
// conservative buffer instead, and we don't know how big the server's send
// buffer is.)
//
private static final int MAX_BUFFERED_RECV_BYTES = 64000;
private static final int NODATA_QUERY_RESPONSE_SIZE_BYTES = 250;
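  // Rough arithmetic (derived from the constants above): with a 64,000 byte budget and an assumed
  // 250 byte reply per no-data query, about 64000 / 250 = 256 queries can be pipelined before
  // flushIfDeadlockRisk() forces a Sync and drains results; described statements also add their
  // estimated maximum row size, so they reach the limit sooner.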
@Override
public void execute(Query[] queries, /* @Nullable */ ParameterList[] parameterLists,
BatchResultHandler batchHandler, int maxRows, int fetchSize, int flags) throws SQLException {
execute(queries, parameterLists, batchHandler, maxRows, fetchSize, flags, false);
}
@Override
public void execute(Query[] queries, /* @Nullable */ ParameterList[] parameterLists,
BatchResultHandler batchHandler, int maxRows, int fetchSize, int flags, boolean adaptiveFetch)
throws SQLException {
try (ResourceLock ignore = lock.obtain()) {
waitOnLock();
if (LOGGER.isLoggable(Level.FINEST)) {
LOGGER.log(Level.FINEST, " batch execute {0} queries, handler={1}, maxRows={2}, fetchSize={3}, flags={4}",
new Object[]{queries.length, batchHandler, maxRows, fetchSize, flags});
}
flags = updateQueryMode(flags);
boolean describeOnly = (QUERY_DESCRIBE_ONLY & flags) != 0;
// Check parameters and resolve OIDs.
if (!describeOnly) {
for (ParameterList parameterList : parameterLists) {
if (parameterList != null) {
((V3ParameterList) parameterList).checkAllParametersSet();
}
}
}
boolean autosave = false;
ResultHandler handler = batchHandler;
try {
handler = sendQueryPreamble(batchHandler, flags);
autosave = sendAutomaticSavepoint(queries[0], flags);
estimatedReceiveBufferBytes = 0;
for (int i = 0; i < queries.length; i++) {
Query query = queries[i];
V3ParameterList parameters = (V3ParameterList) parameterLists[i];
if (parameters == null) {
parameters = SimpleQuery.NO_PARAMETERS;
}
sendQuery(query, parameters, maxRows, fetchSize, flags, handler, batchHandler, adaptiveFetch);
if (handler.getException() != null) {
break;
}
}
if (handler.getException() == null) {
if ((flags & QueryExecutor.QUERY_EXECUTE_AS_SIMPLE) != 0) {
// Sync message is not required for 'Q' execution as 'Q' ends with ReadyForQuery message
// on its own
} else {
sendSync();
}
processResults(handler, flags, adaptiveFetch);
estimatedReceiveBufferBytes = 0;
}
} catch (IOException e) {
abort();
handler.handleError(
new PSQLException(GT.tr("An I/O error occurred while sending to the backend."),
PSQLState.CONNECTION_FAILURE, e));
}
try {
handler.handleCompletion();
if (cleanupSavePoints) {
releaseSavePoint(autosave);
}
} catch (SQLException e) {
rollbackIfRequired(autosave, e);
}
}
}
private ResultHandler sendQueryPreamble(final ResultHandler delegateHandler, int flags)
throws IOException {
// First, send CloseStatements for finalized SimpleQueries that had statement names assigned.
processDeadParsedQueries();
processDeadPortals();
// Send BEGIN on first statement in transaction.
if ((flags & QueryExecutor.QUERY_SUPPRESS_BEGIN) != 0
|| getTransactionState() != TransactionState.IDLE) {
return delegateHandler;
}
int beginFlags = QueryExecutor.QUERY_NO_METADATA;
if ((flags & QueryExecutor.QUERY_ONESHOT) != 0) {
beginFlags |= QueryExecutor.QUERY_ONESHOT;
}
beginFlags |= QueryExecutor.QUERY_EXECUTE_AS_SIMPLE;
beginFlags = updateQueryMode(beginFlags);
final SimpleQuery beginQuery = (flags & QueryExecutor.QUERY_READ_ONLY_HINT) == 0 ? beginTransactionQuery : beginReadOnlyTransactionQuery;
sendOneQuery(beginQuery, SimpleQuery.NO_PARAMETERS, 0, 0, beginFlags);
// Insert a handler that intercepts the BEGIN.
return new ResultHandlerDelegate(delegateHandler) {
private boolean sawBegin = false;
@Override
      public void handleResultRows(Query fromQuery, Field[] fields, List<Tuple> tuples,
/* @Nullable */ ResultCursor cursor) {
if (sawBegin) {
super.handleResultRows(fromQuery, fields, tuples, cursor);
}
}
@Override
public void handleCommandStatus(String status, long updateCount, long insertOID) {
if (!sawBegin) {
sawBegin = true;
if (!"BEGIN".equals(status)) {
handleError(new PSQLException(GT.tr("Expected command status BEGIN, got {0}.", status),
PSQLState.PROTOCOL_VIOLATION));
}
} else {
super.handleCommandStatus(status, updateCount, insertOID);
}
}
};
}
//
// Fastpath
//
@Override
@SuppressWarnings("deprecation")
public byte /* @Nullable */ [] fastpathCall(int fnid, ParameterList parameters,
boolean suppressBegin)
throws SQLException {
try (ResourceLock ignore = lock.obtain()) {
waitOnLock();
if (!suppressBegin) {
doSubprotocolBegin();
}
try {
sendFastpathCall(fnid, (SimpleParameterList) parameters);
return receiveFastpathResult();
} catch (IOException ioe) {
abort();
throw new PSQLException(GT.tr("An I/O error occurred while sending to the backend."),
PSQLState.CONNECTION_FAILURE, ioe);
}
}
}
public void doSubprotocolBegin() throws SQLException {
if (getTransactionState() == TransactionState.IDLE) {
LOGGER.log(Level.FINEST, "Issuing BEGIN before fastpath or copy call.");
ResultHandler handler = new ResultHandlerBase() {
private boolean sawBegin = false;
@Override
public void handleCommandStatus(String status, long updateCount, long insertOID) {
if (!sawBegin) {
if (!"BEGIN".equals(status)) {
handleError(
new PSQLException(GT.tr("Expected command status BEGIN, got {0}.", status),
PSQLState.PROTOCOL_VIOLATION));
}
sawBegin = true;
} else {
handleError(new PSQLException(GT.tr("Unexpected command status: {0}.", status),
PSQLState.PROTOCOL_VIOLATION));
}
}
@Override
public void handleWarning(SQLWarning warning) {
// we don't want to ignore warnings and it would be tricky
// to chain them back to the connection, so since we don't
// expect to get them in the first place, we just consider
// them errors.
handleError(warning);
}
};
try {
/* Send BEGIN with simple protocol preferred */
int beginFlags = QueryExecutor.QUERY_NO_METADATA
| QueryExecutor.QUERY_ONESHOT
| QueryExecutor.QUERY_EXECUTE_AS_SIMPLE;
beginFlags = updateQueryMode(beginFlags);
sendOneQuery(beginTransactionQuery, SimpleQuery.NO_PARAMETERS, 0, 0, beginFlags);
sendSync();
processResults(handler, 0);
estimatedReceiveBufferBytes = 0;
} catch (IOException ioe) {
throw new PSQLException(GT.tr("An I/O error occurred while sending to the backend."),
PSQLState.CONNECTION_FAILURE, ioe);
}
}
}
@Override
@SuppressWarnings("deprecation")
public ParameterList createFastpathParameters(int count) {
return new SimpleParameterList(count, this);
}
private void sendFastpathCall(int fnid, SimpleParameterList params)
throws SQLException, IOException {
if (LOGGER.isLoggable(Level.FINEST)) {
LOGGER.log(Level.FINEST, " FE=> FunctionCall({0}, {1} params)", new Object[]{fnid, params.getParameterCount()});
}
//
// Total size = 4 (length)
// + 4 (function OID)
// + 2 (format code count) + N * 2 (format codes)
// + 2 (parameter count) + encodedSize (parameters)
// + 2 (result format)
int paramCount = params.getParameterCount();
int encodedSize = 0;
for (int i = 1; i <= paramCount; i++) {
if (params.isNull(i)) {
encodedSize += 4;
} else {
encodedSize += 4 + params.getV3Length(i);
}
}
pgStream.sendChar(PgMessageType.FUNCTION_CALL_REQ);
pgStream.sendInteger4(4 + 4 + 2 + 2 * paramCount + 2 + encodedSize + 2);
pgStream.sendInteger4(fnid);
pgStream.sendInteger2(paramCount);
for (int i = 1; i <= paramCount; i++) {
pgStream.sendInteger2(params.isBinary(i) ? 1 : 0);
}
pgStream.sendInteger2(paramCount);
for (int i = 1; i <= paramCount; i++) {
if (params.isNull(i)) {
pgStream.sendInteger4(-1);
} else {
pgStream.sendInteger4(params.getV3Length(i)); // Parameter size
params.writeV3Value(i, pgStream);
}
}
pgStream.sendInteger2(1); // Binary result format
pgStream.flush();
}
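  // Size example (derived from the arithmetic above): a call with one non-null 4-byte binary
  // parameter encodes as 4 (length) + 4 (function OID) + 2 (format-code count) + 2 (one format
  // code) + 2 (parameter count) + (4 + 4) (length word plus value) + 2 (result format) = 24 bytes,
  // matching the sendInteger4(4 + 4 + 2 + 2 * paramCount + 2 + encodedSize + 2) call above.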
// Just for API compatibility with previous versions.
@Override
public void processNotifies() throws SQLException {
processNotifies(-1);
}
/**
* @param timeoutMillis when > 0, block for this time
* when =0, block forever
* when < 0, don't block
*/
@Override
public void processNotifies(int timeoutMillis) throws SQLException {
try (ResourceLock ignore = lock.obtain()) {
waitOnLock();
// Asynchronous notifies only arrive when we are not in a transaction
if (getTransactionState() != TransactionState.IDLE) {
return;
}
if (hasNotifications()) {
// No need to timeout when there are already notifications. We just check for more in this case.
timeoutMillis = -1;
}
boolean useTimeout = timeoutMillis > 0;
long startTime = 0;
int oldTimeout = 0;
if (useTimeout) {
startTime = TimeUnit.NANOSECONDS.toMillis(System.nanoTime());
try {
oldTimeout = pgStream.getSocket().getSoTimeout();
} catch (SocketException e) {
throw new PSQLException(GT.tr("An error occurred while trying to get the socket "
+ "timeout."), PSQLState.CONNECTION_FAILURE, e);
}
}
try {
while (timeoutMillis >= 0 || pgStream.hasMessagePending()) {
if (useTimeout && timeoutMillis >= 0) {
setSocketTimeout(timeoutMillis);
}
int c = pgStream.receiveChar();
if (useTimeout && timeoutMillis >= 0) {
setSocketTimeout(0); // Don't timeout after first char
}
switch (c) {
case 'A': // Asynchronous Notify
receiveAsyncNotify();
timeoutMillis = -1;
continue;
case 'E':
// Error Response (response to pretty much everything; backend then skips until Sync)
throw receiveErrorResponse();
case 'N': // Notice Response (warnings / info)
SQLWarning warning = receiveNoticeResponse();
addWarning(warning);
if (useTimeout) {
long newTimeMillis = TimeUnit.NANOSECONDS.toMillis(System.nanoTime());
timeoutMillis += (int) (startTime - newTimeMillis); // Overflows after 49 days, ignore that
startTime = newTimeMillis;
if (timeoutMillis == 0) {
timeoutMillis = -1; // Don't accidentally wait forever
}
}
break;
default:
throw new PSQLException(GT.tr("Unknown Response Type {0}.", (char) c),
PSQLState.CONNECTION_FAILURE);
}
}
} catch (SocketTimeoutException ioe) {
// No notifications this time...
} catch (IOException ioe) {
throw new PSQLException(GT.tr("An I/O error occurred while sending to the backend."),
PSQLState.CONNECTION_FAILURE, ioe);
} finally {
if (useTimeout) {
setSocketTimeout(oldTimeout);
}
}
}
}
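  // Applications normally reach this code through the public API rather than calling it directly.
  // A minimal sketch (illustrative; assumes a Connection named conn that unwraps to
  // org.postgresql.PGConnection):
  //
  //   org.postgresql.PGConnection pg = conn.unwrap(org.postgresql.PGConnection.class);
  //   org.postgresql.PGNotification[] notes = pg.getNotifications(500); // waits up to 500 ms
  //   if (notes != null) {
  //     for (org.postgresql.PGNotification n : notes) {
  //       System.out.println(n.getName() + ": " + n.getParameter());
  //     }
  //   }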
private void setSocketTimeout(int millis) throws PSQLException {
try {
Socket s = pgStream.getSocket();
if (!s.isClosed()) { // Is this check required?
pgStream.setNetworkTimeout(millis);
}
} catch (IOException e) {
throw new PSQLException(GT.tr("An error occurred while trying to reset the socket timeout."),
PSQLState.CONNECTION_FAILURE, e);
}
}
private byte /* @Nullable */ [] receiveFastpathResult() throws IOException, SQLException {
boolean endQuery = false;
SQLException error = null;
byte[] returnValue = null;
while (!endQuery) {
int c = pgStream.receiveChar();
switch (c) {
case PgMessageType.ASYNCHRONOUS_NOTICE:
receiveAsyncNotify();
break;
case PgMessageType.ERROR_RESPONSE:
// response to pretty much everything; backend then skips until Sync
SQLException newError = receiveErrorResponse();
if (error == null) {
error = newError;
} else {
error.setNextException(newError);
}
// keep processing
break;
case PgMessageType.NOTICE_RESPONSE: // warnings / info
SQLWarning warning = receiveNoticeResponse();
addWarning(warning);
break;
case PgMessageType.READY_FOR_QUERY_RESPONSE: // eventual response to Sync
receiveRFQ();
endQuery = true;
break;
case PgMessageType.FUNCTION_CALL_RESPONSE:
@SuppressWarnings("unused")
int msgLen = pgStream.receiveInteger4();
int valueLen = pgStream.receiveInteger4();
LOGGER.log(Level.FINEST, " <=BE FunctionCallResponse({0} bytes)", valueLen);
if (valueLen != -1) {
byte[] buf = new byte[valueLen];
pgStream.receive(buf, 0, valueLen);
returnValue = buf;
}
break;
case PgMessageType.PARAMETER_STATUS_RESPONSE: // Parameter Status
try {
receiveParameterStatus();
} catch (SQLException e) {
if (error == null) {
error = e;
} else {
error.setNextException(e);
}
endQuery = true;
}
break;
default:
throw new PSQLException(GT.tr("Unknown Response Type {0}.", (char) c),
PSQLState.CONNECTION_FAILURE);
}
}
// did we get an error during this query?
if (error != null) {
throw error;
}
return returnValue;
}
//
// Copy subprotocol implementation
//
/**
* Sends given query to BE to start, initialize and lock connection for a CopyOperation.
*
* @param sql COPY FROM STDIN / COPY TO STDOUT statement
* @return CopyIn or CopyOut operation object
* @throws SQLException on failure
*/
@Override
public CopyOperation startCopy(String sql, boolean suppressBegin)
throws SQLException {
try (ResourceLock ignore = lock.obtain()) {
waitOnLock();
if (!suppressBegin) {
doSubprotocolBegin();
}
byte[] buf = sql.getBytes(StandardCharsets.UTF_8);
try {
LOGGER.log(Level.FINEST, " FE=> Query(CopyStart)");
pgStream.sendChar(PgMessageType.QUERY_REQUEST);
pgStream.sendInteger4(buf.length + 4 + 1);
pgStream.send(buf);
pgStream.sendChar(0);
pgStream.flush();
return castNonNull(processCopyResults(null, true));
// expect a CopyInResponse or CopyOutResponse to our query above
} catch (IOException ioe) {
throw new PSQLException(GT.tr("Database connection failed when starting copy"),
PSQLState.CONNECTION_FAILURE, ioe);
}
}
}
/**
   * Locks the connection and calls the initializer for a new CopyOperation. Called via startCopy ->
   * processCopyResults.
*
* @param op an uninitialized CopyOperation
* @throws SQLException on locking failure
* @throws IOException on database connection failure
*/
private void initCopy(CopyOperationImpl op) throws SQLException, IOException {
try (ResourceLock ignore = lock.obtain()) {
pgStream.receiveInteger4(); // length not used
int rowFormat = pgStream.receiveChar();
int numFields = pgStream.receiveInteger2();
int[] fieldFormats = new int[numFields];
for (int i = 0; i < numFields; i++) {
fieldFormats[i] = pgStream.receiveInteger2();
}
lock(op);
op.init(this, rowFormat, fieldFormats);
}
}
/**
* Finishes a copy operation and unlocks connection discarding any exchanged data.
*
* @param op the copy operation presumably currently holding lock on this connection
* @throws SQLException on any additional failure
*/
public void cancelCopy(CopyOperationImpl op) throws SQLException {
if (!hasLock(op)) {
throw new PSQLException(GT.tr("Tried to cancel an inactive copy operation"),
PSQLState.OBJECT_NOT_IN_STATE);
}
SQLException error = null;
int errors = 0;
try {
if (op instanceof CopyIn) {
try (ResourceLock ignore = lock.obtain()) {
LOGGER.log(Level.FINEST, "FE => CopyFail");
final byte[] msg = "Copy cancel requested".getBytes(StandardCharsets.US_ASCII);
pgStream.sendChar(PgMessageType.COPY_FAIL); // CopyFail
pgStream.sendInteger4(5 + msg.length);
pgStream.send(msg);
pgStream.sendChar(0);
pgStream.flush();
do {
try {
processCopyResults(op, true); // discard rest of input
} catch (SQLException se) { // expected error response to failing copy
errors++;
if (error != null) {
SQLException e = se;
SQLException next;
while ((next = e.getNextException()) != null) {
e = next;
}
e.setNextException(error);
}
error = se;
}
} while (hasLock(op));
}
} else if (op instanceof CopyOut) {
sendQueryCancel();
}
} catch (IOException ioe) {
throw new PSQLException(GT.tr("Database connection failed when canceling copy operation"),
PSQLState.CONNECTION_FAILURE, ioe);
} finally {
// Need to ensure the lock isn't held anymore, or else
// future operations, rather than failing due to the
// broken connection, will simply hang waiting for this
// lock.
try (ResourceLock ignore = lock.obtain()) {
if (hasLock(op)) {
unlock(op);
}
}
}
if (op instanceof CopyIn) {
if (errors < 1) {
throw new PSQLException(GT.tr("Missing expected error response to copy cancel request"),
PSQLState.COMMUNICATION_ERROR);
} else if (errors > 1) {
throw new PSQLException(
GT.tr("Got {0} error responses to single copy cancel request", String.valueOf(errors)),
PSQLState.COMMUNICATION_ERROR, error);
}
}
}
/**
* Finishes writing to copy and unlocks connection.
*
* @param op the copy operation presumably currently holding lock on this connection
* @return number of rows updated for server versions 8.2 or newer
* @throws SQLException on failure
*/
public long endCopy(CopyOperationImpl op) throws SQLException {
try (ResourceLock ignore = lock.obtain()) {
if (!hasLock(op)) {
throw new PSQLException(GT.tr("Tried to end inactive copy"), PSQLState.OBJECT_NOT_IN_STATE);
}
try {
LOGGER.log(Level.FINEST, " FE=> CopyDone");
pgStream.sendChar(PgMessageType.COPY_DONE); // CopyDone
pgStream.sendInteger4(4);
pgStream.flush();
do {
processCopyResults(op, true);
} while (hasLock(op));
return op.getHandledRowCount();
} catch (IOException ioe) {
throw new PSQLException(GT.tr("Database connection failed when ending copy"),
PSQLState.CONNECTION_FAILURE, ioe);
}
}
}
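  // A minimal CopyIn sketch tying writeToCopy()/endCopy()/cancelCopy() together (illustrative;
  // assumes a CopyManager obtained via org.postgresql.PGConnection.getCopyAPI()):
  //
  //   CopyIn in = copyManager.copyIn("COPY mytable FROM STDIN");
  //   try {
  //     byte[] chunk = "1\tfoo\n".getBytes(java.nio.charset.StandardCharsets.UTF_8);
  //     in.writeToCopy(chunk, 0, chunk.length);
  //     long rows = in.endCopy();   // sends CopyDone and waits for CommandComplete
  //   } catch (SQLException e) {
  //     if (in.isActive()) {
  //       in.cancelCopy();          // sends CopyFail so the server aborts the copy
  //     }
  //     throw e;
  //   }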
/**
* Sends data during a live COPY IN operation. Only unlocks the connection if server suddenly
   * returns CommandComplete, which should not happen.
*
* @param op the CopyIn operation presumably currently holding lock on this connection
* @param data bytes to send
* @param off index of first byte to send (usually 0)
* @param siz number of bytes to send (usually data.length)
* @throws SQLException on failure
*/
public void writeToCopy(CopyOperationImpl op, byte[] data, int off, int siz)
throws SQLException {
try (ResourceLock ignore = lock.obtain()) {
if (!hasLock(op)) {
throw new PSQLException(GT.tr("Tried to write to an inactive copy operation"),
PSQLState.OBJECT_NOT_IN_STATE);
}
LOGGER.log(Level.FINEST, " FE=> CopyData({0})", siz);
try {
pgStream.sendChar(PgMessageType.COPY_DATA);
pgStream.sendInteger4(siz + 4);
pgStream.send(data, off, siz);
} catch (IOException ioe) {
throw new PSQLException(GT.tr("Database connection failed when writing to copy"),
PSQLState.CONNECTION_FAILURE, ioe);
}
}
}
/**
* Sends data during a live COPY IN operation. Only unlocks the connection if server suddenly
   * returns CommandComplete, which should not happen.
*
* @param op the CopyIn operation presumably currently holding lock on this connection
* @param from the source of bytes, e.g. a ByteBufferByteStreamWriter
* @throws SQLException on failure
*/
public void writeToCopy(CopyOperationImpl op, ByteStreamWriter from)
throws SQLException {
try (ResourceLock ignore = lock.obtain()) {
if (!hasLock(op)) {
throw new PSQLException(GT.tr("Tried to write to an inactive copy operation"),
PSQLState.OBJECT_NOT_IN_STATE);
}
int siz = from.getLength();
LOGGER.log(Level.FINEST, " FE=> CopyData({0})", siz);
try {
pgStream.sendChar(PgMessageType.COPY_DATA);
pgStream.sendInteger4(siz + 4);
pgStream.send(from);
} catch (IOException ioe) {
throw new PSQLException(GT.tr("Database connection failed when writing to copy"),
PSQLState.CONNECTION_FAILURE, ioe);
}
}
}
public void flushCopy(CopyOperationImpl op) throws SQLException {
try (ResourceLock ignore = lock.obtain()) {
if (!hasLock(op)) {
throw new PSQLException(GT.tr("Tried to write to an inactive copy operation"),
PSQLState.OBJECT_NOT_IN_STATE);
}
try {
pgStream.flush();
} catch (IOException ioe) {
throw new PSQLException(GT.tr("Database connection failed when writing to copy"),
PSQLState.CONNECTION_FAILURE, ioe);
}
}
}
/**
* Wait for a row of data to be received from server on an active copy operation
* Connection gets unlocked by processCopyResults() at end of operation.
*
* @param op the copy operation presumably currently holding lock on this connection
* @param block whether to block waiting for input
* @throws SQLException on any failure
*/
void readFromCopy(CopyOperationImpl op, boolean block) throws SQLException {
try (ResourceLock ignore = lock.obtain()) {
if (!hasLock(op)) {
throw new PSQLException(GT.tr("Tried to read from inactive copy"),
PSQLState.OBJECT_NOT_IN_STATE);
}
try {
processCopyResults(op, block); // expect a call to handleCopydata() to store the data
} catch (IOException ioe) {
throw new PSQLException(GT.tr("Database connection failed when reading from copy"),
PSQLState.CONNECTION_FAILURE, ioe);
}
}
}
AtomicBoolean processingCopyResults = new AtomicBoolean(false);
/**
* Handles copy sub protocol responses from server. Unlocks at end of sub protocol, so operations
* on pgStream or QueryExecutor are not allowed in a method after calling this!
*
* @param block whether to block waiting for input
* @return CopyIn when COPY FROM STDIN starts; CopyOut when COPY TO STDOUT starts; null when copy
* ends; otherwise, the operation given as parameter.
* @throws SQLException in case of misuse
* @throws IOException from the underlying connection
*/
/* @Nullable */ CopyOperationImpl processCopyResults(/* @Nullable */ CopyOperationImpl op, boolean block)
throws SQLException, IOException {
/*
* fixes issue #1592 where one thread closes the stream and another is reading it
*/
if (pgStream.isClosed()) {
throw new PSQLException(GT.tr("PGStream is closed"),
PSQLState.CONNECTION_DOES_NOT_EXIST);
}
/*
* This is a hack as we should not end up here, but sometimes do with large copy operations.
*/
if (!processingCopyResults.compareAndSet(false, true)) {
LOGGER.log(Level.INFO, "Ignoring request to process copy results, already processing");
return null;
}
// put this all in a try, finally block and reset the processingCopyResults in the finally clause
try {
boolean endReceiving = false;
SQLException error = null;
SQLException errors = null;
int len;
while (!endReceiving && (block || pgStream.hasMessagePending())) {
// There is a bug in the server's implementation of the copy
// protocol. It returns command complete immediately upon
// receiving the EOF marker in the binary protocol,
// potentially before we've issued CopyDone. When we are not
// blocking, we don't think we are done, so we hold off on
// processing command complete and any subsequent messages
// until we actually are done with the copy.
//
if (!block) {
int c = pgStream.peekChar();
if (c == PgMessageType.COMMAND_COMPLETE_RESPONSE) {
LOGGER.log(Level.FINEST, " <=BE CommandStatus, Ignored until CopyDone");
break;
}
}
int c = pgStream.receiveChar();
switch (c) {
case PgMessageType.ASYNCHRONOUS_NOTICE:
LOGGER.log(Level.FINEST, " <=BE Asynchronous Notification while copying");
receiveAsyncNotify();
break;
case PgMessageType.NOTICE_RESPONSE:
LOGGER.log(Level.FINEST, " <=BE Notification while copying");
addWarning(receiveNoticeResponse());
break;
case PgMessageType.COMMAND_COMPLETE_RESPONSE: // Command Complete
String status = receiveCommandStatus();
try {
if (op == null) {
throw new PSQLException(GT
.tr("Received CommandComplete ''{0}'' without an active copy operation", status),
PSQLState.OBJECT_NOT_IN_STATE);
}
op.handleCommandStatus(status);
} catch (SQLException se) {
error = se;
}
block = true;
break;
case PgMessageType.ERROR_RESPONSE: // ErrorMessage (expected response to CopyFail)
error = receiveErrorResponse();
// We've received the error and we now expect to receive
// Ready for query, but we must block because it might still be
// on the wire and not here yet.
block = true;
break;
case PgMessageType.COPY_IN_RESPONSE: // CopyInResponse
LOGGER.log(Level.FINEST, " <=BE CopyInResponse");
if (op != null) {
error = new PSQLException(GT.tr("Got CopyInResponse from server during an active {0}",
op.getClass().getName()), PSQLState.OBJECT_NOT_IN_STATE);
}
op = new CopyInImpl();
initCopy(op);
endReceiving = true;
break;
case PgMessageType.COPY_OUT_RESPONSE: // CopyOutResponse
LOGGER.log(Level.FINEST, " <=BE CopyOutResponse");
if (op != null) {
error = new PSQLException(GT.tr("Got CopyOutResponse from server during an active {0}",
op.getClass().getName()), PSQLState.OBJECT_NOT_IN_STATE);
}
op = new CopyOutImpl();
initCopy(op);
endReceiving = true;
break;
case PgMessageType.COPY_BOTH_RESPONSE: // CopyBothResponse
LOGGER.log(Level.FINEST, " <=BE CopyBothResponse");
if (op != null) {
error = new PSQLException(GT.tr("Got CopyBothResponse from server during an active {0}",
op.getClass().getName()), PSQLState.OBJECT_NOT_IN_STATE);
}
op = new CopyDualImpl();
initCopy(op);
endReceiving = true;
break;
case PgMessageType.COPY_DATA: // CopyData
LOGGER.log(Level.FINEST, " <=BE CopyData");
len = pgStream.receiveInteger4() - 4;
assert len > 0 : "Copy Data length must be greater than 4";
byte[] buf = pgStream.receive(len);
if (op == null) {
error = new PSQLException(GT.tr("Got CopyData without an active copy operation"),
PSQLState.OBJECT_NOT_IN_STATE);
} else if (!(op instanceof CopyOut)) {
error = new PSQLException(
GT.tr("Unexpected copydata from server for {0}", op.getClass().getName()),
PSQLState.COMMUNICATION_ERROR);
} else {
op.handleCopydata(buf);
}
endReceiving = true;
break;
case PgMessageType.COPY_DONE: // CopyDone (expected after all copydata received)
LOGGER.log(Level.FINEST, " <=BE CopyDone");
len = pgStream.receiveInteger4() - 4;
if (len > 0) {
pgStream.receive(len); // not in specification; should never appear
}
if (!(op instanceof CopyOut)) {
error = new PSQLException("Got CopyDone while not copying from server",
PSQLState.OBJECT_NOT_IN_STATE);
}
// keep receiving since we expect a CommandComplete
block = true;
break;
case PgMessageType.PARAMETER_STATUS_RESPONSE: // Parameter Status
try {
receiveParameterStatus();
} catch (SQLException e) {
error = e;
endReceiving = true;
}
break;
case PgMessageType.READY_FOR_QUERY_RESPONSE: // ReadyForQuery: After FE:CopyDone => BE:CommandComplete
receiveRFQ();
if (op != null && hasLock(op)) {
unlock(op);
}
op = null;
endReceiving = true;
break;
// If the user sends a non-copy query, we've got to handle some additional things.
//
case PgMessageType.ROW_DESCRIPTION_RESPONSE: // Row Description (response to Describe)
LOGGER.log(Level.FINEST, " <=BE RowDescription (during copy ignored)");
skipMessage();
break;
case PgMessageType.DATA_ROW_RESPONSE: // DataRow
LOGGER.log(Level.FINEST, " <=BE DataRow (during copy ignored)");
skipMessage();
break;
default:
throw new IOException(
GT.tr("Unexpected packet type during copy: {0}", Integer.toString(c)));
}
// Collect errors into a neat chain for completeness
if (error != null) {
if (errors != null) {
error.setNextException(errors);
}
errors = error;
error = null;
}
}
if (errors != null) {
throw errors;
}
return op;
} finally {
/*
reset here in the finally block to make sure it really is cleared
*/
processingCopyResults.set(false);
}
}
/*
* To prevent client/server protocol deadlocks, we try to manage the estimated recv buffer size
* and force a sync +flush and process results if we think it might be getting too full.
*
* See the comments above MAX_BUFFERED_RECV_BYTES's declaration for details.
*/
private void flushIfDeadlockRisk(Query query, boolean disallowBatching,
ResultHandler resultHandler,
/* @Nullable */ BatchResultHandler batchHandler,
final int flags) throws IOException {
// Assume all statements need at least this much reply buffer space,
// plus params
estimatedReceiveBufferBytes += NODATA_QUERY_RESPONSE_SIZE_BYTES;
SimpleQuery sq = (SimpleQuery) query;
if (sq.isStatementDescribed()) {
/*
* Estimate the response size of the fields and add it to the expected response size.
*
* It's impossible for us to estimate the rowcount. We'll assume one row, as that's the common
* case for batches and we're leaving plenty of breathing room in this approach. It's still
* not deadlock-proof though; see pgjdbc github issues #194 and #195.
*/
int maxResultRowSize = sq.getMaxResultRowSize();
if (maxResultRowSize >= 0) {
estimatedReceiveBufferBytes += maxResultRowSize;
} else {
LOGGER.log(Level.FINEST, "Couldn't estimate result size or result size unbounded, "
+ "disabling batching for this query.");
disallowBatching = true;
}
} else {
/*
* We only describe a statement if we're expecting results from it, so it's legal to batch
       * unprepared statements. We'll abort later if we get any results from them where none are
* expected. For now all we can do is hope the user told us the truth and assume that
* NODATA_QUERY_RESPONSE_SIZE_BYTES is enough to cover it.
*/
}
if (disallowBatching || estimatedReceiveBufferBytes >= MAX_BUFFERED_RECV_BYTES) {
LOGGER.log(Level.FINEST, "Forcing Sync, receive buffer full or batching disallowed");
sendSync();
processResults(resultHandler, flags);
estimatedReceiveBufferBytes = 0;
if (batchHandler != null) {
batchHandler.secureProgress();
}
}
}
/*
* Send a query to the backend.
*/
private void sendQuery(Query query, V3ParameterList parameters, int maxRows, int fetchSize,
int flags, ResultHandler resultHandler,
/* @Nullable */ BatchResultHandler batchHandler, boolean adaptiveFetch) throws IOException, SQLException {
// Now the query itself.
Query[] subqueries = query.getSubqueries();
SimpleParameterList[] subparams = parameters.getSubparams();
// We know this is deprecated, but still respect it in case anyone's using it.
    // PgJDBC itself no longer does.
@SuppressWarnings("deprecation")
boolean disallowBatching = (flags & QueryExecutor.QUERY_DISALLOW_BATCHING) != 0;
if (subqueries == null) {
flushIfDeadlockRisk(query, disallowBatching, resultHandler, batchHandler, flags);
// If we saw errors, don't send anything more.
if (resultHandler.getException() == null) {
if (fetchSize != 0) {
adaptiveFetchCache.addNewQuery(adaptiveFetch, query);
}
sendOneQuery((SimpleQuery) query, (SimpleParameterList) parameters, maxRows, fetchSize,
flags);
}
} else {
for (int i = 0; i < subqueries.length; i++) {
final Query subquery = subqueries[i];
flushIfDeadlockRisk(subquery, disallowBatching, resultHandler, batchHandler, flags);
// If we saw errors, don't send anything more.
if (resultHandler.getException() != null) {
break;
}
// In the situation where parameters is already
// NO_PARAMETERS it cannot know the correct
// number of array elements to return in the
// above call to getSubparams(), so it must
// return null which we check for here.
//
SimpleParameterList subparam = SimpleQuery.NO_PARAMETERS;
if (subparams != null) {
subparam = subparams[i];
}
if (fetchSize != 0) {
adaptiveFetchCache.addNewQuery(adaptiveFetch, subquery);
}
sendOneQuery((SimpleQuery) subquery, subparam, maxRows, fetchSize, flags);
}
}
}
//
// Message sending
//
private void sendSync() throws IOException {
LOGGER.log(Level.FINEST, " FE=> Sync");
pgStream.sendChar(PgMessageType.SYNC_REQUEST); // Sync
pgStream.sendInteger4(4); // Length
pgStream.flush();
// Below "add queues" are likely not required at all
pendingExecuteQueue.add(new ExecuteRequest(sync, null, true));
pendingDescribePortalQueue.add(sync);
}
private void sendParse(SimpleQuery query, SimpleParameterList params, boolean oneShot)
throws IOException {
// Already parsed, or we have a Parse pending and the types are right?
int[] typeOIDs = params.getTypeOIDs();
if (query.isPreparedFor(typeOIDs, deallocateEpoch)) {
return;
}
// Clean up any existing statement, as we can't use it.
query.unprepare();
processDeadParsedQueries();
// Remove any cached Field values. The re-parsed query might report different
// fields because input parameter types may result in different type inferences
// for unspecified types.
query.setFields(null);
String statementName = null;
if (!oneShot) {
// Generate a statement name to use.
statementName = "S_" + (nextUniqueID++);
// And prepare the new statement.
// NB: Must clone the OID array, as it's a direct reference to
// the SimpleParameterList's internal array that might be modified
// under us.
query.setStatementName(statementName, deallocateEpoch);
query.setPrepareTypes(typeOIDs);
registerParsedQuery(query, statementName);
}
byte[] encodedStatementName = query.getEncodedStatementName();
String nativeSql = query.getNativeSql();
if (LOGGER.isLoggable(Level.FINEST)) {
StringBuilder sbuf = new StringBuilder(" FE=> Parse(stmt=" + statementName + ",query=\"");
sbuf.append(nativeSql);
sbuf.append("\",oids={");
for (int i = 1; i <= params.getParameterCount(); i++) {
if (i != 1) {
sbuf.append(",");
}
sbuf.append(params.getTypeOID(i));
}
sbuf.append("})");
LOGGER.log(Level.FINEST, sbuf.toString());
}
//
// Send Parse.
//
byte[] queryUtf8 = nativeSql.getBytes(StandardCharsets.UTF_8);
// Total size = 4 (size field)
// + N + 1 (statement name, zero-terminated)
// + N + 1 (query, zero terminated)
// + 2 (parameter count) + N * 4 (parameter types)
int encodedSize = 4
+ (encodedStatementName == null ? 0 : encodedStatementName.length) + 1
+ queryUtf8.length + 1
+ 2 + 4 * params.getParameterCount();
pgStream.sendChar(PgMessageType.PARSE_REQUEST); // Parse
pgStream.sendInteger4(encodedSize);
if (encodedStatementName != null) {
pgStream.send(encodedStatementName);
}
pgStream.sendChar(0); // End of statement name
pgStream.send(queryUtf8); // Query string
pgStream.sendChar(0); // End of query string.
pgStream.sendInteger2(params.getParameterCount()); // # of parameter types specified
for (int i = 1; i <= params.getParameterCount(); i++) {
pgStream.sendInteger4(params.getTypeOID(i));
}
pendingParseQueue.add(query);
}
private void sendBind(SimpleQuery query, SimpleParameterList params, /* @Nullable */ Portal portal,
boolean noBinaryTransfer) throws IOException {
//
// Send Bind.
//
String statementName = query.getStatementName();
byte[] encodedStatementName = query.getEncodedStatementName();
byte[] encodedPortalName = portal == null ? null : portal.getEncodedPortalName();
if (LOGGER.isLoggable(Level.FINEST)) {
StringBuilder sbuf = new StringBuilder(" FE=> Bind(stmt=" + statementName + ",portal=" + portal);
for (int i = 1; i <= params.getParameterCount(); i++) {
sbuf.append(",$").append(i).append("=<")
.append(params.toString(i, getStandardConformingStrings()))
.append(">,type=").append(Oid.toString(params.getTypeOID(i)));
}
sbuf.append(")");
LOGGER.log(Level.FINEST, sbuf.toString());
}
// Total size = 4 (size field) + N + 1 (destination portal)
// + N + 1 (statement name)
// + 2 (param format code count) + N * 2 (format codes)
// + 2 (param value count) + N (encoded param value size)
// + 2 (result format code count, 0)
long encodedSize = 0;
for (int i = 1; i <= params.getParameterCount(); i++) {
if (params.isNull(i)) {
encodedSize += 4;
} else {
encodedSize += 4L + params.getV3Length(i);
}
}
Field[] fields = query.getFields();
if (!noBinaryTransfer && query.needUpdateFieldFormats() && fields != null) {
for (Field field : fields) {
if (useBinary(field)) {
field.setFormat(Field.BINARY_FORMAT);
query.setHasBinaryFields(true);
}
}
}
// If text-only results are required (e.g. updateable resultset), and the query has binary columns,
// flip to text format.
if (noBinaryTransfer && query.hasBinaryFields() && fields != null) {
for (Field field : fields) {
if (field.getFormat() != Field.TEXT_FORMAT) {
field.setFormat(Field.TEXT_FORMAT);
}
}
query.resetNeedUpdateFieldFormats();
query.setHasBinaryFields(false);
}
// This is not the number of binary fields, but the total number
// of fields if any of them are binary or zero if all of them
// are text.
int numBinaryFields = !noBinaryTransfer && query.hasBinaryFields() && fields != null
? fields.length : 0;
encodedSize = 4
+ (encodedPortalName == null ? 0 : encodedPortalName.length) + 1
+ (encodedStatementName == null ? 0 : encodedStatementName.length) + 1
+ 2 + params.getParameterCount() * 2L
+ 2 + encodedSize
+ 2 + numBinaryFields * 2L;
// backend's MaxAllocSize is the largest message that can
// be received from a client. If we have a bigger value
// from either very large parameters or incorrect length
// descriptions of setXXXStream we do not send the bind
// message.
//
if (encodedSize > 0x3fffffff) {
throw new PGBindException(new IOException(GT.tr(
"Bind message length {0} too long. This can be caused by very large or incorrect length specifications on InputStream parameters.",
encodedSize)));
}
pgStream.sendChar(PgMessageType.BIND); // Bind
pgStream.sendInteger4((int) encodedSize); // Message size
if (encodedPortalName != null) {
pgStream.send(encodedPortalName); // Destination portal name.
}
pgStream.sendChar(0); // End of portal name.
if (encodedStatementName != null) {
pgStream.send(encodedStatementName); // Source statement name.
}
pgStream.sendChar(0); // End of statement name.
pgStream.sendInteger2(params.getParameterCount()); // # of parameter format codes
for (int i = 1; i <= params.getParameterCount(); i++) {
pgStream.sendInteger2(params.isBinary(i) ? 1 : 0); // Parameter format code
}
pgStream.sendInteger2(params.getParameterCount()); // # of parameter values
// If an error occurs when reading a stream we have to
// continue pumping out data to match the length we
// said we would. Once we've done that we throw
// this exception. Multiple exceptions can occur and
// it really doesn't matter which one is reported back
// to the caller.
//
PGBindException bindException = null;
for (int i = 1; i <= params.getParameterCount(); i++) {
if (params.isNull(i)) {
pgStream.sendInteger4(-1); // Magic size of -1 means NULL
} else {
pgStream.sendInteger4(params.getV3Length(i)); // Parameter size
try {
params.writeV3Value(i, pgStream); // Parameter value
} catch (SourceStreamIOException sse) {
// Remember the error for rethrow later
if (bindException == null) {
bindException = new PGBindException(sse.getCause());
} else {
bindException.addSuppressed(sse.getCause());
}
// Write out the missing bytes so the stream does not corrupt
pgStream.sendZeros(sse.getBytesRemaining());
}
}
}
pgStream.sendInteger2(numBinaryFields); // # of result format codes
for (int i = 0; fields != null && i < numBinaryFields; i++) {
pgStream.sendInteger2(fields[i].getFormat());
}
pendingBindQueue.add(portal == null ? UNNAMED_PORTAL : portal);
if (bindException != null) {
throw bindException;
}
}
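  // Illustrative note (not driver code): the 0x3fffffff check above mirrors the backend's
  // MaxAllocSize (roughly 1 GiB). If the declared lengths of InputStream parameters (for example
  // via setBinaryStream) push the computed encodedSize past that limit, no Bind bytes are written
  // and the failure surfaces as a PGBindException instead.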
/**
* Returns true if the specified field should be retrieved using binary encoding.
*
* @param field The field whose Oid type to analyse.
   * @return True if {@link Field#BINARY_FORMAT} should be used, false if
   *         {@link Field#TEXT_FORMAT} should be used instead.
*/
private boolean useBinary(Field field) {
int oid = field.getOID();
return useBinaryForReceive(oid);
}
private void sendDescribePortal(SimpleQuery query, /* @Nullable */ Portal portal) throws IOException {
//
// Send Describe.
//
LOGGER.log(Level.FINEST, " FE=> Describe(portal={0})", portal);
byte[] encodedPortalName = portal == null ? null : portal.getEncodedPortalName();
// Total size = 4 (size field) + 1 (describe type, 'P') + N + 1 (portal name)
int encodedSize = 4 + 1 + (encodedPortalName == null ? 0 : encodedPortalName.length) + 1;
pgStream.sendChar(PgMessageType.DESCRIBE_REQUEST); // Describe
pgStream.sendInteger4(encodedSize); // message size
pgStream.sendChar(PgMessageType.PORTAL); // Describe (Portal)
if (encodedPortalName != null) {
      pgStream.send(encodedPortalName); // portal name to describe
}
pgStream.sendChar(0); // end of portal name
pendingDescribePortalQueue.add(query);
query.setPortalDescribed(true);
}
private void sendDescribeStatement(SimpleQuery query, SimpleParameterList params,
boolean describeOnly) throws IOException {
// Send Statement Describe
LOGGER.log(Level.FINEST, " FE=> Describe(statement={0})", query.getStatementName());
byte[] encodedStatementName = query.getEncodedStatementName();
    // Total size = 4 (size field) + 1 (describe type, 'S') + N + 1 (statement name)
int encodedSize = 4 + 1 + (encodedStatementName == null ? 0 : encodedStatementName.length) + 1;
pgStream.sendChar(PgMessageType.DESCRIBE_REQUEST); // Describe
pgStream.sendInteger4(encodedSize); // Message size
pgStream.sendChar(PgMessageType.STATEMENT); // Describe (Statement);
if (encodedStatementName != null) {
pgStream.send(encodedStatementName); // Statement name
}
pgStream.sendChar(0); // end message
// Note: statement name can change over time for the same query object
// Thus we take a snapshot of the query name
pendingDescribeStatementQueue.add(
new DescribeRequest(query, params, describeOnly, query.getStatementName()));
pendingDescribePortalQueue.add(query);
query.setStatementDescribed(true);
query.setPortalDescribed(true);
}
private void sendExecute(SimpleQuery query, /* @Nullable */ Portal portal, int limit)
throws IOException {
//
// Send Execute.
//
if (LOGGER.isLoggable(Level.FINEST)) {
LOGGER.log(Level.FINEST, " FE=> Execute(portal={0},limit={1})", new Object[]{portal, limit});
}
byte[] encodedPortalName = portal == null ? null : portal.getEncodedPortalName();
int encodedSize = encodedPortalName == null ? 0 : encodedPortalName.length;
// Total size = 4 (size field) + 1 + N (source portal) + 4 (max rows)
pgStream.sendChar(PgMessageType.EXECUTE_REQUEST); // Execute
pgStream.sendInteger4(4 + 1 + encodedSize + 4); // message size
if (encodedPortalName != null) {
pgStream.send(encodedPortalName); // portal name
}
pgStream.sendChar(0); // portal name terminator
pgStream.sendInteger4(limit); // row limit
pendingExecuteQueue.add(new ExecuteRequest(query, portal, false));
}
private void sendClosePortal(String portalName) throws IOException {
//
// Send Close.
//
LOGGER.log(Level.FINEST, " FE=> ClosePortal({0})", portalName);
byte[] encodedPortalName = portalName == null ? null : portalName.getBytes(StandardCharsets.UTF_8);
int encodedSize = encodedPortalName == null ? 0 : encodedPortalName.length;
// Total size = 4 (size field) + 1 (close type, 'P') + 1 + N (portal name)
pgStream.sendChar(PgMessageType.CLOSE_REQUEST); // Close
pgStream.sendInteger4(4 + 1 + 1 + encodedSize); // message size
pgStream.sendChar(PgMessageType.PORTAL); // Close (Portal)
if (encodedPortalName != null) {
pgStream.send(encodedPortalName);
}
    pgStream.sendChar(0); // end of portal name
}
private void sendCloseStatement(String statementName) throws IOException {
//
// Send Close.
//
LOGGER.log(Level.FINEST, " FE=> CloseStatement({0})", statementName);
byte[] encodedStatementName = statementName.getBytes(StandardCharsets.UTF_8);
// Total size = 4 (size field) + 1 (close type, 'S') + N + 1 (statement name)
pgStream.sendChar(PgMessageType.CLOSE_REQUEST); // Close
pgStream.sendInteger4(4 + 1 + encodedStatementName.length + 1); // message size
pgStream.sendChar(PgMessageType.STATEMENT); // Close (Statement)
pgStream.send(encodedStatementName); // statement to close
pgStream.sendChar(0); // statement name terminator
}
// sendOneQuery sends a single statement via the extended query protocol.
// Per the FE/BE docs this is essentially the same as how a simple query runs
// (except that it generates some extra acknowledgement messages, and we
// can send several queries before doing the Sync)
//
// Parse S_n from "query string with parameter placeholders"; skipped if already done previously
// or if oneshot
// Bind C_n from S_n plus parameters (or from unnamed statement for oneshot queries)
// Describe C_n; skipped if caller doesn't want metadata
// Execute C_n with maxRows limit; maxRows = 1 if caller doesn't want results
// (above repeats once per call to sendOneQuery)
// Sync (sent by caller)
//
private void sendOneQuery(SimpleQuery query, SimpleParameterList params, int maxRows,
int fetchSize, int flags) throws IOException {
boolean asSimple = (flags & QueryExecutor.QUERY_EXECUTE_AS_SIMPLE) != 0;
if (asSimple) {
assert (flags & QueryExecutor.QUERY_DESCRIBE_ONLY) == 0
: "Simple mode does not support describe requests. sql = " + query.getNativeSql()
+ ", flags = " + flags;
sendSimpleQuery(query, params);
return;
}
assert !query.getNativeQuery().multiStatement
: "Queries that might contain ; must be executed with QueryExecutor.QUERY_EXECUTE_AS_SIMPLE mode. "
+ "Given query is " + query.getNativeSql();
// Per https://www.postgresql.org/docs/current/static/protocol-flow.html#PROTOCOL-FLOW-EXT-QUERY
// A Bind message can use the unnamed prepared statement to create a named portal.
// If the Bind is successful, an Execute message can reference that named portal until either
// the end of the current transaction
// or the named portal is explicitly destroyed
boolean noResults = (flags & QueryExecutor.QUERY_NO_RESULTS) != 0;
boolean noMeta = (flags & QueryExecutor.QUERY_NO_METADATA) != 0;
boolean describeOnly = (flags & QueryExecutor.QUERY_DESCRIBE_ONLY) != 0;
// extended queries always use a portal
// the usePortal flag controls whether or not we use a *named* portal
boolean usePortal = (flags & QueryExecutor.QUERY_FORWARD_CURSOR) != 0 && !noResults && !noMeta
&& fetchSize > 0 && !describeOnly;
boolean oneShot = (flags & QueryExecutor.QUERY_ONESHOT) != 0;
boolean noBinaryTransfer = (flags & QUERY_NO_BINARY_TRANSFER) != 0;
boolean forceDescribePortal = (flags & QUERY_FORCE_DESCRIBE_PORTAL) != 0;
// Work out how many rows to fetch in this pass.
int rows;
if (noResults) {
rows = 1; // We're discarding any results anyway, so limit data transfer to a minimum
} else if (!usePortal) {
rows = maxRows; // Not using a portal -- fetchSize is irrelevant
} else if (maxRows != 0 && fetchSize > maxRows) {
// fetchSize > maxRows, use maxRows (nb: fetchSize cannot be 0 if usePortal == true)
rows = maxRows;
} else {
rows = fetchSize; // maxRows > fetchSize
}
sendParse(query, params, oneShot);
// Must do this after sendParse to pick up any changes to the
// query's state.
//
boolean queryHasUnknown = query.hasUnresolvedTypes();
boolean paramsHasUnknown = params.hasUnresolvedTypes();
boolean describeStatement = describeOnly
|| (!oneShot && paramsHasUnknown && queryHasUnknown && !query.isStatementDescribed());
if (!describeStatement && paramsHasUnknown && !queryHasUnknown) {
int[] queryOIDs = castNonNull(query.getPrepareTypes());
int[] paramOIDs = params.getTypeOIDs();
for (int i = 0; i < paramOIDs.length; i++) {
// Only supply type information when there isn't any
// already, don't arbitrarily overwrite user supplied
// type information.
if (paramOIDs[i] == Oid.UNSPECIFIED) {
params.setResolvedType(i + 1, queryOIDs[i]);
}
}
}
if (describeStatement) {
sendDescribeStatement(query, params, describeOnly);
if (describeOnly) {
return;
}
}
// Construct a new portal if needed.
Portal portal = null;
if (usePortal) {
String portalName = "C_" + (nextUniqueID++);
portal = new Portal(query, portalName);
}
sendBind(query, params, portal, noBinaryTransfer);
// A statement describe will also output a RowDescription,
// so don't reissue it here if we've already done so.
//
if (!noMeta && !describeStatement) {
/*
* don't send describe if we already have cached the row description from previous executions
*
* XXX Clearing the fields / unpreparing the query (in sendParse) is incorrect, see bug #267.
* We might clear the cached fields in a later execution of this query if the bind parameter
* types change, but we're assuming here that they'll still be valid when we come to process
* the results of this query, so we don't send a new describe here. We re-describe after the
* fields are cleared, but the result of that gets processed after processing the results from
* earlier executions that we didn't describe because we didn't think we had to.
*
* To work around this, force a Describe at each execution in batches where this can be a
     * problem. It won't cause more round trips so the performance impact is low, and it'll ensure
     * that the field information is available when we decode the results. This is undeniably a
* hack, but there aren't many good alternatives.
*/
if (!query.isPortalDescribed() || forceDescribePortal) {
sendDescribePortal(query, portal);
}
}
sendExecute(query, portal, rows);
}
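  // Illustrative sketch (not driver code) of how a caller drives one round of the extended
  // protocol with the helpers above; the real entry points add locking, batching and error
  // handling:
  //
  //   sendOneQuery(query, params, maxRows, fetchSize, flags); // Parse/Bind/Describe/Execute
  //   sendSync();                                             // ask the backend to flush results
  //   processResults(handler, flags);                         // read BE messages until ReadyForQuery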
private void sendSimpleQuery(SimpleQuery query, SimpleParameterList params) throws IOException {
String nativeSql = query.toString(
params,
SqlSerializationContext.of(getStandardConformingStrings(), false));
LOGGER.log(Level.FINEST, " FE=> SimpleQuery(query=\"{0}\")", nativeSql);
Encoding encoding = pgStream.getEncoding();
byte[] encoded = encoding.encode(nativeSql);
pgStream.sendChar(PgMessageType.QUERY_REQUEST);
pgStream.sendInteger4(encoded.length + 4 + 1);
pgStream.send(encoded);
pgStream.sendChar(0);
pgStream.flush();
pendingExecuteQueue.add(new ExecuteRequest(query, null, true));
pendingDescribePortalQueue.add(query);
}
//
// Garbage collection of parsed statements.
//
// When a statement is successfully parsed, registerParsedQuery is called.
// This creates a PhantomReference referring to the "owner" of the statement
// (the originating Query object) and inserts that reference as a key in
// parsedQueryMap. The values of parsedQueryMap are the corresponding allocated
// statement names. The originating Query object also holds a reference to the
// PhantomReference.
//
// When the owning Query object is closed, it enqueues and clears the associated
// PhantomReference.
//
// If the owning Query object becomes unreachable (see java.lang.ref javadoc) before
// being closed, the corresponding PhantomReference is enqueued on
// parsedQueryCleanupQueue. In the Sun JVM, phantom references are only enqueued
// when a GC occurs, so this is not necessarily prompt but should eventually happen.
//
// Periodically (currently, just before query execution), the parsedQueryCleanupQueue
// is polled. For each enqueued PhantomReference we find, we remove the corresponding
// entry from parsedQueryMap, obtaining the name of the underlying statement in the
// process. Then we send a message to the backend to deallocate that statement.
//
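  //
  // Illustrative sketch (standalone example, not driver code; names are hypothetical) of the
  // phantom-reference pattern described above:
  //
  //   ReferenceQueue<Object> queue = new ReferenceQueue<>();
  //   Map<PhantomReference<Object>, String> names = new HashMap<>();
  //   Object owner = new Object();
  //   PhantomReference<Object> ref = new PhantomReference<>(owner, queue);
  //   names.put(ref, "S_1");
  //   owner = null;                       // the owner becomes unreachable
  //   // ...after some later GC the reference is enqueued...
  //   Reference<?> dead;
  //   while ((dead = queue.poll()) != null) {
  //     String statementName = names.remove(dead); // "S_1", now safe to close/deallocate
  //   }
  //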
  private final HashMap<PhantomReference<SimpleQuery>, String> parsedQueryMap =
      new HashMap<>();
  private final ReferenceQueue<SimpleQuery> parsedQueryCleanupQueue =
      new ReferenceQueue<>();
private void registerParsedQuery(SimpleQuery query, String statementName) {
if (statementName == null) {
return;
}
    PhantomReference<SimpleQuery> cleanupRef =
new PhantomReference<>(query, parsedQueryCleanupQueue);
parsedQueryMap.put(cleanupRef, statementName);
query.setCleanupRef(cleanupRef);
}
private void processDeadParsedQueries() throws IOException {
    Reference<? extends SimpleQuery> deadQuery;
while ((deadQuery = parsedQueryCleanupQueue.poll()) != null) {
String statementName = castNonNull(parsedQueryMap.remove(deadQuery));
sendCloseStatement(statementName);
deadQuery.clear();
}
}
//
// Essentially the same strategy is used for the cleanup of portals.
// Note that each Portal holds a reference to the corresponding Query
// that generated it, so the Query won't be collected (and the statement
// closed) until all the Portals are, too. This is required by the mechanics
// of the backend protocol: when a statement is closed, all dependent portals
// are also closed.
//
  private final HashMap<PhantomReference<Portal>, String> openPortalMap =
      new HashMap<>();
  private final ReferenceQueue<Portal> openPortalCleanupQueue = new ReferenceQueue<>();
private static final Portal UNNAMED_PORTAL = new Portal(null, "unnamed");
private void registerOpenPortal(Portal portal) {
if (portal == UNNAMED_PORTAL) {
return; // Using the unnamed portal.
}
String portalName = portal.getPortalName();
    PhantomReference<Portal> cleanupRef =
new PhantomReference<>(portal, openPortalCleanupQueue);
openPortalMap.put(cleanupRef, portalName);
portal.setCleanupRef(cleanupRef);
}
private void processDeadPortals() throws IOException {
    Reference<? extends Portal> deadPortal;
while ((deadPortal = openPortalCleanupQueue.poll()) != null) {
String portalName = castNonNull(openPortalMap.remove(deadPortal));
sendClosePortal(portalName);
deadPortal.clear();
}
}
protected void processResults(ResultHandler handler, int flags) throws IOException {
processResults(handler, flags, false);
}
protected void processResults(ResultHandler handler, int flags, boolean adaptiveFetch)
throws IOException {
boolean noResults = (flags & QueryExecutor.QUERY_NO_RESULTS) != 0;
boolean bothRowsAndStatus = (flags & QueryExecutor.QUERY_BOTH_ROWS_AND_STATUS) != 0;
    List<Tuple> tuples = null;
int c;
boolean endQuery = false;
// At the end of a command execution we have the CommandComplete
// message to tell us we're done, but with a describeOnly command
// we have no real flag to let us know we're done. We've got to
// look for the next RowDescription or NoData message and return
// from there.
boolean doneAfterRowDescNoData = false;
while (!endQuery) {
c = pgStream.receiveChar();
switch (c) {
case 'A': // Asynchronous Notify
receiveAsyncNotify();
break;
case PgMessageType.PARSE_COMPLETE_RESPONSE: // Parse Complete (response to Parse)
pgStream.receiveInteger4(); // len, discarded
SimpleQuery parsedQuery = pendingParseQueue.removeFirst();
String parsedStatementName = parsedQuery.getStatementName();
LOGGER.log(Level.FINEST, " <=BE ParseComplete [{0}]", parsedStatementName);
break;
case PgMessageType.PARAMETER_DESCRIPTION_RESPONSE: {
pgStream.receiveInteger4(); // len, discarded
LOGGER.log(Level.FINEST, " <=BE ParameterDescription");
DescribeRequest describeData = pendingDescribeStatementQueue.getFirst();
SimpleQuery query = describeData.query;
SimpleParameterList params = describeData.parameterList;
boolean describeOnly = describeData.describeOnly;
// This might differ from query.getStatementName if the query was re-prepared
String origStatementName = describeData.statementName;
int numParams = pgStream.receiveInteger2();
for (int i = 1; i <= numParams; i++) {
int typeOid = pgStream.receiveInteger4();
params.setResolvedType(i, typeOid);
}
// Since we can issue multiple Parse and DescribeStatement
// messages in a single network trip, we need to make
// sure the describe results we requested are still
// applicable to the latest parsed query.
//
if ((origStatementName == null && query.getStatementName() == null)
|| (origStatementName != null
&& origStatementName.equals(query.getStatementName()))) {
query.setPrepareTypes(params.getTypeOIDs());
}
if (describeOnly) {
doneAfterRowDescNoData = true;
} else {
pendingDescribeStatementQueue.removeFirst();
}
break;
}
case PgMessageType.BIND_COMPLETE_RESPONSE: // (response to Bind)
pgStream.receiveInteger4(); // len, discarded
Portal boundPortal = pendingBindQueue.removeFirst();
LOGGER.log(Level.FINEST, " <=BE BindComplete [{0}]", boundPortal);
registerOpenPortal(boundPortal);
break;
case PgMessageType.CLOSE_COMPLETE_RESPONSE: // response to Close
pgStream.receiveInteger4(); // len, discarded
LOGGER.log(Level.FINEST, " <=BE CloseComplete");
break;
case PgMessageType.NO_DATA_RESPONSE: // response to Describe
pgStream.receiveInteger4(); // len, discarded
LOGGER.log(Level.FINEST, " <=BE NoData");
pendingDescribePortalQueue.removeFirst();
if (doneAfterRowDescNoData) {
DescribeRequest describeData = pendingDescribeStatementQueue.removeFirst();
SimpleQuery currentQuery = describeData.query;
Field[] fields = currentQuery.getFields();
if (fields != null) { // There was a resultset.
tuples = new ArrayList<>();
handler.handleResultRows(currentQuery, fields, tuples, null);
tuples = null;
}
}
break;
case PgMessageType.PORTAL_SUSPENDED_RESPONSE: { // end of Execute
// nb: this appears *instead* of CommandStatus.
// Must be a SELECT if we suspended, so don't worry about it.
pgStream.receiveInteger4(); // len, discarded
LOGGER.log(Level.FINEST, " <=BE PortalSuspended");
ExecuteRequest executeData = pendingExecuteQueue.removeFirst();
SimpleQuery currentQuery = executeData.query;
Portal currentPortal = executeData.portal;
if (currentPortal != null) {
// Existence of portal defines if query was using fetching.
adaptiveFetchCache
.updateQueryFetchSize(adaptiveFetch, currentQuery, pgStream.getMaxRowSizeBytes());
}
pgStream.clearMaxRowSizeBytes();
Field[] fields = currentQuery.getFields();
if (fields != null && tuples == null) {
          // When no results are expected, pretend an empty resultset was returned
          // Not sure if new ArrayList can always be replaced with emptyList
          tuples = noResults ? Collections.emptyList() : new ArrayList<Tuple>();
}
if (fields != null && tuples != null) {
handler.handleResultRows(currentQuery, fields, tuples, currentPortal);
}
tuples = null;
break;
}
case PgMessageType.COMMAND_COMPLETE_RESPONSE: { // end of Execute
// Handle status.
String status = receiveCommandStatus();
if (isFlushCacheOnDeallocate()
&& (status.startsWith("DEALLOCATE ALL") || status.startsWith("DISCARD ALL"))) {
deallocateEpoch++;
}
doneAfterRowDescNoData = false;
ExecuteRequest executeData = castNonNull(pendingExecuteQueue.peekFirst());
SimpleQuery currentQuery = executeData.query;
Portal currentPortal = executeData.portal;
if (currentPortal != null) {
// Existence of portal defines if query was using fetching.
// Command executed, adaptive fetch size can be removed for this query, max row size can be cleared
adaptiveFetchCache.removeQuery(adaptiveFetch, currentQuery);
// Update to change fetch size for other fetch portals of this query
adaptiveFetchCache
.updateQueryFetchSize(adaptiveFetch, currentQuery, pgStream.getMaxRowSizeBytes());
}
pgStream.clearMaxRowSizeBytes();
if (status.startsWith("SET")) {
String nativeSql = currentQuery.getNativeQuery().nativeSql;
// Scan only the first 1024 characters to
// avoid big overhead for long queries.
if (nativeSql.lastIndexOf("search_path", 1024) != -1
&& !nativeSql.equals(lastSetSearchPathQuery)) {
// Search path was changed, invalidate prepared statement cache
lastSetSearchPathQuery = nativeSql;
deallocateEpoch++;
}
}
if (!executeData.asSimple) {
pendingExecuteQueue.removeFirst();
} else {
// For simple 'Q' queries, executeQueue is cleared via ReadyForQuery message
}
// we want to make sure we do not add any results from these queries to the result set
if (currentQuery == autoSaveQuery
|| currentQuery == releaseAutoSave) {
// ignore "SAVEPOINT" or RELEASE SAVEPOINT status from autosave query
break;
}
Field[] fields = currentQuery.getFields();
if (fields != null && tuples == null) {
          // When no results are expected, pretend an empty resultset was returned
          // Not sure if new ArrayList can always be replaced with emptyList
          tuples = noResults ? Collections.emptyList() : new ArrayList<Tuple>();
}
// If we received tuples we must know the structure of the
// resultset, otherwise we won't be able to fetch columns
// from it, etc, later.
if (fields == null && tuples != null) {
throw new IllegalStateException(
"Received resultset tuples, but no field structure for them");
}
if (fields != null && tuples != null) {
// There was a resultset.
handler.handleResultRows(currentQuery, fields, tuples, null);
tuples = null;
if (bothRowsAndStatus) {
interpretCommandStatus(status, handler);
}
} else {
interpretCommandStatus(status, handler);
}
if (executeData.asSimple) {
// Simple queries might return several resultsets, thus we clear
// fields, so queries like "select 1;update; select2" will properly
// identify that "update" did not return any results
currentQuery.setFields(null);
}
if (currentPortal != null) {
currentPortal.close();
}
break;
}
case PgMessageType.DATA_ROW_RESPONSE: // Data Transfer (ongoing Execute response)
Tuple tuple = null;
try {
tuple = pgStream.receiveTupleV3();
} catch (OutOfMemoryError oome) {
if (!noResults) {
handler.handleError(
new PSQLException(GT.tr("Ran out of memory retrieving query results."),
PSQLState.OUT_OF_MEMORY, oome));
}
} catch (SQLException e) {
handler.handleError(e);
}
if (!noResults) {
if (tuples == null) {
tuples = new ArrayList<>();
}
if (tuple != null) {
tuples.add(tuple);
}
}
if (LOGGER.isLoggable(Level.FINEST)) {
int length;
if (tuple == null) {
length = -1;
} else {
length = tuple.length();
}
LOGGER.log(Level.FINEST, " <=BE DataRow(len={0})", length);
}
break;
case PgMessageType.ERROR_RESPONSE:
// Error Response (response to pretty much everything; backend then skips until Sync)
SQLException error = receiveErrorResponse();
handler.handleError(error);
if (willHealViaReparse(error)) {
// prepared statement ... is not valid kind of error
// Technically speaking, the error is unexpected, thus we invalidate other
// server-prepared statements just in case.
deallocateEpoch++;
if (LOGGER.isLoggable(Level.FINEST)) {
LOGGER.log(Level.FINEST, " FE: received {0}, will invalidate statements. deallocateEpoch is now {1}",
new Object[]{error.getSQLState(), deallocateEpoch});
}
}
// keep processing
break;
case PgMessageType.EMPTY_QUERY_RESPONSE: { // Empty Query (end of Execute)
pgStream.receiveInteger4();
LOGGER.log(Level.FINEST, " <=BE EmptyQuery");
ExecuteRequest executeData = pendingExecuteQueue.removeFirst();
Portal currentPortal = executeData.portal;
handler.handleCommandStatus("EMPTY", 0, 0);
if (currentPortal != null) {
currentPortal.close();
}
break;
}
case PgMessageType.NOTICE_RESPONSE:
SQLWarning warning = receiveNoticeResponse();
handler.handleWarning(warning);
break;
case PgMessageType.PARAMETER_STATUS_RESPONSE:
try {
receiveParameterStatus();
} catch (SQLException e) {
handler.handleError(e);
endQuery = true;
}
break;
case PgMessageType.ROW_DESCRIPTION_RESPONSE: // response to Describe
Field[] fields = receiveFields();
tuples = new ArrayList<>();
SimpleQuery query = castNonNull(pendingDescribePortalQueue.peekFirst());
if (!pendingExecuteQueue.isEmpty()
&& !castNonNull(pendingExecuteQueue.peekFirst()).asSimple) {
pendingDescribePortalQueue.removeFirst();
}
query.setFields(fields);
if (doneAfterRowDescNoData) {
DescribeRequest describeData = pendingDescribeStatementQueue.removeFirst();
SimpleQuery currentQuery = describeData.query;
currentQuery.setFields(fields);
handler.handleResultRows(currentQuery, fields, tuples, null);
tuples = null;
}
break;
case PgMessageType.READY_FOR_QUERY_RESPONSE: // eventual response to Sync
receiveRFQ();
if (!pendingExecuteQueue.isEmpty()
&& castNonNull(pendingExecuteQueue.peekFirst()).asSimple) {
tuples = null;
pgStream.clearResultBufferCount();
ExecuteRequest executeRequest = pendingExecuteQueue.removeFirst();
// Simple queries might return several resultsets, thus we clear
// fields, so queries like "select 1;update; select2" will properly
// identify that "update" did not return any results
executeRequest.query.setFields(null);
pendingDescribePortalQueue.removeFirst();
if (!pendingExecuteQueue.isEmpty()) {
if (getTransactionState() == TransactionState.IDLE) {
handler.secureProgress();
}
// process subsequent results (e.g. for cases like batched execution of simple 'Q' queries)
break;
}
}
endQuery = true;
// Reset the statement name of Parses that failed.
while (!pendingParseQueue.isEmpty()) {
SimpleQuery failedQuery = pendingParseQueue.removeFirst();
failedQuery.unprepare();
}
pendingParseQueue.clear(); // No more ParseComplete messages expected.
// Pending "describe" requests might be there in case of error
// If that is the case, reset "described" status, so the statement is properly
// described on next execution
while (!pendingDescribeStatementQueue.isEmpty()) {
DescribeRequest request = pendingDescribeStatementQueue.removeFirst();
LOGGER.log(Level.FINEST, " FE marking setStatementDescribed(false) for query {0}", request.query);
request.query.setStatementDescribed(false);
}
while (!pendingDescribePortalQueue.isEmpty()) {
SimpleQuery describePortalQuery = pendingDescribePortalQueue.removeFirst();
LOGGER.log(Level.FINEST, " FE marking setPortalDescribed(false) for query {0}", describePortalQuery);
describePortalQuery.setPortalDescribed(false);
}
pendingBindQueue.clear(); // No more BindComplete messages expected.
pendingExecuteQueue.clear(); // No more query executions expected.
break;
case PgMessageType.COPY_IN_RESPONSE:
LOGGER.log(Level.FINEST, " <=BE CopyInResponse");
LOGGER.log(Level.FINEST, " FE=> CopyFail");
// COPY sub-protocol is not implemented yet
// We'll send a CopyFail message for COPY FROM STDIN so that
// server does not wait for the data.
byte[] buf = "COPY commands are only supported using the CopyManager API.".getBytes(StandardCharsets.US_ASCII);
pgStream.sendChar(PgMessageType.COPY_FAIL);
pgStream.sendInteger4(buf.length + 4 + 1);
pgStream.send(buf);
pgStream.sendChar(0);
pgStream.flush();
sendSync(); // send sync message
skipMessage(); // skip the response message
break;
case PgMessageType.COPY_OUT_RESPONSE:
LOGGER.log(Level.FINEST, " <=BE CopyOutResponse");
skipMessage();
// In case of CopyOutResponse, we cannot abort data transfer,
// so just throw an error and ignore CopyData messages
handler.handleError(
new PSQLException(GT.tr("COPY commands are only supported using the CopyManager API."),
PSQLState.NOT_IMPLEMENTED));
break;
case PgMessageType.COPY_DONE:
skipMessage();
LOGGER.log(Level.FINEST, " <=BE CopyDone");
break;
case PgMessageType.COPY_DATA:
skipMessage();
LOGGER.log(Level.FINEST, " <=BE CopyData");
break;
default:
throw new IOException("Unexpected packet type: " + c);
}
}
}
/**
* Ignore the response message by reading the message length and skipping over those bytes in the
* communication stream.
*/
private void skipMessage() throws IOException {
int len = pgStream.receiveInteger4();
    assert len >= 4 : "Length from skip message must be at least 4";
    // Skip len - 4 bytes (the length includes the 4 bytes of the length field itself)
pgStream.skip(len - 4);
}
@Override
public void fetch(ResultCursor cursor, ResultHandler handler, int fetchSize,
boolean adaptiveFetch) throws SQLException {
try (ResourceLock ignore = lock.obtain()) {
waitOnLock();
final Portal portal = (Portal) cursor;
// Insert a ResultHandler that turns bare command statuses into empty datasets
// (if the fetch returns no rows, we see just a CommandStatus..)
final ResultHandler delegateHandler = handler;
final SimpleQuery query = castNonNull(portal.getQuery());
handler = new ResultHandlerDelegate(delegateHandler) {
@Override
public void handleCommandStatus(String status, long updateCount, long insertOID) {
handleResultRows(query, NO_FIELDS, new ArrayList<>(), null);
}
};
// Now actually run it.
try {
processDeadParsedQueries();
processDeadPortals();
sendExecute(query, portal, fetchSize);
sendSync();
processResults(handler, 0, adaptiveFetch);
estimatedReceiveBufferBytes = 0;
} catch (IOException e) {
abort();
handler.handleError(
new PSQLException(GT.tr("An I/O error occurred while sending to the backend."),
PSQLState.CONNECTION_FAILURE, e));
}
handler.handleCompletion();
}
}
@Override
public int getAdaptiveFetchSize(boolean adaptiveFetch, ResultCursor cursor) {
if (cursor instanceof Portal) {
Query query = ((Portal) cursor).getQuery();
if (Objects.nonNull(query)) {
return adaptiveFetchCache
.getFetchSizeForQuery(adaptiveFetch, query);
}
}
return -1;
}
@Override
public void setAdaptiveFetch(boolean adaptiveFetch) {
this.adaptiveFetchCache.setAdaptiveFetch(adaptiveFetch);
}
@Override
public boolean getAdaptiveFetch() {
return this.adaptiveFetchCache.getAdaptiveFetch();
}
@Override
public void addQueryToAdaptiveFetchCache(boolean adaptiveFetch, ResultCursor cursor) {
if (cursor instanceof Portal) {
Query query = ((Portal) cursor).getQuery();
if (Objects.nonNull(query)) {
adaptiveFetchCache.addNewQuery(adaptiveFetch, query);
}
}
}
@Override
public void removeQueryFromAdaptiveFetchCache(boolean adaptiveFetch, ResultCursor cursor) {
if (cursor instanceof Portal) {
Query query = ((Portal) cursor).getQuery();
if (Objects.nonNull(query)) {
adaptiveFetchCache.removeQuery(adaptiveFetch, query);
}
}
}
/*
* Receive the field descriptions from the back end.
*/
private Field[] receiveFields() throws IOException {
pgStream.receiveInteger4(); // MESSAGE SIZE
int size = pgStream.receiveInteger2();
Field[] fields = new Field[size];
if (LOGGER.isLoggable(Level.FINEST)) {
LOGGER.log(Level.FINEST, " <=BE RowDescription({0})", size);
}
for (int i = 0; i < fields.length; i++) {
String columnLabel = pgStream.receiveCanonicalString();
int tableOid = pgStream.receiveInteger4();
short positionInTable = (short) pgStream.receiveInteger2();
int typeOid = pgStream.receiveInteger4();
int typeLength = pgStream.receiveInteger2();
int typeModifier = pgStream.receiveInteger4();
int formatType = pgStream.receiveInteger2();
fields[i] = new Field(columnLabel,
typeOid, typeLength, typeModifier, tableOid, positionInTable);
fields[i].setFormat(formatType);
LOGGER.log(Level.FINEST, " {0}", fields[i]);
}
return fields;
}
private void receiveAsyncNotify() throws IOException {
int len = pgStream.receiveInteger4(); // MESSAGE SIZE
    assert len > 4 : "Length for AsyncNotify must be greater than 4";
int pid = pgStream.receiveInteger4();
String msg = pgStream.receiveCanonicalString();
String param = pgStream.receiveString();
addNotification(new Notification(msg, pid, param));
if (LOGGER.isLoggable(Level.FINEST)) {
LOGGER.log(Level.FINEST, " <=BE AsyncNotify({0},{1},{2})", new Object[]{pid, msg, param});
}
}
private SQLException receiveErrorResponse() throws IOException {
// it's possible to get more than one error message for a query
// see libpq comments wrt backend closing a connection
// so, append messages to a string buffer and keep processing
// check at the bottom to see if we need to throw an exception
int elen = pgStream.receiveInteger4();
assert elen > 4 : "Error response length must be greater than 4";
EncodingPredictor.DecodeResult totalMessage = pgStream.receiveErrorString(elen - 4);
ServerErrorMessage errorMsg = new ServerErrorMessage(totalMessage);
if (LOGGER.isLoggable(Level.FINEST)) {
LOGGER.log(Level.FINEST, " <=BE ErrorMessage({0})", errorMsg.toString());
}
PSQLException error = new PSQLException(errorMsg, this.logServerErrorDetail);
if (transactionFailCause == null) {
transactionFailCause = error;
} else {
error.initCause(transactionFailCause);
}
return error;
}
private SQLWarning receiveNoticeResponse() throws IOException {
int nlen = pgStream.receiveInteger4();
assert nlen > 4 : "Notice Response length must be greater than 4";
ServerErrorMessage warnMsg = new ServerErrorMessage(pgStream.receiveString(nlen - 4));
if (LOGGER.isLoggable(Level.FINEST)) {
LOGGER.log(Level.FINEST, " <=BE NoticeResponse({0})", warnMsg.toString());
}
return new PSQLWarning(warnMsg);
}
private String receiveCommandStatus() throws IOException {
// TODO: better handle the msg len
int len = pgStream.receiveInteger4();
// read len -5 bytes (-4 for len and -1 for trailing \0)
String status = pgStream.receiveString(len - 5);
// now read and discard the trailing \0
pgStream.receiveChar(); // Receive(1) would allocate new byte[1], so avoid it
LOGGER.log(Level.FINEST, " <=BE CommandStatus({0})", status);
return status;
}
private void interpretCommandStatus(String status, ResultHandler handler) {
try {
commandCompleteParser.parse(status);
} catch (SQLException e) {
handler.handleError(e);
return;
}
long oid = commandCompleteParser.getOid();
long count = commandCompleteParser.getRows();
handler.handleCommandStatus(status, count, oid);
}
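  // Illustrative note (not driver code): CommandComplete tags look like "SELECT 5", "UPDATE 7"
  // or "INSERT 0 1"; commandCompleteParser extracts the trailing row count (and, for single-row
  // inserts into tables with OIDs on old servers, the inserted row's OID) so handleCommandStatus
  // can report an update count to the JDBC layer.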
private void receiveRFQ() throws IOException {
if (pgStream.receiveInteger4() != 5) {
throw new IOException("unexpected length of ReadyForQuery message");
}
char tStatus = (char) pgStream.receiveChar();
if (LOGGER.isLoggable(Level.FINEST)) {
LOGGER.log(Level.FINEST, " <=BE ReadyForQuery({0})", tStatus);
}
// Update connection state.
switch (tStatus) {
case 'I':
transactionFailCause = null;
setTransactionState(TransactionState.IDLE);
break;
case 'T':
transactionFailCause = null;
setTransactionState(TransactionState.OPEN);
break;
case 'E':
setTransactionState(TransactionState.FAILED);
break;
default:
throw new IOException(
"unexpected transaction state in ReadyForQuery message: " + (int) tStatus);
}
}
@Override
@SuppressWarnings("deprecation")
protected void sendCloseMessage() throws IOException {
closeAction.sendCloseMessage(pgStream);
}
public void readStartupMessages() throws IOException, SQLException {
for (int i = 0; i < 1000; i++) {
int beresp = pgStream.receiveChar();
switch (beresp) {
case PgMessageType.READY_FOR_QUERY_RESPONSE:
receiveRFQ();
// Ready For Query; we're done.
return;
case PgMessageType.BACKEND_KEY_DATA_RESPONSE:
// BackendKeyData
int msgLen = pgStream.receiveInteger4();
int pid = pgStream.receiveInteger4();
int keyLen = msgLen - 8;
byte[] ckey;
if (ProtocolVersion.v3_0.equals(protocolVersion)) {
if (keyLen != 4) {
throw new PSQLException(GT.tr("Protocol error. Cancel Key should be 4 bytes for protocol version {0},"
+ " but received {1} bytes. Session setup failed.", ProtocolVersion.v3_0, keyLen),
PSQLState.PROTOCOL_VIOLATION);
}
}
if (ProtocolVersion.v3_2.equals(protocolVersion)) {
if (keyLen > 256) {
throw new PSQLException(GT.tr(
"Protocol error. Cancel Key cannot be greater than 256 for protocol version {0},"
+ " but received {1} bytes. Session setup failed.",
ProtocolVersion.v3_2, keyLen),
PSQLState.PROTOCOL_VIOLATION);
}
}
ckey = pgStream.receive(keyLen);
if (LOGGER.isLoggable(Level.FINEST)) {
LOGGER.log(Level.FINEST, " <=BE BackendKeyData(pid={0},ckey={1})", new Object[]{pid, ckey});
}
setBackendKeyData(pid, ckey);
break;
case PgMessageType.ERROR_RESPONSE:
// Error
throw receiveErrorResponse();
case PgMessageType.NOTICE_RESPONSE:
// Warning
addWarning(receiveNoticeResponse());
break;
case PgMessageType.PARAMETER_STATUS_RESPONSE:
// ParameterStatus
receiveParameterStatus();
break;
default:
if (LOGGER.isLoggable(Level.FINEST)) {
LOGGER.log(Level.FINEST, " invalid message type={0}", (char) beresp);
}
throw new PSQLException(GT.tr("Protocol error. Session setup failed."),
PSQLState.PROTOCOL_VIOLATION);
}
}
throw new PSQLException(GT.tr("Protocol error. Session setup failed."),
PSQLState.PROTOCOL_VIOLATION);
}
public void receiveParameterStatus() throws IOException, SQLException {
// ParameterStatus
pgStream.receiveInteger4(); // MESSAGE SIZE
final String name = pgStream.receiveCanonicalStringIfPresent();
final String value = pgStream.receiveCanonicalStringIfPresent();
if (LOGGER.isLoggable(Level.FINEST)) {
LOGGER.log(Level.FINEST, " <=BE ParameterStatus({0} = {1})", new Object[]{name, value});
}
// if the name is empty, there is nothing to do
if (name.isEmpty()) {
return;
}
// Update client-visible parameter status map for getParameterStatuses()
onParameterStatus(name, value);
if ("client_encoding".equals(name)) {
if (allowEncodingChanges) {
if (!"UTF8".equalsIgnoreCase(value) && !"UTF-8".equalsIgnoreCase(value)) {
LOGGER.log(Level.FINE,
"pgjdbc expects client_encoding to be UTF8 for proper operation. Actual encoding is {0}",
value);
}
pgStream.setEncoding(Encoding.getDatabaseEncoding(value));
} else if (!"UTF8".equalsIgnoreCase(value) && !"UTF-8".equalsIgnoreCase(value)) {
close(); // we're screwed now; we can't trust any subsequent string.
throw new PSQLException(GT.tr(
"The server''s client_encoding parameter was changed to {0}. The JDBC driver requires client_encoding to be UTF8 for correct operation.",
value), PSQLState.CONNECTION_FAILURE);
}
}
if ("DateStyle".equals(name) && !value.startsWith("ISO")
&& !value.toUpperCase(Locale.ROOT).startsWith("ISO")) {
close(); // we're screwed now; we can't trust any subsequent date.
throw new PSQLException(GT.tr(
"The server''s DateStyle parameter was changed to {0}. The JDBC driver requires DateStyle to begin with ISO for correct operation.",
value), PSQLState.CONNECTION_FAILURE);
}
if ("standard_conforming_strings".equals(name)) {
if ("on".equals(value)) {
setStandardConformingStrings(true);
} else if ("off".equals(value)) {
setStandardConformingStrings(false);
} else {
close();
// we're screwed now; we don't know how to escape string literals
throw new PSQLException(GT.tr(
"The server''s standard_conforming_strings parameter was reported as {0}. The JDBC driver expected on or off.",
value), PSQLState.CONNECTION_FAILURE);
}
return;
}
if ("TimeZone".equals(name)) {
setTimeZone(TimestampUtils.parseBackendTimeZone(value));
} else if ("application_name".equals(name)) {
setApplicationName(value);
} else if ("server_version_num".equals(name)) {
setServerVersionNum(Integer.parseInt(value));
} else if ("server_version".equals(name)) {
setServerVersion(value);
} else if ("integer_datetimes".equals(name)) {
if ("on".equals(value)) {
setIntegerDateTimes(true);
} else if ("off".equals(value)) {
setIntegerDateTimes(false);
} else {
throw new PSQLException(GT.tr("Protocol error. Session setup failed."),
PSQLState.PROTOCOL_VIOLATION);
}
}
}
public void setTimeZone(TimeZone timeZone) {
this.timeZone = timeZone;
}
@Override
public /* @Nullable */ TimeZone getTimeZone() {
return timeZone;
}
public void setApplicationName(String applicationName) {
this.applicationName = applicationName;
}
@Override
public String getApplicationName() {
if (applicationName == null) {
return "";
}
return applicationName;
}
@Override
public ReplicationProtocol getReplicationProtocol() {
return replicationProtocol;
}
@Override
public void addBinaryReceiveOid(int oid) {
synchronized (useBinaryReceiveForOids) {
useBinaryReceiveForOids.add(oid);
}
}
@Override
public void removeBinaryReceiveOid(int oid) {
synchronized (useBinaryReceiveForOids) {
useBinaryReceiveForOids.remove(oid);
}
}
@Override
@SuppressWarnings("deprecation")
  public Set<? extends Integer> getBinaryReceiveOids() {
// copy the values to prevent ConcurrentModificationException when reader accesses the elements
synchronized (useBinaryReceiveForOids) {
return useBinaryReceiveForOids.toMutableSet();
}
}
@Override
public boolean useBinaryForReceive(int oid) {
synchronized (useBinaryReceiveForOids) {
return useBinaryReceiveForOids.contains(oid);
}
}
@Override
  public void setBinaryReceiveOids(Set<Integer> oids) {
synchronized (useBinaryReceiveForOids) {
useBinaryReceiveForOids.clear();
useBinaryReceiveForOids.addAll(oids);
}
}
@Override
public void addBinarySendOid(int oid) {
synchronized (useBinarySendForOids) {
useBinarySendForOids.add(oid);
}
}
@Override
public void removeBinarySendOid(int oid) {
synchronized (useBinarySendForOids) {
useBinarySendForOids.remove(oid);
}
}
@Override
@SuppressWarnings("deprecation")
  public Set<? extends Integer> getBinarySendOids() {
// copy the values to prevent ConcurrentModificationException when reader accesses the elements
synchronized (useBinarySendForOids) {
return useBinarySendForOids.toMutableSet();
}
}
@Override
public boolean useBinaryForSend(int oid) {
synchronized (useBinarySendForOids) {
return useBinarySendForOids.contains(oid);
}
}
@Override
  public void setBinarySendOids(Set<Integer> oids) {
synchronized (useBinarySendForOids) {
useBinarySendForOids.clear();
useBinarySendForOids.addAll(oids);
}
}
private void setIntegerDateTimes(boolean state) {
integerDateTimes = state;
}
@Override
public boolean getIntegerDateTimes() {
return integerDateTimes;
}
  private final Deque<SimpleQuery> pendingParseQueue = new ArrayDeque<>();
  private final Deque<Portal> pendingBindQueue = new ArrayDeque<>();
  private final Deque<ExecuteRequest> pendingExecuteQueue = new ArrayDeque<>();
  private final Deque<DescribeRequest> pendingDescribeStatementQueue =
      new ArrayDeque<>();
  private final Deque<SimpleQuery> pendingDescribePortalQueue = new ArrayDeque<>();
private long nextUniqueID = 1;
private final boolean allowEncodingChanges;
private final boolean cleanupSavePoints;
/**
* The estimated server response size since we last consumed the input stream from the server, in
* bytes.
*
* Starts at zero, reset by every Sync message. Mainly used for batches.
*
* Used to avoid deadlocks, see MAX_BUFFERED_RECV_BYTES.
*/
private int estimatedReceiveBufferBytes;
private final SimpleQuery beginTransactionQuery =
new SimpleQuery(
new NativeQuery("BEGIN", null, false, SqlCommand.BLANK),
null, false);
private final SimpleQuery beginReadOnlyTransactionQuery =
new SimpleQuery(
new NativeQuery("BEGIN READ ONLY", null, false, SqlCommand.BLANK),
null, false);
private final SimpleQuery emptyQuery =
new SimpleQuery(
new NativeQuery("", null, false,
SqlCommand.createStatementTypeInfo(SqlCommandType.BLANK)
), null, false);
private final SimpleQuery autoSaveQuery =
new SimpleQuery(
new NativeQuery("SAVEPOINT PGJDBC_AUTOSAVE", null, false, SqlCommand.BLANK),
null, false);
private final SimpleQuery releaseAutoSave =
new SimpleQuery(
new NativeQuery("RELEASE SAVEPOINT PGJDBC_AUTOSAVE", null, false, SqlCommand.BLANK),
null, false);
/*
In autosave mode we use this query to roll back errored transactions
*/
private final SimpleQuery restoreToAutoSave =
new SimpleQuery(
new NativeQuery("ROLLBACK TO SAVEPOINT PGJDBC_AUTOSAVE", null, false, SqlCommand.BLANK),
null, false);
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/v3/ScramAuthenticator.java 0100664 0000000 0000000 00000017574 00000250600 027600 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2024, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.core.v3;
import org.postgresql.core.PGStream;
import org.postgresql.core.PgMessageType;
import org.postgresql.util.GT;
import org.postgresql.util.PSQLException;
import org.postgresql.util.PSQLState;
import com.ongres.scram.client.ScramClient;
import com.ongres.scram.common.ClientFinalMessage;
import com.ongres.scram.common.ClientFirstMessage;
import com.ongres.scram.common.StringPreparation;
import com.ongres.scram.common.exception.ScramException;
import com.ongres.scram.common.util.TlsServerEndpoint;
import java.io.IOException;
import java.net.Socket;
import java.nio.charset.StandardCharsets;
import java.security.cert.Certificate;
import java.security.cert.CertificateEncodingException;
import java.security.cert.X509Certificate;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.net.ssl.SSLPeerUnverifiedException;
import javax.net.ssl.SSLSession;
import javax.net.ssl.SSLSocket;
final class ScramAuthenticator {
private static final Logger LOGGER = Logger.getLogger(ScramAuthenticator.class.getName());
private final PGStream pgStream;
private final ScramClient scramClient;
ScramAuthenticator(char[] password, PGStream pgStream, Properties info) throws PSQLException {
this.pgStream = pgStream;
this.scramClient = initializeScramClient(password, pgStream, info);
}
private static ScramClient initializeScramClient(char[] password, PGStream stream, Properties info) throws PSQLException {
try {
final ChannelBindingOption channelBinding = ChannelBindingOption.of(info);
LOGGER.log(Level.FINEST, "channelBinding( {0} )", channelBinding);
final byte[] cbindData = getChannelBindingData(stream, channelBinding);
      final List<String> advertisedMechanisms = advertisedMechanisms(stream, channelBinding);
ScramClient client = ScramClient.builder()
.advertisedMechanisms(advertisedMechanisms)
.username("*") // username is ignored by server, startup message is used instead
.password(password)
.channelBinding(TlsServerEndpoint.TLS_SERVER_END_POINT, cbindData)
.stringPreparation(StringPreparation.POSTGRESQL_PREPARATION)
.build();
LOGGER.log(Level.FINEST, () -> " Using SCRAM mechanism: "
+ client.getScramMechanism().getName());
return client;
} catch (IllegalArgumentException | IOException e) {
throw new PSQLException(
GT.tr("Invalid SCRAM client initialization", e),
PSQLState.CONNECTION_REJECTED);
}
}
  private static List<String> advertisedMechanisms(PGStream stream, ChannelBindingOption channelBinding)
throws PSQLException, IOException {
    List<String> mechanisms = new ArrayList<>();
do {
mechanisms.add(stream.receiveString());
} while (stream.peekChar() != 0);
int c = stream.receiveChar();
assert c == 0;
if (mechanisms.isEmpty()) {
throw new PSQLException(
GT.tr("Received AuthenticationSASL message with 0 mechanisms!"),
PSQLState.CONNECTION_REJECTED);
}
LOGGER.log(Level.FINEST, " <=BE AuthenticationSASL( {0} )", mechanisms);
if (channelBinding == ChannelBindingOption.REQUIRE
&& !mechanisms.stream().anyMatch(m -> m.endsWith("-PLUS"))) {
throw new PSQLException(
GT.tr("Channel Binding is required, but server did not offer an "
+ "authentication method that supports channel binding"),
PSQLState.CONNECTION_REJECTED);
}
return mechanisms;
}
private static byte[] getChannelBindingData(PGStream stream, ChannelBindingOption channelBinding)
throws PSQLException {
if (channelBinding == ChannelBindingOption.DISABLE) {
return new byte[0];
}
Socket socket = stream.getSocket();
if (socket instanceof SSLSocket) {
SSLSession session = ((SSLSocket) socket).getSession();
try {
Certificate[] certificates = session.getPeerCertificates();
if (certificates != null && certificates.length > 0) {
Certificate peerCert = certificates[0]; // First certificate is the peer's certificate
if (peerCert instanceof X509Certificate) {
X509Certificate cert = (X509Certificate) peerCert;
return TlsServerEndpoint.getChannelBindingData(cert);
}
}
} catch (CertificateEncodingException | SSLPeerUnverifiedException e) {
LOGGER.log(Level.FINEST, "Error extracting channel binding data", e);
if (channelBinding == ChannelBindingOption.REQUIRE) {
throw new PSQLException(
GT.tr("Channel Binding is required, but could not extract "
+ "channel binding data from SSL session"),
PSQLState.CONNECTION_REJECTED);
}
}
} else if (channelBinding == ChannelBindingOption.REQUIRE) {
throw new PSQLException(
GT.tr("Channel Binding is required, but SSL is not in use"),
PSQLState.CONNECTION_REJECTED);
}
return new byte[0];
}
void handleAuthenticationSASL() throws IOException {
ClientFirstMessage clientFirstMessage = scramClient.clientFirstMessage();
LOGGER.log(Level.FINEST, " FE=> SASLInitialResponse( {0} )", clientFirstMessage);
String scramMechanismName = scramClient.getScramMechanism().getName();
final byte[] scramMechanismNameBytes = scramMechanismName.getBytes(StandardCharsets.UTF_8);
final byte[] clientFirstMessageBytes =
clientFirstMessage.toString().getBytes(StandardCharsets.UTF_8);
sendAuthenticationMessage(
(scramMechanismNameBytes.length + 1) + 4 + clientFirstMessageBytes.length,
pgStream -> {
pgStream.send(scramMechanismNameBytes);
pgStream.sendChar(0); // List terminated in '\0'
pgStream.sendInteger4(clientFirstMessageBytes.length);
pgStream.send(clientFirstMessageBytes);
});
}
void handleAuthenticationSASLContinue(int length) throws IOException, PSQLException {
String receivedServerFirstMessage = pgStream.receiveString(length);
LOGGER.log(Level.FINEST, " <=BE AuthenticationSASLContinue( {0} )", receivedServerFirstMessage);
try {
scramClient.serverFirstMessage(receivedServerFirstMessage);
} catch (ScramException | IllegalStateException | IllegalArgumentException e) {
throw new PSQLException(
GT.tr("SCRAM authentication failed: {0}", e.getMessage()),
PSQLState.CONNECTION_REJECTED,
e);
}
ClientFinalMessage clientFinalMessage = scramClient.clientFinalMessage();
LOGGER.log(Level.FINEST, " FE=> SASLResponse( {0} )", clientFinalMessage);
final byte[] clientFinalMessageBytes =
clientFinalMessage.toString().getBytes(StandardCharsets.UTF_8);
sendAuthenticationMessage(
clientFinalMessageBytes.length,
pgStream -> pgStream.send(clientFinalMessageBytes)
);
}
void handleAuthenticationSASLFinal(int length) throws IOException, PSQLException {
String serverFinalMessage = pgStream.receiveString(length);
LOGGER.log(Level.FINEST, " <=BE AuthenticationSASLFinal( {0} )", serverFinalMessage);
try {
scramClient.serverFinalMessage(serverFinalMessage);
} catch (ScramException | IllegalStateException | IllegalArgumentException e) {
throw new PSQLException(
GT.tr("SCRAM authentication failed: {0}", e.getMessage()),
PSQLState.CONNECTION_REJECTED,
e);
}
}
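  // Illustrative note (not driver code): the SCRAM exchange driven by the methods above is
  //   <=BE AuthenticationSASL(mechanisms)           -> handleAuthenticationSASL()         FE=> SASLInitialResponse
  //   <=BE AuthenticationSASLContinue(server-first) -> handleAuthenticationSASLContinue() FE=> SASLResponse
  //   <=BE AuthenticationSASLFinal(server-final)    -> handleAuthenticationSASLFinal()    (verifies server proof)
  // Dispatching between these steps is done by the connection startup code elsewhere.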
private interface BodySender {
void sendBody(PGStream pgStream) throws IOException;
}
private void sendAuthenticationMessage(int bodyLength, BodySender bodySender)
throws IOException {
pgStream.sendChar(PgMessageType.SASL_INITIAL_RESPONSE);
pgStream.sendInteger4(Integer.BYTES + bodyLength);
bodySender.sendBody(pgStream);
pgStream.flush();
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/v3/SimpleParameterList.java 0100664 0000000 0000000 00000045230 00000250600 027714 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2004, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
// Copyright (c) 2004, Open Cloud Limited.
package org.postgresql.core.v3;
import org.postgresql.core.Oid;
import org.postgresql.core.PGStream;
import org.postgresql.core.ParameterList;
import org.postgresql.core.Utils;
import org.postgresql.geometric.PGbox;
import org.postgresql.geometric.PGpoint;
import org.postgresql.jdbc.UUIDArrayAssistant;
import org.postgresql.util.ByteConverter;
import org.postgresql.util.ByteStreamWriter;
import org.postgresql.util.GT;
import org.postgresql.util.PGbytea;
import org.postgresql.util.PSQLException;
import org.postgresql.util.PSQLState;
import org.postgresql.util.StreamWrapper;
// import org.checkerframework.checker.index.qual.NonNegative;
// import org.checkerframework.checker.index.qual.Positive;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.sql.SQLException;
import java.util.Arrays;
/**
* Parameter list for a single-statement V3 query.
*
* @author Oliver Jowett (oliver@opencloud.com)
*/
class SimpleParameterList implements V3ParameterList {
private static final byte IN = 1;
private static final byte OUT = 2;
private static final byte INOUT = IN | OUT;
private static final byte TEXT = 0;
private static final byte BINARY = 4;
SimpleParameterList(int paramCount, /* @Nullable */ TypeTransferModeRegistry transferModeRegistry) {
this.paramValues = new Object[paramCount];
this.paramTypes = new int[paramCount];
this.encoded = new byte[paramCount][];
this.flags = new byte[paramCount];
this.transferModeRegistry = transferModeRegistry;
}
@Override
public void registerOutParameter(int index, int sqlType) throws SQLException {
if (index < 1 || index > paramValues.length) {
throw new PSQLException(
GT.tr("The column index is out of range: {0}, number of columns: {1}.",
index, paramValues.length),
PSQLState.INVALID_PARAMETER_VALUE);
}
flags[index - 1] |= OUT;
}
private void bind(int index, Object value, int oid, byte binary) throws SQLException {
if (index < 1 || index > paramValues.length) {
throw new PSQLException(
GT.tr("The column index is out of range: {0}, number of columns: {1}.",
index, paramValues.length),
PSQLState.INVALID_PARAMETER_VALUE);
}
--index;
encoded[index] = null;
paramValues[index] = value;
flags[index] = (byte) (direction(index) | IN | binary);
// If we are setting something to an UNSPECIFIED NULL, don't overwrite
// our existing type for it. We don't need the correct type info to
// send this value, and we don't want to overwrite and require a
// reparse.
if (oid == Oid.UNSPECIFIED && paramTypes[index] != Oid.UNSPECIFIED && value == NULL_OBJECT) {
return;
}
paramTypes[index] = oid;
pos = index + 1;
}
@Override
public /* @NonNegative */ int getParameterCount() {
return paramValues.length;
}
@Override
public /* @NonNegative */ int getOutParameterCount() {
int count = 0;
for (int i = 0; i < paramTypes.length; i++) {
if ((direction(i) & OUT) == OUT) {
count++;
}
}
// Every function has at least one output.
if (count == 0) {
count = 1;
}
return count;
}
@Override
public /* @NonNegative */ int getInParameterCount() {
int count = 0;
for (int i = 0; i < paramTypes.length; i++) {
if (direction(i) != OUT) {
count++;
}
}
return count;
}
@Override
public void setIntParameter(/* @Positive */ int index, int value) throws SQLException {
byte[] data = new byte[4];
ByteConverter.int4(data, 0, value);
bind(index, data, Oid.INT4, BINARY);
}
@Override
public void setLiteralParameter(/* @Positive */ int index, String value, int oid) throws SQLException {
bind(index, value, oid, TEXT);
}
@Override
public void setStringParameter(/* @Positive */ int index, String value, int oid) throws SQLException {
bind(index, value, oid, TEXT);
}
@Override
public void setBinaryParameter(/* @Positive */ int index, byte[] value, int oid) throws SQLException {
bind(index, value, oid, BINARY);
}
@Override
public void setBytea(/* @Positive */ int index, byte[] data, int offset, /* @NonNegative */ int length) throws SQLException {
bind(index, new StreamWrapper(data, offset, length), Oid.BYTEA, BINARY);
}
@Override
public void setBytea(/* @Positive */ int index, InputStream stream, /* @NonNegative */ int length) throws SQLException {
bind(index, new StreamWrapper(stream, length), Oid.BYTEA, BINARY);
}
@Override
public void setBytea(/* @Positive */ int index, InputStream stream) throws SQLException {
bind(index, new StreamWrapper(stream), Oid.BYTEA, BINARY);
}
@Override
public void setBytea(/* @Positive */ int index, ByteStreamWriter writer) throws SQLException {
bind(index, writer, Oid.BYTEA, BINARY);
}
@Override
public void setText(/* @Positive */ int index, InputStream stream) throws SQLException {
bind(index, new StreamWrapper(stream), Oid.TEXT, TEXT);
}
@Override
public void setNull(/* @Positive */ int index, int oid) throws SQLException {
byte binaryTransfer = TEXT;
if (transferModeRegistry != null && transferModeRegistry.useBinaryForReceive(oid)) {
binaryTransfer = BINARY;
}
bind(index, NULL_OBJECT, oid, binaryTransfer);
}
/**
* Escapes a given text value as a literal, wraps it in single quotes, casts it to the
* given data type, and finally wraps the whole thing in parentheses.
*
* For example, "123" and "int4" become "('123'::int4)".
*
* The additional parentheses are added to ensure that the surrounding text where the
* parameter value is inserted does not modify the interpretation of the value.
*
* For example if our input SQL is: SELECT ?b
*
* Using a parameter value of '{}' and type of json we'd get:
*
*
* test=# SELECT ('{}'::json)b;
* b
* ----
* {}
*
*
* But without the parentheses the result changes:
*
*
* test=# SELECT '{}'::jsonb;
* jsonb
* -------
* {}
*
**/
private static String quoteAndCast(String text, /* @Nullable */ String type, boolean standardConformingStrings) {
StringBuilder sb = new StringBuilder((text.length() + 10) / 10 * 11); // Add 10% for escaping.
sb.append("('");
try {
Utils.escapeLiteral(sb, text, standardConformingStrings);
} catch (SQLException e) {
// This should only happen if we have an embedded null
// and there's not much we can do if we do hit one.
//
// To force a server side failure, we deliberately include
// a zero byte character in the literal to force the server
// to reject the command.
sb.append('\u0000');
}
sb.append("'");
if (type != null) {
sb.append("::");
sb.append(type);
}
sb.append(")");
return sb.toString();
}
private static <E extends Throwable> RuntimeException sneakyThrow(Throwable e) throws E {
throw (E) e;
}
@Override
public String toString(/* @Positive */ int index, boolean standardConformingStrings) {
return toString(index, SqlSerializationContext.of(standardConformingStrings, true));
}
@Override
public String toString(/* @Positive */ int index, SqlSerializationContext context) {
--index;
Object paramValue = paramValues[index];
if (paramValue == null) {
return "?";
} else if (paramValue == NULL_OBJECT) {
return "(NULL)";
}
String textValue;
String type;
if (paramTypes[index] == Oid.BYTEA) {
try {
return PGbytea.toPGLiteral(paramValue, context);
} catch (Throwable e) {
Throwable cause = e;
if (!(cause instanceof IOException)) {
// This is for compatibility with the similar handling in QueryExecutorImpl
cause = new IOException("Error writing bytes to stream", e);
}
throw sneakyThrow(
new PSQLException(
GT.tr("Unable to convert bytea parameter at position {0} to literal",
index),
PSQLState.INVALID_PARAMETER_VALUE,
cause));
}
}
if ((flags[index] & BINARY) == BINARY) {
// handle some of the numeric types
switch (paramTypes[index]) {
case Oid.INT2:
short s = ByteConverter.int2((byte[]) paramValue, 0);
textValue = Short.toString(s);
type = "int2";
break;
case Oid.INT4:
int i = ByteConverter.int4((byte[]) paramValue, 0);
textValue = Integer.toString(i);
type = "int4";
break;
case Oid.INT8:
long l = ByteConverter.int8((byte[]) paramValue, 0);
textValue = Long.toString(l);
type = "int8";
break;
case Oid.FLOAT4:
float f = ByteConverter.float4((byte[]) paramValue, 0);
if (Float.isNaN(f)) {
return "('NaN'::real)";
}
textValue = Float.toString(f);
type = "real";
break;
case Oid.FLOAT8:
double d = ByteConverter.float8((byte[]) paramValue, 0);
if (Double.isNaN(d)) {
return "('NaN'::double precision)";
}
textValue = Double.toString(d);
type = "double precision";
break;
case Oid.NUMERIC:
Number n = ByteConverter.numeric((byte[]) paramValue);
if (n instanceof Double) {
assert ((Double) n).isNaN();
return "('NaN'::numeric)";
}
textValue = n.toString();
type = "numeric";
break;
case Oid.UUID:
textValue =
new UUIDArrayAssistant().buildElement((byte[]) paramValue, 0, 16).toString();
type = "uuid";
break;
case Oid.POINT:
PGpoint pgPoint = new PGpoint();
pgPoint.setByteValue((byte[]) paramValue, 0);
textValue = pgPoint.toString();
type = "point";
break;
case Oid.BOX:
PGbox pgBox = new PGbox();
pgBox.setByteValue((byte[]) paramValue, 0);
textValue = pgBox.toString();
type = "box";
break;
default:
return "?";
}
} else {
textValue = paramValue.toString();
switch (paramTypes[index]) {
case Oid.INT2:
type = "int2";
break;
case Oid.INT4:
type = "int4";
break;
case Oid.INT8:
type = "int8";
break;
case Oid.FLOAT4:
type = "real";
break;
case Oid.FLOAT8:
type = "double precision";
break;
case Oid.TIMESTAMP:
type = "timestamp";
break;
case Oid.TIMESTAMPTZ:
type = "timestamp with time zone";
break;
case Oid.TIME:
type = "time";
break;
case Oid.TIMETZ:
type = "time with time zone";
break;
case Oid.DATE:
type = "date";
break;
case Oid.INTERVAL:
type = "interval";
break;
case Oid.NUMERIC:
type = "numeric";
break;
case Oid.UUID:
type = "uuid";
break;
case Oid.BOOL:
type = "boolean";
break;
case Oid.BOX:
type = "box";
break;
case Oid.POINT:
type = "point";
break;
default:
type = null;
}
}
return quoteAndCast(textValue, type, context.getStandardConformingStrings());
}
@Override
public void checkAllParametersSet() throws SQLException {
for (int i = 0; i < paramTypes.length; i++) {
if (direction(i) != OUT && paramValues[i] == null) {
throw new PSQLException(GT.tr("No value specified for parameter {0}.", i + 1),
PSQLState.INVALID_PARAMETER_VALUE);
}
}
}
@Override
public void convertFunctionOutParameters() {
for (int i = 0; i < paramTypes.length; i++) {
if (direction(i) == OUT) {
paramTypes[i] = Oid.VOID;
paramValues[i] = NULL_OBJECT;
}
}
}
//
// bytea helper
//
private static void streamBytea(PGStream pgStream, StreamWrapper wrapper) throws IOException {
byte[] rawData = wrapper.getBytes();
if (rawData != null) {
pgStream.send(rawData, wrapper.getOffset(), wrapper.getLength());
return;
}
pgStream.sendStream(wrapper.getStream(), wrapper.getLength());
}
//
// byte stream writer support
//
private static void streamBytea(PGStream pgStream, ByteStreamWriter writer) throws IOException {
pgStream.send(writer);
}
@Override
public int[] getTypeOIDs() {
return paramTypes;
}
//
// Package-private V3 accessors
//
int getTypeOID(/* @Positive */ int index) {
return paramTypes[index - 1];
}
boolean hasUnresolvedTypes() {
for (int paramType : paramTypes) {
if (paramType == Oid.UNSPECIFIED) {
return true;
}
}
return false;
}
void setResolvedType(/* @Positive */ int index, int oid) {
// only allow overwriting an unknown value or VOID value
if (paramTypes[index - 1] == Oid.UNSPECIFIED || paramTypes[index - 1] == Oid.VOID) {
paramTypes[index - 1] = oid;
} else if (paramTypes[index - 1] != oid) {
throw new IllegalArgumentException("Can't change resolved type for param: " + index + " from "
+ paramTypes[index - 1] + " to " + oid);
}
}
boolean isNull(/* @Positive */ int index) {
return paramValues[index - 1] == NULL_OBJECT;
}
boolean isBinary(/* @Positive */ int index) {
return (flags[index - 1] & BINARY) != 0;
}
private byte direction(/* @Positive */ int index) {
return (byte) (flags[index] & INOUT);
}
int getV3Length(/* @Positive */ int index) {
--index;
// Null?
Object value = paramValues[index];
if (value == null || value == NULL_OBJECT) {
throw new IllegalArgumentException("can't getV3Length() on a null parameter");
}
// Directly encoded?
if (value instanceof byte[]) {
return ((byte[]) value).length;
}
// Binary-format bytea?
if (value instanceof StreamWrapper) {
return ((StreamWrapper) value).getLength();
}
// Binary-format bytea?
if (value instanceof ByteStreamWriter) {
return ((ByteStreamWriter) value).getLength();
}
// Already encoded?
byte[] encoded = this.encoded[index];
if (encoded == null) {
// Encode value and compute actual length using UTF-8.
this.encoded[index] = encoded = value.toString().getBytes(StandardCharsets.UTF_8);
}
return encoded.length;
}
void writeV3Value(/* @Positive */ int index, PGStream pgStream) throws IOException {
--index;
// Null?
Object paramValue = paramValues[index];
if (paramValue == null || paramValue == NULL_OBJECT) {
throw new IllegalArgumentException("can't writeV3Value() on a null parameter");
}
// Directly encoded?
if (paramValue instanceof byte[]) {
pgStream.send((byte[]) paramValue);
return;
}
// Binary-format bytea?
if (paramValue instanceof StreamWrapper) {
try (StreamWrapper streamWrapper = (StreamWrapper) paramValue) {
streamBytea(pgStream, streamWrapper);
}
return;
}
// Streamed bytea?
if (paramValue instanceof ByteStreamWriter) {
streamBytea(pgStream, (ByteStreamWriter) paramValue);
return;
}
// Encoded string.
if (encoded[index] == null) {
encoded[index] = ((String) paramValue).getBytes(StandardCharsets.UTF_8);
}
pgStream.send(encoded[index]);
}
@Override
public ParameterList copy() {
SimpleParameterList newCopy = new SimpleParameterList(paramValues.length, transferModeRegistry);
System.arraycopy(paramValues, 0, newCopy.paramValues, 0, paramValues.length);
System.arraycopy(paramTypes, 0, newCopy.paramTypes, 0, paramTypes.length);
System.arraycopy(flags, 0, newCopy.flags, 0, flags.length);
newCopy.pos = pos;
return newCopy;
}
@Override
public void clear() {
Arrays.fill(paramValues, null);
Arrays.fill(paramTypes, 0);
Arrays.fill(encoded, null);
Arrays.fill(flags, (byte) 0);
pos = 0;
}
@Override
public SimpleParameterList /* @Nullable */ [] getSubparams() {
return null;
}
@Override
public /* @Nullable */ Object[] getValues() {
return paramValues;
}
@Override
public int[] getParamTypes() {
return paramTypes;
}
@Override
public byte[] getFlags() {
return flags;
}
@Override
public byte[] /* @Nullable */ [] getEncoding() {
return encoded;
}
@Override
public void appendAll(ParameterList list) throws SQLException {
if (list instanceof SimpleParameterList) {
/* only v3.SimpleParameterList is compatible with this type
we need to create copies of our parameters, otherwise the values can be changed */
SimpleParameterList spl = (SimpleParameterList) list;
int inParamCount = spl.getInParameterCount();
if ((pos + inParamCount) > paramValues.length) {
throw new PSQLException(
GT.tr("Added parameters index out of range: {0}, number of columns: {1}.",
(pos + inParamCount), paramValues.length),
PSQLState.INVALID_PARAMETER_VALUE);
}
System.arraycopy(spl.getValues(), 0, this.paramValues, pos, inParamCount);
System.arraycopy(spl.getParamTypes(), 0, this.paramTypes, pos, inParamCount);
System.arraycopy(spl.getFlags(), 0, this.flags, pos, inParamCount);
System.arraycopy(spl.getEncoding(), 0, this.encoded, pos, inParamCount);
pos += inParamCount;
}
}
/**
* Useful implementation of toString.
* @return String representation of the list values
*/
@Override
public String toString() {
StringBuilder ts = new StringBuilder("<[");
if (paramValues.length > 0) {
ts.append(toString(1, true));
for (int c = 2; c <= paramValues.length; c++) {
ts.append(" ,").append(toString(c, true));
}
}
ts.append("]>");
return ts.toString();
}
private final /* @Nullable */ Object[] paramValues;
private final int[] paramTypes;
private final byte[] flags;
private final byte[] /* @Nullable */ [] encoded;
private final /* @Nullable */ TypeTransferModeRegistry transferModeRegistry;
/**
* Marker object representing NULL; this distinguishes "parameter never set" from "parameter set
* to null".
*/
private static final Object NULL_OBJECT = new Object();
private int pos;
}
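// A minimal usage sketch (illustration only, not part of the driver): how a caller in this
// package might fill a two-parameter list and render it for logging. It assumes only the
// package-private constructor and the methods defined above; a null transferModeRegistry
// simply means NULLs are sent in text format.
class SimpleParameterListSketch {
  static String describeTwoParams() throws SQLException {
    SimpleParameterList params = new SimpleParameterList(2, null);
    params.setIntParameter(1, 42);                     // stored as binary int4
    params.setStringParameter(2, "it's", Oid.VARCHAR); // stored as text, escaped by quoteAndCast
    params.checkAllParametersSet();                    // would throw if a parameter were missing
    // toString(index, standardConformingStrings) yields e.g. "('42'::int4)" and "('it''s')"
    return params.toString(1, true) + ", " + params.toString(2, true);
  }
}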
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/v3/SimpleQuery.java 0100664 0000000 0000000 00000030011 00000250600 026234 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2004, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
// Copyright (c) 2004, Open Cloud Limited.
package org.postgresql.core.v3;
import org.postgresql.core.Field;
import org.postgresql.core.NativeQuery;
import org.postgresql.core.Oid;
import org.postgresql.core.ParameterList;
import org.postgresql.core.Query;
import org.postgresql.core.SqlCommand;
import org.postgresql.jdbc.PgResultSet;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.lang.ref.PhantomReference;
import java.nio.charset.StandardCharsets;
import java.util.BitSet;
import java.util.Map;
import java.util.logging.Level;
import java.util.logging.Logger;
/**
* V3 Query implementation for a single-statement query. This also holds the state of any associated
* server-side named statement. We use a PhantomReference managed by the QueryExecutor to handle
* statement cleanup.
*
* @author Oliver Jowett (oliver@opencloud.com)
*/
class SimpleQuery implements Query {
private static final Logger LOGGER = Logger.getLogger(SimpleQuery.class.getName());
SimpleQuery(SimpleQuery src) {
this(src.nativeQuery, src.transferModeRegistry, src.sanitiserDisabled);
}
SimpleQuery(NativeQuery query, /* @Nullable */ TypeTransferModeRegistry transferModeRegistry,
boolean sanitiserDisabled) {
this.nativeQuery = query;
this.transferModeRegistry = transferModeRegistry;
this.sanitiserDisabled = sanitiserDisabled;
}
@Override
public ParameterList createParameterList() {
if (nativeQuery.bindPositions.length == 0) {
return NO_PARAMETERS;
}
return new SimpleParameterList(getBindCount(), transferModeRegistry);
}
@Override
public String toString(/* @Nullable */ ParameterList parameters) {
return toString(parameters, DefaultSqlSerializationContext.STDSTR_IDEMPOTENT);
}
@Override
public String toString(/* @Nullable */ ParameterList parameters, SqlSerializationContext context) {
return nativeQuery.toString(parameters, context);
}
@Override
public String toString() {
return toString(null);
}
@Override
public void close() {
unprepare();
}
@Override
public SimpleQuery /* @Nullable */ [] getSubqueries() {
return null;
}
/**
* Return maximum size in bytes that each result row from this query may return. Mainly used for
* batches that return results.
*
* Results are cached until/unless the query is re-described.
*
* @return Max size of result data in bytes according to returned fields, 0 if no results, -1 if
* result is unbounded.
* @throws IllegalStateException if the query is not described
*/
public int getMaxResultRowSize() {
if (cachedMaxResultRowSize != null) {
return cachedMaxResultRowSize;
}
if (!this.statementDescribed) {
throw new IllegalStateException(
"Cannot estimate result row size on a statement that is not described");
}
int maxResultRowSize = 0;
if (fields != null) {
for (Field f : fields) {
final int fieldLength = f.getLength();
if (fieldLength < 1 || fieldLength >= 65535) {
/*
* Field length unknown or large; we can't make any safe estimates about the result size,
* so we have to fall back to sending queries individually.
*/
maxResultRowSize = -1;
break;
}
maxResultRowSize += fieldLength;
}
}
cachedMaxResultRowSize = maxResultRowSize;
return maxResultRowSize;
}
//
// Implementation guts
//
@Override
public String getNativeSql() {
return nativeQuery.nativeSql;
}
void setStatementName(String statementName, short deallocateEpoch) {
assert statementName != null : "statement name should not be null";
this.statementName = statementName;
this.encodedStatementName = statementName.getBytes(StandardCharsets.UTF_8);
this.deallocateEpoch = deallocateEpoch;
}
void setPrepareTypes(int[] paramTypes) {
// Remember which parameters were unspecified since the parameters will be overridden later by
// ParameterDescription message
for (int i = 0; i < paramTypes.length; i++) {
int paramType = paramTypes[i];
if (paramType == Oid.UNSPECIFIED) {
if (this.unspecifiedParams == null) {
this.unspecifiedParams = new BitSet();
}
this.unspecifiedParams.set(i);
}
}
// paramTypes is changed by "describe statement" response, so we clone the array
// However, we can reuse array if there is one
if (this.preparedTypes == null) {
this.preparedTypes = paramTypes.clone();
return;
}
System.arraycopy(paramTypes, 0, this.preparedTypes, 0, paramTypes.length);
}
int /* @Nullable */ [] getPrepareTypes() {
return preparedTypes;
}
/* @Nullable */ String getStatementName() {
return statementName;
}
boolean isPreparedFor(int[] paramTypes, short deallocateEpoch) {
if (statementName == null || preparedTypes == null) {
return false; // Not prepared.
}
if (this.deallocateEpoch != deallocateEpoch) {
return false;
}
assert paramTypes.length == preparedTypes.length
: String.format("paramTypes:%1$d preparedTypes:%2$d", paramTypes.length,
preparedTypes.length);
// Check for compatible types.
BitSet unspecified = this.unspecifiedParams;
for (int i = 0; i < paramTypes.length; i++) {
int paramType = paramTypes[i];
// Either paramType should match prepared type
// Or paramType==UNSPECIFIED and the prepare type was UNSPECIFIED
// Note: preparedTypes can be updated by "statement describe"
// 1) parse(name="S_01", sql="select ?::timestamp", types={UNSPECIFIED})
// 2) statement describe: bind 1 type is TIMESTAMP
// 3) SimpleQuery.preparedTypes is updated to TIMESTAMP
// ...
// 4.1) bind(name="S_01", ..., types={TIMESTAMP}) -> OK (since preparedTypes is equal to TIMESTAMP)
// 4.2) bind(name="S_01", ..., types={UNSPECIFIED}) -> OK (since the query was initially parsed with UNSPECIFIED)
// 4.3) bind(name="S_01", ..., types={DATE}) -> KO, unprepare and parse required
int preparedType = preparedTypes[i];
if (paramType != preparedType
&& (paramType != Oid.UNSPECIFIED
|| unspecified == null
|| !unspecified.get(i))) {
if (LOGGER.isLoggable(Level.FINER)) {
LOGGER.log(Level.FINER,
"Statement {0} does not match new parameter types. Will have to un-prepare it and parse once again."
+ " To avoid performance issues, use the same data type for the same bind position. Bind index (1-based) is {1},"
+ " preparedType was {2} (after describe {3}), current bind type is {4}",
new Object[]{statementName, i + 1,
Oid.toString(unspecified != null && unspecified.get(i) ? 0 : preparedType),
Oid.toString(preparedType), Oid.toString(paramType)});
}
return false;
}
}
return true;
}
boolean hasUnresolvedTypes() {
if (preparedTypes == null) {
return true;
}
return this.unspecifiedParams != null && !this.unspecifiedParams.isEmpty();
}
byte /* @Nullable */ [] getEncodedStatementName() {
return encodedStatementName;
}
/**
* Sets the fields that this query will return.
*
* @param fields The fields that this query will return.
*/
void setFields(Field /* @Nullable */ [] fields) {
this.fields = fields;
this.resultSetColumnNameIndexMap = null;
this.cachedMaxResultRowSize = null;
this.needUpdateFieldFormats = fields != null;
this.hasBinaryFields = false; // just in case
}
/**
* Returns the fields that this query will return. If the result set fields are not known returns
* null.
*
* @return the fields that this query will return.
*/
Field /* @Nullable */ [] getFields() {
return fields;
}
/**
* Returns true if the current query needs field formats to be adjusted as per connection
* configuration. Subsequent invocations would return {@code false}. The idea is to perform
* adjustments only once, not for each
* {@link QueryExecutorImpl#sendBind(SimpleQuery, SimpleParameterList, Portal, boolean)}.
*
* @return true if the current query needs field formats to be adjusted as per connection configuration
*/
boolean needUpdateFieldFormats() {
if (needUpdateFieldFormats) {
needUpdateFieldFormats = false;
return true;
}
return false;
}
public void resetNeedUpdateFieldFormats() {
needUpdateFieldFormats = fields != null;
}
public boolean hasBinaryFields() {
return hasBinaryFields;
}
public void setHasBinaryFields(boolean hasBinaryFields) {
this.hasBinaryFields = hasBinaryFields;
}
// Have we sent a Describe Portal message for this query yet?
boolean isPortalDescribed() {
return portalDescribed;
}
void setPortalDescribed(boolean portalDescribed) {
this.portalDescribed = portalDescribed;
this.cachedMaxResultRowSize = null;
}
// Have we sent a Describe Statement message for this query yet?
// Note that we might not have need to, so this may always be false.
@Override
public boolean isStatementDescribed() {
return statementDescribed;
}
void setStatementDescribed(boolean statementDescribed) {
this.statementDescribed = statementDescribed;
this.cachedMaxResultRowSize = null;
}
@Override
public boolean isEmpty() {
return getNativeSql().isEmpty();
}
void setCleanupRef(PhantomReference<?> cleanupRef) {
PhantomReference<?> oldCleanupRef = this.cleanupRef;
if (oldCleanupRef != null) {
oldCleanupRef.clear();
oldCleanupRef.enqueue();
}
this.cleanupRef = cleanupRef;
}
void unprepare() {
PhantomReference<?> cleanupRef = this.cleanupRef;
if (cleanupRef != null) {
cleanupRef.clear();
cleanupRef.enqueue();
this.cleanupRef = null;
}
if (this.unspecifiedParams != null) {
this.unspecifiedParams.clear();
}
statementName = null;
encodedStatementName = null;
fields = null;
this.resultSetColumnNameIndexMap = null;
portalDescribed = false;
statementDescribed = false;
cachedMaxResultRowSize = null;
}
@Override
public int getBatchSize() {
return 1;
}
NativeQuery getNativeQuery() {
return nativeQuery;
}
public final int getBindCount() {
return nativeQuery.bindPositions.length * getBatchSize();
}
private /* @Nullable */ Map<String, Integer> resultSetColumnNameIndexMap;
@Override
public /* @Nullable */ Map<String, Integer> getResultSetColumnNameIndexMap() {
Map<String, Integer> columnPositions = this.resultSetColumnNameIndexMap;
if (columnPositions == null && fields != null) {
columnPositions =
PgResultSet.createColumnNameIndexMap(fields, sanitiserDisabled);
if (statementName != null) {
// Cache column positions for server-prepared statements only
this.resultSetColumnNameIndexMap = columnPositions;
}
}
return columnPositions;
}
@Override
public SqlCommand getSqlCommand() {
return nativeQuery.getCommand();
}
private final NativeQuery nativeQuery;
private final /* @Nullable */ TypeTransferModeRegistry transferModeRegistry;
private /* @Nullable */ String statementName;
private byte /* @Nullable */ [] encodedStatementName;
/**
* The stored fields from previous execution or describe of a prepared statement. Always null for
* non-prepared statements.
*/
private Field /* @Nullable */ [] fields;
private boolean needUpdateFieldFormats;
private boolean hasBinaryFields;
private boolean portalDescribed;
private boolean statementDescribed;
private final boolean sanitiserDisabled;
private /* @Nullable */ PhantomReference<?> cleanupRef;
private int /* @Nullable */ [] preparedTypes;
private /* @Nullable */ BitSet unspecifiedParams;
private short deallocateEpoch;
private /* @Nullable */ Integer cachedMaxResultRowSize;
static final SimpleParameterList NO_PARAMETERS = new SimpleParameterList(0, null);
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/v3/SqlSerializationContext.java 0100664 0000000 0000000 00000003147 00000250600 030631 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2025, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.core.v3;
/**
* Specifies the properties required to convert SQL to String.
*/
public interface SqlSerializationContext {
/**
* Returns a SqlSerializationContext instance with the given parameters.
* @param standardConformingStrings true when string literals should be standard conforming
* @param idempotent true when idempotent conversion is needed
* @return a SqlSerializationContext instance with the given parameters
*/
static SqlSerializationContext of(boolean standardConformingStrings, boolean idempotent) {
if (standardConformingStrings) {
return idempotent
? DefaultSqlSerializationContext.STDSTR_IDEMPOTENT
: DefaultSqlSerializationContext.STDSTR_NONIDEMPOTENT;
}
return idempotent
? DefaultSqlSerializationContext.NONSTDSTR_IDEMPOTENT
: DefaultSqlSerializationContext.NONSTDSTR_NONIDEMPOTENT;
}
/**
* Returns true if string literals should use {@code standard_conforming_strings=on} encoding.
* @return true if string literals should use {@code standard_conforming_strings=on} encoding
*/
boolean getStandardConformingStrings();
/**
* Returns true if the SQL to String conversion should be idempotent.
* For instance, if a query parameter comes from an {@link java.io.InputStream},
* then the stream could be skipped when writing SQL with idempotent mode.
* @return true if the SQL to String conversion should be idempotent
*/
boolean getIdempotent();
}
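// A minimal sketch (illustration only, not part of the driver): the factory above hands back one
// of four shared instances, so callers can compare contexts by identity rather than allocating
// a new context per call.
class SqlSerializationContextSketch {
  static boolean factoryReturnsSharedInstances() {
    SqlSerializationContext ctx = SqlSerializationContext.of(true, true);
    return ctx == DefaultSqlSerializationContext.STDSTR_IDEMPOTENT
        && ctx.getStandardConformingStrings()
        && ctx.getIdempotent();
  }
}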
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/v3/TypeTransferModeRegistry.java 0100664 0000000 0000000 00000001150 00000250600 030743 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2003, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.core.v3;
public interface TypeTransferModeRegistry {
/**
* Returns whether the given oid should be sent in binary format.
* @param oid type oid
* @return true if given oid should be sent in binary format
*/
boolean useBinaryForSend(int oid);
/**
* Returns whether the given oid should be received in binary format.
* @param oid type oid
* @return true if given oid should be received in binary format
*/
boolean useBinaryForReceive(int oid);
}
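// A minimal sketch (illustration only, not part of the driver): a fixed registry that requests
// binary transfer for a hypothetical set of OIDs. The real driver derives these sets from
// connection settings rather than hard-coding them.
class FixedTypeTransferModeRegistry implements TypeTransferModeRegistry {
  private final java.util.Set<Integer> binaryOids = new java.util.HashSet<>(
      java.util.Arrays.asList(org.postgresql.core.Oid.INT4, org.postgresql.core.Oid.INT8,
          org.postgresql.core.Oid.BYTEA));

  @Override
  public boolean useBinaryForSend(int oid) {
    return binaryOids.contains(oid);
  }

  @Override
  public boolean useBinaryForReceive(int oid) {
    return binaryOids.contains(oid);
  }
}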
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/v3/V3ParameterList.java 0100664 0000000 0000000 00000003566 00000250600 026761 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2004, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
// Copyright (c) 2004, Open Cloud Limited.
package org.postgresql.core.v3;
import org.postgresql.core.ParameterList;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.sql.SQLException;
/**
* Common interface for all V3 parameter list implementations.
*
* @author Oliver Jowett (oliver@opencloud.com)
*/
interface V3ParameterList extends ParameterList {
/**
* Ensure that all parameters in this list have been assigned values. Return silently if all is
* well, otherwise throw an appropriate exception.
*
* @throws SQLException if not all parameters are set.
*/
void checkAllParametersSet() throws SQLException;
/**
* Convert any function output parameters to the correct type (void) and set an ignorable value
* for it.
*/
void convertFunctionOutParameters();
/**
* Return a list of the SimpleParameterList objects that make up this parameter list. If this
* object is already a SimpleParameterList, returns null (avoids an extra array construction in
* the common case).
*
* @return an array of single-statement parameter lists, or null if this object is
* already a single-statement parameter list.
*/
SimpleParameterList /* @Nullable */ [] getSubparams();
/**
* Return the parameter type information.
* @return an array of {@link org.postgresql.core.Oid} type information
*/
int /* @Nullable */ [] getParamTypes();
/**
* Return the flags for each parameter.
* @return an array of bytes used to store flags.
*/
byte /* @Nullable */ [] getFlags();
/**
* Return the encoding for each parameter.
* @return nested byte array of bytes with encoding information.
*/
byte /* @Nullable */ [] /* @Nullable */ [] getEncoding();
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/v3/adaptivefetch/ 0040775 0000000 0000000 00000000000 00000250600 025731 5 ustar 00 0000000 0000000 ././@LongLink 0100644 0000000 0000000 00000000146 00000250600 011612 L ustar 0000000 0000000 postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/v3/adaptivefetch/AdaptiveFetchCache.java postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/v3/adaptivefetch/AdaptiveFetchCache.jav0100664 0000000 0000000 00000015706 00000250600 032074 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2020, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.core.v3.adaptivefetch;
import org.postgresql.PGProperty;
import org.postgresql.core.Query;
import java.sql.SQLException;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
/**
* The main purpose of this class is to handle the adaptive fetching process. Adaptive fetching is
* used to compute a fetch size that makes full use of the space defined by maxResultBuffer. The
* fetch size is computed by dividing the maxResultBuffer size by the largest row size observed so
* far. Each query has its own adaptive fetch size, and identical queries share it. If adaptive
* fetch is turned on, the first fetch uses defaultRowFetchSize and subsequent fetches of the
* resultSet use the computed adaptive fetch size. If adaptive fetch is turned on while fetching is
* already in progress, the first fetch made by the ResultSet uses defaultRowFetchSize and later
* fetches use the computed adaptive fetch size. The adaptiveFetch property requires the
* defaultRowFetchSize and maxResultBuffer properties in order to work.
*/
public class AdaptiveFetchCache {
private final Map<String, AdaptiveFetchCacheEntry> adaptiveFetchInfoMap;
private boolean adaptiveFetch;
private final int minimumAdaptiveFetchSize;
private int maximumAdaptiveFetchSize = -1;
private long maximumResultBufferSize = -1;
public AdaptiveFetchCache(long maximumResultBufferSize, Properties info)
throws SQLException {
this.adaptiveFetchInfoMap = new HashMap<>();
this.adaptiveFetch = PGProperty.ADAPTIVE_FETCH.getBoolean(info);
this.minimumAdaptiveFetchSize = PGProperty.ADAPTIVE_FETCH_MINIMUM.getInt(info);
this.maximumAdaptiveFetchSize = PGProperty.ADAPTIVE_FETCH_MAXIMUM.getInt(info);
this.maximumResultBufferSize = maximumResultBufferSize;
}
/**
* Add query to being cached and computing adaptive fetch size.
*
* @param adaptiveFetch state of adaptive fetch, which should be used during adding query
* @param query query to be cached
*/
public void addNewQuery(boolean adaptiveFetch, Query query) {
if (adaptiveFetch && maximumResultBufferSize != -1) {
String sql = query.getNativeSql().trim();
AdaptiveFetchCacheEntry adaptiveFetchCacheEntry = adaptiveFetchInfoMap.get(sql);
if (adaptiveFetchCacheEntry == null) {
adaptiveFetchCacheEntry = new AdaptiveFetchCacheEntry();
}
adaptiveFetchCacheEntry.incrementCounter();
adaptiveFetchInfoMap.put(sql, adaptiveFetchCacheEntry);
}
}
/**
* Update adaptive fetch size for given query.
*
* @param adaptiveFetch state of adaptive fetch, which should be used during updating fetch
* size for query
* @param query query to be updated
* @param maximumRowSizeBytes max row size used during updating information about adaptive fetch
* size for given query
*/
public void updateQueryFetchSize(boolean adaptiveFetch, Query query, int maximumRowSizeBytes) {
if (adaptiveFetch && maximumResultBufferSize != -1) {
String sql = query.getNativeSql().trim();
AdaptiveFetchCacheEntry adaptiveFetchCacheEntry = adaptiveFetchInfoMap.get(sql);
if (adaptiveFetchCacheEntry != null) {
int adaptiveMaximumRowSize = adaptiveFetchCacheEntry.getMaximumRowSizeBytes();
if (adaptiveMaximumRowSize < maximumRowSizeBytes && maximumRowSizeBytes > 0) {
int newFetchSize = (int) (maximumResultBufferSize / maximumRowSizeBytes);
newFetchSize = adjustFetchSize(newFetchSize);
adaptiveFetchCacheEntry.setMaximumRowSizeBytes(maximumRowSizeBytes);
adaptiveFetchCacheEntry.setSize(newFetchSize);
adaptiveFetchInfoMap.put(sql, adaptiveFetchCacheEntry);
}
}
}
}
/**
* Get adaptive fetch size for given query.
*
* @param adaptiveFetch state of adaptive fetch, which should be used during getting fetch size
* for query
* @param query query to which we want get adaptive fetch size
* @return adaptive fetch size for query or -1 if size doesn't exist/adaptive fetch state is false
*/
public int getFetchSizeForQuery(boolean adaptiveFetch, Query query) {
if (adaptiveFetch && maximumResultBufferSize != -1) {
String sql = query.getNativeSql().trim();
AdaptiveFetchCacheEntry adaptiveFetchCacheEntry = adaptiveFetchInfoMap.get(sql);
if (adaptiveFetchCacheEntry != null) {
return adaptiveFetchCacheEntry.getSize();
}
}
return -1;
}
/**
* Remove query information from caching.
*
* @param adaptiveFetch state of adaptive fetch, which should be used during removing fetch size
* for query
* @param query query to be removed from caching
*/
public void removeQuery(boolean adaptiveFetch, Query query) {
if (adaptiveFetch && maximumResultBufferSize != -1) {
String sql = query.getNativeSql().trim();
AdaptiveFetchCacheEntry adaptiveFetchCacheEntry = adaptiveFetchInfoMap.get(sql);
if (adaptiveFetchCacheEntry != null) {
adaptiveFetchCacheEntry.decrementCounter();
if (adaptiveFetchCacheEntry.getCounter() < 1) {
adaptiveFetchInfoMap.remove(sql);
} else {
adaptiveFetchInfoMap.put(sql, adaptiveFetchCacheEntry);
}
}
}
}
/**
* Set maximum and minimum constraints on given value.
*
* @param actualSize value which should be the computed fetch size
* @return value which meet the constraints
*/
private int adjustFetchSize(int actualSize) {
int size = adjustMaximumFetchSize(actualSize);
size = adjustMinimumFetchSize(size);
return size;
}
/**
* Set minimum constraint on given value.
*
* @param actualSize value which should be the computed fetch size
* @return value which meet the minimum constraint
*/
private int adjustMinimumFetchSize(int actualSize) {
if (minimumAdaptiveFetchSize == 0) {
return actualSize;
}
if (minimumAdaptiveFetchSize > actualSize) {
return minimumAdaptiveFetchSize;
} else {
return actualSize;
}
}
/**
* Set maximum constraint on given value.
*
* @param actualSize value which should be the computed fetch size
* @return value which meet the maximum constraint
*/
private int adjustMaximumFetchSize(int actualSize) {
if (maximumAdaptiveFetchSize == -1) {
return actualSize;
}
if (maximumAdaptiveFetchSize < actualSize) {
return maximumAdaptiveFetchSize;
} else {
return actualSize;
}
}
/**
* Get state of adaptive fetch.
*
* @return state of adaptive fetch
*/
public boolean getAdaptiveFetch() {
return adaptiveFetch;
}
/**
* Set state of adaptive fetch.
*
* @param adaptiveFetch desired state of adaptive fetch
*/
public void setAdaptiveFetch(boolean adaptiveFetch) {
this.adaptiveFetch = adaptiveFetch;
}
}
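// A worked sketch of the computation described above, with hypothetical numbers: given
// maxResultBuffer = 100_000_000 bytes and a largest observed row of 2_000 bytes, the adaptive
// fetch size becomes 100_000_000 / 2_000 = 50_000 rows, before the minimum/maximum clamps in
// adjustFetchSize are applied.
class AdaptiveFetchSizeSketch {
  static int rawFetchSize(long maximumResultBufferSize, int maximumRowSizeBytes) {
    // Mirrors the division in updateQueryFetchSize; clamping is intentionally omitted here.
    return (int) (maximumResultBufferSize / maximumRowSizeBytes);
  }
}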
././@LongLink 0100644 0000000 0000000 00000000153 00000250600 011610 L ustar 0000000 0000000 postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/v3/adaptivefetch/AdaptiveFetchCacheEntry.java postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/v3/adaptivefetch/AdaptiveFetchCacheEntr0100664 0000000 0000000 00000001771 00000250600 032143 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2020, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.core.v3.adaptivefetch;
public class AdaptiveFetchCacheEntry {
private int size = -1; // Holds information about adaptive fetch size for query
private int counter; // Number of queries in execution using that query info
private int maximumRowSizeBytes = -1; // Maximum row size in bytes saved for query so far
public int getSize() {
return size;
}
public void setSize(int size) {
this.size = size;
}
public int getCounter() {
return counter;
}
public void setCounter(int counter) {
this.counter = counter;
}
public int getMaximumRowSizeBytes() {
return maximumRowSizeBytes;
}
public void setMaximumRowSizeBytes(int maximumRowSizeBytes) {
this.maximumRowSizeBytes = maximumRowSizeBytes;
}
public void incrementCounter() {
counter++;
}
public void decrementCounter() {
counter--;
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/v3/replication/ 0040775 0000000 0000000 00000000000 00000250600 025433 5 ustar 00 0000000 0000000 ././@LongLink 0100644 0000000 0000000 00000000147 00000250600 011613 L ustar 0000000 0000000 postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/v3/replication/V3PGReplicationStream.java postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/v3/replication/V3PGReplicationStream.ja0100664 0000000 0000000 00000023622 00000250600 032036 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2016, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.core.v3.replication;
import org.postgresql.copy.CopyDual;
import org.postgresql.replication.LogSequenceNumber;
import org.postgresql.replication.PGReplicationStream;
import org.postgresql.replication.ReplicationType;
import org.postgresql.util.GT;
import org.postgresql.util.PSQLException;
import org.postgresql.util.PSQLState;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.net.SocketTimeoutException;
import java.nio.ByteBuffer;
import java.sql.SQLException;
import java.util.Date;
import java.util.concurrent.TimeUnit;
import java.util.logging.Level;
import java.util.logging.Logger;
public class V3PGReplicationStream implements PGReplicationStream {
private static final Logger LOGGER = Logger.getLogger(V3PGReplicationStream.class.getName());
public static final long POSTGRES_EPOCH_2000_01_01 = 946684800000L;
private static final long NANOS_PER_MILLISECOND = 1000000L;
private final CopyDual copyDual;
private final long updateInterval;
private final ReplicationType replicationType;
private final boolean automaticFlush;
private long lastStatusUpdate;
private boolean closeFlag;
private LogSequenceNumber lastServerLSN = LogSequenceNumber.INVALID_LSN;
/**
* Last receive LSN + payload size.
*/
private volatile LogSequenceNumber lastReceiveLSN = LogSequenceNumber.INVALID_LSN;
private volatile LogSequenceNumber lastAppliedLSN = LogSequenceNumber.INVALID_LSN;
private volatile LogSequenceNumber lastFlushedLSN = LogSequenceNumber.INVALID_LSN;
private volatile LogSequenceNumber startOfLastMessageLSN = LogSequenceNumber.INVALID_LSN;
private volatile LogSequenceNumber explicitlyFlushedLSN = LogSequenceNumber.INVALID_LSN;
/**
* @param copyDual bidirectional copy protocol
* @param startLSN the position in the WAL that we want to initiate replication from;
* usually the current LSN returned by calling pg_current_wal_lsn() on v10
* and above, or pg_current_xlog_location() on older server versions
* @param updateIntervalMs the number of milliseconds between status packets sent back to the
* server. A value of zero disables the periodic status updates
* completely, although an update will still be sent when requested by the
* server, to avoid timeout disconnect.
* @param automaticFlush whether the flushed LSN may be advanced automatically to the server's
* LSN on keepalive messages once the client has confirmed its last explicit flush
* @param replicationType LOGICAL or PHYSICAL
*/
public V3PGReplicationStream(CopyDual copyDual, LogSequenceNumber startLSN, long updateIntervalMs,
boolean automaticFlush, ReplicationType replicationType
) {
this.copyDual = copyDual;
this.updateInterval = updateIntervalMs * NANOS_PER_MILLISECOND;
this.lastStatusUpdate = System.nanoTime() - (updateIntervalMs * NANOS_PER_MILLISECOND);
this.lastReceiveLSN = startLSN;
this.automaticFlush = automaticFlush;
this.replicationType = replicationType;
}
@Override
public /* @Nullable */ ByteBuffer read() throws SQLException {
checkClose();
ByteBuffer payload = null;
while (payload == null && copyDual.isActive()) {
payload = readInternal(true);
}
return payload;
}
@Override
public /* @Nullable */ ByteBuffer readPending() throws SQLException {
checkClose();
return readInternal(false);
}
@Override
public LogSequenceNumber getLastReceiveLSN() {
return lastReceiveLSN;
}
@Override
public LogSequenceNumber getLastFlushedLSN() {
return lastFlushedLSN;
}
@Override
public LogSequenceNumber getLastAppliedLSN() {
return lastAppliedLSN;
}
@Override
public void setFlushedLSN(LogSequenceNumber flushed) {
this.lastFlushedLSN = flushed;
}
@Override
public void setAppliedLSN(LogSequenceNumber applied) {
this.lastAppliedLSN = applied;
}
@Override
public void forceUpdateStatus() throws SQLException {
checkClose();
updateStatusInternal(lastReceiveLSN, lastFlushedLSN, lastAppliedLSN, true);
}
@Override
public boolean isClosed() {
return closeFlag || !copyDual.isActive();
}
private /* @Nullable */ ByteBuffer readInternal(boolean block) throws SQLException {
boolean updateStatusRequired = false;
while (copyDual.isActive()) {
ByteBuffer buffer = receiveNextData(block);
if (updateStatusRequired || isTimeUpdate()) {
timeUpdateStatus();
}
if (buffer == null) {
return null;
}
int code = buffer.get();
switch (code) {
case 'k': //KeepAlive message
updateStatusRequired = processKeepAliveMessage(buffer);
updateStatusRequired |= updateInterval == 0;
break;
case 'w': //XLogData
return processXLogData(buffer);
default:
throw new PSQLException(
GT.tr("Unexpected packet type during replication: {0}", Integer.toString(code)),
PSQLState.PROTOCOL_VIOLATION
);
}
}
return null;
}
private /* @Nullable */ ByteBuffer receiveNextData(boolean block) throws SQLException {
try {
byte[] message = copyDual.readFromCopy(block);
if (message != null) {
return ByteBuffer.wrap(message);
} else {
return null;
}
} catch (PSQLException e) { //todo maybe replace on thread sleep?
if (e.getCause() instanceof SocketTimeoutException) {
//signal for keep alive
return null;
}
throw e;
}
}
private boolean isTimeUpdate() {
/* a value of 0 disables automatic updates */
if ( updateInterval == 0 ) {
return false;
}
long diff = System.nanoTime() - lastStatusUpdate;
return diff >= updateInterval;
}
private void timeUpdateStatus() throws SQLException {
updateStatusInternal(lastReceiveLSN, lastFlushedLSN, lastAppliedLSN, false);
}
private void updateStatusInternal(
LogSequenceNumber received, LogSequenceNumber flushed, LogSequenceNumber applied,
boolean replyRequired)
throws SQLException {
byte[] reply = prepareUpdateStatus(received, flushed, applied, replyRequired);
copyDual.writeToCopy(reply, 0, reply.length);
copyDual.flushCopy();
explicitlyFlushedLSN = flushed;
lastStatusUpdate = System.nanoTime();
}
private byte[] prepareUpdateStatus(LogSequenceNumber received, LogSequenceNumber flushed,
LogSequenceNumber applied, boolean replyRequired) {
ByteBuffer byteBuffer = ByteBuffer.allocate(1 + 8 + 8 + 8 + 8 + 1);
long now = System.nanoTime() / NANOS_PER_MILLISECOND;
long systemClock = TimeUnit.MICROSECONDS.convert((now - POSTGRES_EPOCH_2000_01_01),
TimeUnit.MICROSECONDS);
if (LOGGER.isLoggable(Level.FINEST)) {
@SuppressWarnings("JavaUtilDate")
Date clock = new Date(now);
LOGGER.log(Level.FINEST, " FE=> StandbyStatusUpdate(received: {0}, flushed: {1}, applied: {2}, clock: {3})",
new Object[]{received.asString(), flushed.asString(), applied.asString(), clock});
}
byteBuffer.put((byte) 'r');
byteBuffer.putLong(received.asLong());
byteBuffer.putLong(flushed.asLong());
byteBuffer.putLong(applied.asLong());
byteBuffer.putLong(systemClock);
if (replyRequired) {
byteBuffer.put((byte) 1);
} else {
byteBuffer.put(received.equals(LogSequenceNumber.INVALID_LSN) ? (byte) 1 : (byte) 0);
}
lastStatusUpdate = now;
return byteBuffer.array();
}
private boolean processKeepAliveMessage(ByteBuffer buffer) {
lastServerLSN = LogSequenceNumber.valueOf(buffer.getLong());
if (lastServerLSN.asLong() > lastReceiveLSN.asLong()) {
lastReceiveLSN = lastServerLSN;
}
// if the client has confirmed flush of last XLogData msg and KeepAlive shows ServerLSN is still
// advancing, we can safely advance FlushLSN to ServerLSN
if (automaticFlush && explicitlyFlushedLSN.asLong() >= startOfLastMessageLSN.asLong()
&& lastServerLSN.asLong() > explicitlyFlushedLSN.asLong()
&& lastServerLSN.asLong() > lastFlushedLSN.asLong()) {
lastFlushedLSN = lastServerLSN;
}
long lastServerClock = buffer.getLong();
boolean replyRequired = buffer.get() != 0;
if (LOGGER.isLoggable(Level.FINEST)) {
@SuppressWarnings("JavaUtilDate")
Date clockTime = new Date(
TimeUnit.MILLISECONDS.convert(lastServerClock, TimeUnit.MICROSECONDS)
+ POSTGRES_EPOCH_2000_01_01);
LOGGER.log(Level.FINEST, " <=BE Keepalive(lastServerWal: {0}, clock: {1} needReply: {2})",
new Object[]{lastServerLSN.asString(), clockTime, replyRequired});
}
return replyRequired;
}
private ByteBuffer processXLogData(ByteBuffer buffer) {
long startLsn = buffer.getLong();
startOfLastMessageLSN = LogSequenceNumber.valueOf(startLsn);
lastServerLSN = LogSequenceNumber.valueOf(buffer.getLong());
long systemClock = buffer.getLong();
if (replicationType == ReplicationType.LOGICAL) {
lastReceiveLSN = LogSequenceNumber.valueOf(startLsn);
} else if (replicationType == ReplicationType.PHYSICAL) {
int payloadSize = buffer.limit() - buffer.position();
lastReceiveLSN = LogSequenceNumber.valueOf(startLsn + payloadSize);
}
if (LOGGER.isLoggable(Level.FINEST)) {
LOGGER.log(Level.FINEST, " <=BE XLogData(currWal: {0}, lastServerWal: {1}, clock: {2})",
new Object[]{lastReceiveLSN.asString(), lastServerLSN.asString(), systemClock});
}
return buffer.slice();
}
private void checkClose() throws PSQLException {
if (isClosed()) {
throw new PSQLException(GT.tr("This replication stream has been closed."),
PSQLState.CONNECTION_DOES_NOT_EXIST);
}
}
@Override
public void close() throws SQLException {
if (isClosed()) {
return;
}
LOGGER.log(Level.FINEST, " FE=> StopReplication");
copyDual.endCopy();
closeFlag = true;
}
}
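// A minimal consumption sketch (illustration only, not part of the driver), assuming an already
// opened PGReplicationStream obtained elsewhere (for example via PGConnection.getReplicationAPI()):
// poll with readPending(), then report flushed/applied positions so keepalive status updates can
// advance the server's view of the slot.
class ReplicationStreamReadSketch {
  static void drain(PGReplicationStream stream) throws SQLException, InterruptedException {
    while (!stream.isClosed()) {
      ByteBuffer msg = stream.readPending();
      if (msg == null) {
        // Nothing pending; back off briefly instead of busy-waiting.
        TimeUnit.MILLISECONDS.sleep(10L);
        continue;
      }
      // The payload starts at msg.position(); process it here, then confirm the position.
      LogSequenceNumber lastReceived = stream.getLastReceiveLSN();
      stream.setAppliedLSN(lastReceived);
      stream.setFlushedLSN(lastReceived);
    }
  }
}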
././@LongLink 0100644 0000000 0000000 00000000147 00000250600 011613 L ustar 0000000 0000000 postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/v3/replication/V3ReplicationProtocol.java postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/core/v3/replication/V3ReplicationProtocol.ja0100664 0000000 0000000 00000011152 00000250600 032150 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2016, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.core.v3.replication;
import static org.postgresql.util.internal.Nullness.castNonNull;
import org.postgresql.copy.CopyDual;
import org.postgresql.core.PGStream;
import org.postgresql.core.QueryExecutor;
import org.postgresql.core.ReplicationProtocol;
import org.postgresql.replication.PGReplicationStream;
import org.postgresql.replication.ReplicationType;
import org.postgresql.replication.fluent.CommonOptions;
import org.postgresql.replication.fluent.logical.LogicalReplicationOptions;
import org.postgresql.replication.fluent.physical.PhysicalReplicationOptions;
import org.postgresql.util.GT;
import org.postgresql.util.PSQLException;
import org.postgresql.util.PSQLState;
import java.io.IOException;
import java.sql.SQLException;
import java.util.Properties;
import java.util.logging.Level;
import java.util.logging.Logger;
public class V3ReplicationProtocol implements ReplicationProtocol {
private static final Logger LOGGER = Logger.getLogger(V3ReplicationProtocol.class.getName());
private final QueryExecutor queryExecutor;
private final PGStream pgStream;
public V3ReplicationProtocol(QueryExecutor queryExecutor, PGStream pgStream) {
this.queryExecutor = queryExecutor;
this.pgStream = pgStream;
}
@Override
public PGReplicationStream startLogical(LogicalReplicationOptions options)
throws SQLException {
String query = createStartLogicalQuery(options);
return initializeReplication(query, options, ReplicationType.LOGICAL);
}
@Override
public PGReplicationStream startPhysical(PhysicalReplicationOptions options)
throws SQLException {
String query = createStartPhysicalQuery(options);
return initializeReplication(query, options, ReplicationType.PHYSICAL);
}
private PGReplicationStream initializeReplication(String query, CommonOptions options,
ReplicationType replicationType)
throws SQLException {
LOGGER.log(Level.FINEST, " FE=> StartReplication(query: {0})", query);
configureSocketTimeout(options);
CopyDual copyDual = (CopyDual) queryExecutor.startCopy(query, true);
return new V3PGReplicationStream(
castNonNull(copyDual),
options.getStartLSNPosition(),
options.getStatusInterval(),
options.getAutomaticFlush(),
replicationType
);
}
/**
* START_REPLICATION [SLOT slot_name] [PHYSICAL] XXX/XXX.
*/
private static String createStartPhysicalQuery(PhysicalReplicationOptions options) {
StringBuilder builder = new StringBuilder();
builder.append("START_REPLICATION");
if (options.getSlotName() != null) {
builder.append(" SLOT ").append(options.getSlotName());
}
builder.append(" PHYSICAL ").append(options.getStartLSNPosition().asString());
return builder.toString();
}
/**
* START_REPLICATION SLOT slot_name LOGICAL XXX/XXX [ ( option_name [option_value] [, ... ] ) ]
*/
private static String createStartLogicalQuery(LogicalReplicationOptions options) {
StringBuilder builder = new StringBuilder();
builder.append("START_REPLICATION SLOT ")
.append(options.getSlotName())
.append(" LOGICAL ")
.append(options.getStartLSNPosition().asString());
Properties slotOptions = options.getSlotOptions();
if (slotOptions.isEmpty()) {
return builder.toString();
}
//todo replace on java 8
builder.append(" (");
boolean isFirst = true;
for (String name : slotOptions.stringPropertyNames()) {
if (isFirst) {
isFirst = false;
} else {
builder.append(", ");
}
builder.append('\"').append(name).append('\"').append(" ")
.append('\'').append(slotOptions.getProperty(name)).append('\'');
}
builder.append(")");
return builder.toString();
}
private void configureSocketTimeout(CommonOptions options) throws PSQLException {
if (options.getStatusInterval() == 0) {
return;
}
try {
int previousTimeOut = pgStream.getSocket().getSoTimeout();
int minimalTimeOut;
if (previousTimeOut > 0) {
minimalTimeOut = Math.min(previousTimeOut, options.getStatusInterval());
} else {
minimalTimeOut = options.getStatusInterval();
}
pgStream.getSocket().setSoTimeout(minimalTimeOut);
// Use blocking 1ms reads for `available()` checks
pgStream.setMinStreamAvailableCheckDelay(0);
} catch (IOException ioe) {
throw new PSQLException(GT.tr("The connection attempt failed."),
PSQLState.CONNECTION_UNABLE_TO_CONNECT, ioe);
}
}
}
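// For reference, a hypothetical logical start command as assembled by createStartLogicalQuery
// above, for a slot named "my_slot" starting at 0/16B1970 with one slot option:
//   START_REPLICATION SLOT my_slot LOGICAL 0/16B1970 ("proto_version" '1')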
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/ds/ 0040775 0000000 0000000 00000000000 00000250600 022250 5 ustar 00 0000000 0000000 postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/ds/PGConnectionPoolDataSource.java 0100664 0000000 0000000 00000007147 00000250600 030254 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2004, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.ds;
import org.postgresql.ds.common.BaseDataSource;
import org.postgresql.util.DriverInfo;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.sql.SQLException;
import javax.sql.ConnectionPoolDataSource;
import javax.sql.PooledConnection;
/**
* PostgreSQL implementation of ConnectionPoolDataSource. The app server or middleware vendor should
* provide a DataSource implementation that takes advantage of this ConnectionPoolDataSource. If
* not, you can use the PostgreSQL implementation known as PoolingDataSource, but that should only
* be used if your server or middleware vendor does not provide their own. Why? The server may want
* to reuse the same Connection across all EJBs requesting a Connection within the same Transaction,
* or provide other similar advanced features.
*
*
* In any case, in order to use this ConnectionPoolDataSource, you must set the property
* databaseName. The settings for serverName, portNumber, user, and password are optional. Note:
* these properties are declared in the superclass.
*
*
*
* This implementation supports JDK 1.3 and higher.
*
*
* @author Aaron Mulder (ammulder@chariotsolutions.com)
*/
public class PGConnectionPoolDataSource extends BaseDataSource
implements ConnectionPoolDataSource, Serializable {
private boolean defaultAutoCommit = true;
/**
* Gets a description of this DataSource.
*/
@Override
public String getDescription() {
return "ConnectionPoolDataSource from " + DriverInfo.DRIVER_FULL_NAME;
}
/**
* Gets a connection which may be pooled by the app server or middleware implementation of
* DataSource.
*
* @throws java.sql.SQLException Occurs when the physical database connection cannot be
* established.
*/
@Override
public PooledConnection getPooledConnection() throws SQLException {
return new PGPooledConnection(getConnection(), defaultAutoCommit);
}
/**
* Gets a connection which may be pooled by the app server or middleware implementation of
* DataSource.
*
* @throws java.sql.SQLException Occurs when the physical database connection cannot be
* established.
*/
@Override
public PooledConnection getPooledConnection(String user, String password) throws SQLException {
return new PGPooledConnection(getConnection(user, password), defaultAutoCommit);
}
/**
* Gets whether connections supplied by this pool will have autoCommit turned on by default. The
* default value is {@code true}, so that autoCommit will be turned on by default.
*
* @return true if connections supplied by this pool will have autoCommit
*/
public boolean isDefaultAutoCommit() {
return defaultAutoCommit;
}
/**
* Sets whether connections supplied by this pool will have autoCommit turned on by default. The
* default value is {@code true}, so that autoCommit will be turned on by default.
*
* @param defaultAutoCommit whether connections supplied by this pool will have autoCommit
*/
public void setDefaultAutoCommit(boolean defaultAutoCommit) {
this.defaultAutoCommit = defaultAutoCommit;
}
private void writeObject(ObjectOutputStream out) throws IOException {
writeBaseObject(out);
out.writeBoolean(defaultAutoCommit);
}
private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException {
readBaseObject(in);
defaultAutoCommit = in.readBoolean();
}
}
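// A minimal configuration sketch (illustration only, not part of the driver), assuming only the
// standard BaseDataSource setters such as setDatabaseName/setUser/setPassword. A pooling layer
// would normally call getPooledConnection() itself and hand out the logical handles.
class PGConnectionPoolDataSourceSketch {
  static void openAndClose() throws SQLException {
    PGConnectionPoolDataSource ds = new PGConnectionPoolDataSource();
    ds.setDatabaseName("test");
    ds.setUser("postgres");
    ds.setPassword("secret");
    ds.setDefaultAutoCommit(true);
    PooledConnection pooled = ds.getPooledConnection();
    // Closing the logical handle only signals the pool via ConnectionEventListener;
    // it does not close the physical connection.
    pooled.getConnection().close();
    pooled.close(); // closes the physical connection
  }
}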
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/ds/PGPooledConnection.java 0100664 0000000 0000000 00000036262 00000250600 026612 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2004, PostgreSQL Global Development Group.
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.ds;
import static org.postgresql.util.internal.Nullness.castNonNull;
import org.postgresql.PGConnection;
import org.postgresql.PGStatement;
import org.postgresql.util.GT;
import org.postgresql.util.PSQLException;
import org.postgresql.util.PSQLState;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;
import javax.sql.ConnectionEvent;
import javax.sql.ConnectionEventListener;
import javax.sql.PooledConnection;
import javax.sql.StatementEventListener;
/**
* PostgreSQL implementation of the PooledConnection interface. This shouldn't be used directly, as
* the pooling client should just interact with the ConnectionPool instead.
*
* @author Aaron Mulder (ammulder@chariotsolutions.com)
* @author Csaba Nagy (ncsaba@yahoo.com)
* @see org.postgresql.ds.PGConnectionPoolDataSource
*/
public class PGPooledConnection implements PooledConnection {
private final List<ConnectionEventListener> listeners = new ArrayList<>();
private /* @Nullable */ Connection con;
private /* @Nullable */ ConnectionHandler last;
private final boolean autoCommit;
private final boolean isXA;
/**
* Creates a new PooledConnection representing the specified physical connection.
*
* @param con connection
* @param autoCommit whether to autocommit
* @param isXA whether connection is a XA connection
*/
public PGPooledConnection(Connection con, boolean autoCommit, boolean isXA) {
this.con = con;
this.autoCommit = autoCommit;
this.isXA = isXA;
}
public PGPooledConnection(Connection con, boolean autoCommit) {
this(con, autoCommit, false);
}
/**
* Adds a listener for close or fatal error events on the connection handed out to a client.
*/
@Override
public void addConnectionEventListener(ConnectionEventListener connectionEventListener) {
listeners.add(connectionEventListener);
}
/**
* Removes a listener for close or fatal error events on the connection handed out to a client.
*/
@Override
public void removeConnectionEventListener(ConnectionEventListener connectionEventListener) {
listeners.remove(connectionEventListener);
}
/**
* Closes the physical database connection represented by this PooledConnection. If any client has
* a connection based on this PooledConnection, it is forcibly closed as well.
*/
@Override
public void close() throws SQLException {
Connection con = this.con;
ConnectionHandler last = this.last;
if (last != null) {
last.close();
if (con != null && !con.isClosed()) {
if (!con.getAutoCommit()) {
try {
con.rollback();
} catch (SQLException ignored) {
// TODO: should we rethrow it?
}
}
}
}
if (con == null) {
return;
}
try {
con.close();
} finally {
this.con = null;
}
}
/**
* Gets a handle for a client to use. This is a wrapper around the physical connection, so the
* client can call close and it will just return the connection to the pool without really closing
* the physical connection.
*
*
* According to the JDBC 2.0 Optional Package spec (6.2.3), only one client may have an active
* handle to the connection at a time, so if there is a previous handle active when this is
* called, the previous one is forcibly closed and its work rolled back.
*
*/
@Override
public Connection getConnection() throws SQLException {
Connection con = this.con;
if (con == null) {
// Before throwing the exception, let's notify the registered listeners about the error
PSQLException sqlException =
new PSQLException(GT.tr("This PooledConnection has already been closed."),
PSQLState.CONNECTION_DOES_NOT_EXIST);
fireConnectionFatalError(sqlException);
throw sqlException;
}
// If any error occurs while opening a new connection, the listeners
// have to be notified. This gives a chance to connection pools to
// eliminate bad pooled connections.
try {
// Only one connection can be open at a time from this PooledConnection. See JDBC 2.0 Optional
// Package spec section 6.2.3
ConnectionHandler last = this.last;
if (last != null) {
last.close();
if (con != null) {
if (!con.getAutoCommit()) {
try {
con.rollback();
} catch (SQLException ignored) {
// TODO: should we rethrow it?
}
}
con.clearWarnings();
}
}
/*
* In XA-mode, autocommit is handled in PGXAConnection, because it depends on whether an
* XA-transaction is open or not
*/
if (!isXA && con != null) {
con.setAutoCommit(autoCommit);
}
} catch (SQLException sqlException) {
fireConnectionFatalError(sqlException);
throw (SQLException) sqlException.fillInStackTrace();
}
ConnectionHandler handler = new ConnectionHandler(castNonNull(con));
last = handler;
Connection proxyCon = (Connection) Proxy.newProxyInstance(getClass().getClassLoader(),
new Class[]{Connection.class, PGConnection.class}, handler);
handler.setProxy(proxyCon);
return proxyCon;
}
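// Illustrative sketch, not part of the driver source: the single-handle rule described above.
// "pooled" stands for any PGPooledConnection obtained elsewhere; requesting a second handle
// forcibly closes the first one and rolls back its work.
//
//   Connection first = pooled.getConnection();
//   Connection second = pooled.getConnection();
//   assert first.isClosed();   // the earlier handle has been invalidated
//   second.close();            // fires connectionClosed; the physical connection stays open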
/**
* Used to fire a connection closed event to all listeners.
*/
void fireConnectionClosed() {
ConnectionEvent evt = null;
// Copy the listener list so the listener can remove itself during this method call
ConnectionEventListener[] local =
listeners.toArray(new ConnectionEventListener[0]);
for (ConnectionEventListener listener : local) {
if (evt == null) {
evt = createConnectionEvent(null);
}
listener.connectionClosed(evt);
}
}
/**
* Used to fire a connection error event to all listeners.
*/
void fireConnectionFatalError(SQLException e) {
ConnectionEvent evt = null;
// Copy the listener list so the listener can remove itself during this method call
ConnectionEventListener[] local =
listeners.toArray(new ConnectionEventListener[0]);
for (ConnectionEventListener listener : local) {
if (evt == null) {
evt = createConnectionEvent(e);
}
listener.connectionErrorOccurred(evt);
}
}
protected ConnectionEvent createConnectionEvent(/* @Nullable */ SQLException e) {
return e == null ? new ConnectionEvent(this) : new ConnectionEvent(this, e);
}
// Classes we consider fatal.
private static final String[] fatalClasses = {
"08", // connection error
"53", // insufficient resources
// nb: not just "57" as that includes query cancel which is nonfatal
"57P01", // admin shutdown
"57P02", // crash shutdown
"57P03", // cannot connect now
"58", // system error (backend)
"60", // system error (driver)
"99", // unexpected error
"F0", // configuration file error (backend)
"XX", // internal error (backend)
};
private static boolean isFatalState(/* @Nullable */ String state) {
if (state == null) {
// no info, assume fatal
return true;
}
if (state.length() < 2) {
// no class info, assume fatal
return true;
}
for (String fatalClass : fatalClasses) {
if (state.startsWith(fatalClass)) {
return true; // fatal
}
}
return false;
}
/**
* Fires a connection error event, but only if we think the exception is fatal.
*
* @param e the SQLException to consider
*/
private void fireConnectionError(SQLException e) {
if (!isFatalState(e.getSQLState())) {
return;
}
fireConnectionFatalError(e);
}
/**
* Instead of declaring a class implementing Connection, which would have to be updated for every
* JDK rev, use a dynamic proxy to handle all calls through the Connection interface. This is the
* part that requires JDK 1.3 or higher, though JDK 1.2 could be supported with a 3rd-party proxy
* package.
*/
private class ConnectionHandler implements InvocationHandler {
private /* @Nullable */ Connection con;
private /* @Nullable */ Connection proxy; // the Connection the client is currently using, which is a proxy
private boolean automatic;
ConnectionHandler(Connection con) {
this.con = con;
}
@Override
@SuppressWarnings("throwing.nullable")
public /* @Nullable */ Object invoke(Object proxy, Method method, /* @Nullable */ Object[] args) throws Throwable {
final String methodName = method.getName();
// From Object
if (method.getDeclaringClass() == Object.class) {
if ("toString".equals(methodName)) {
return "Pooled connection wrapping physical connection " + con;
}
if ("equals".equals(methodName)) {
return proxy == args[0];
}
if ("hashCode".equals(methodName)) {
return System.identityHashCode(proxy);
}
try {
return method.invoke(con, args);
} catch (InvocationTargetException e) {
// throwing.nullable
throw e.getTargetException();
}
}
// All the rest is from the Connection or PGConnection interface
Connection con = this.con;
if ("isClosed".equals(methodName)) {
return con == null || con.isClosed();
}
if ("close".equals(methodName)) {
// we are already closed and a double close
// is not an error.
if (con == null) {
return null;
}
SQLException ex = null;
if (!con.isClosed()) {
if (!isXA && !con.getAutoCommit()) {
try {
con.rollback();
} catch (SQLException e) {
ex = e;
}
}
con.clearWarnings();
}
this.con = null;
this.proxy = null;
last = null;
fireConnectionClosed();
if (ex != null) {
throw ex;
}
return null;
}
if (con == null || con.isClosed()) {
throw new PSQLException(automatic
? GT.tr(
"Connection has been closed automatically because a new connection was opened for the same PooledConnection or the PooledConnection has been closed.")
: GT.tr("Connection has been closed."), PSQLState.CONNECTION_DOES_NOT_EXIST);
}
// From here on in, we invoke via reflection, catch exceptions,
// and check if they're fatal before rethrowing.
try {
if ("createStatement".equals(methodName)) {
Statement st = castNonNull((Statement) method.invoke(con, args));
return Proxy.newProxyInstance(getClass().getClassLoader(),
new Class[]{Statement.class, PGStatement.class},
new StatementHandler(this, st));
} else if ("prepareCall".equals(methodName)) {
Statement st = castNonNull((Statement) method.invoke(con, args));
return Proxy.newProxyInstance(getClass().getClassLoader(),
new Class[]{CallableStatement.class, PGStatement.class},
new StatementHandler(this, st));
} else if ("prepareStatement".equals(methodName)) {
Statement st = castNonNull((Statement) method.invoke(con, args));
return Proxy.newProxyInstance(getClass().getClassLoader(),
new Class[]{PreparedStatement.class, PGStatement.class},
new StatementHandler(this, st));
} else {
return method.invoke(con, args);
}
} catch (final InvocationTargetException ite) {
final Throwable te = ite.getTargetException();
if (te instanceof SQLException) {
fireConnectionError((SQLException) te); // Tell listeners about exception if it's fatal
}
throw te;
}
}
Connection getProxy() {
return castNonNull(proxy);
}
void setProxy(Connection proxy) {
this.proxy = proxy;
}
public void close() {
if (con != null) {
automatic = true;
}
con = null;
proxy = null;
// No close event fired here: see JDBC 2.0 Optional Package spec section 6.3
}
@SuppressWarnings("UnusedMethod")
public boolean isClosed() {
return con == null;
}
}
/**
* Instead of declaring classes implementing Statement, PreparedStatement, and CallableStatement,
* which would have to be updated for every JDK rev, use a dynamic proxy to handle all calls
* through the Statement interfaces. This is the part that requires JDK 1.3 or higher, though JDK
* 1.2 could be supported with a 3rd-party proxy package.
*
* The StatementHandler is required in order to return the proper Connection proxy for the
* getConnection method.
*/
private class StatementHandler implements InvocationHandler {
private /* @Nullable */ ConnectionHandler con;
private /* @Nullable */ Statement st;
StatementHandler(ConnectionHandler con, Statement st) {
this.con = con;
this.st = st;
}
@Override
@SuppressWarnings("throwing.nullable")
public /* @Nullable */ Object invoke(Object proxy, Method method, /* @Nullable */ Object[] args)
throws Throwable {
final String methodName = method.getName();
// From Object
if (method.getDeclaringClass() == Object.class) {
if ("toString".equals(methodName)) {
return "Pooled statement wrapping physical statement " + st;
}
if ("hashCode".equals(methodName)) {
return System.identityHashCode(proxy);
}
if ("equals".equals(methodName)) {
return proxy == args[0];
}
return method.invoke(st, args);
}
Statement st = this.st;
// All the rest is from the Statement interface
if ("isClosed".equals(methodName)) {
return st == null || st.isClosed();
}
if ("close".equals(methodName)) {
if (st == null || st.isClosed()) {
return null;
}
con = null;
this.st = null;
st.close();
return null;
}
if (st == null || st.isClosed()) {
throw new PSQLException(GT.tr("Statement has been closed."), PSQLState.OBJECT_NOT_IN_STATE);
}
if ("getConnection".equals(methodName)) {
return castNonNull(con).getProxy(); // the proxied connection, not a physical connection
}
// Delegate the call to the proxied Statement.
try {
return method.invoke(st, args);
} catch (final InvocationTargetException ite) {
final Throwable te = ite.getTargetException();
if (te instanceof SQLException) {
fireConnectionError((SQLException) te); // Tell listeners about exception if it's fatal
}
throw te;
}
}
}
@Override
public void removeStatementEventListener(StatementEventListener listener) {
}
@Override
public void addStatementEventListener(StatementEventListener listener) {
}
}
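// Illustrative sketch, not part of the driver source: how a pooling client typically consumes
// PGPooledConnection through PGConnectionPoolDataSource. Connection settings are placeholders.
//
//   PGConnectionPoolDataSource cpds = new PGConnectionPoolDataSource();
//   cpds.setDatabaseName("test");
//   cpds.setUser("testuser");
//   cpds.setPassword("secret");
//   PooledConnection pooled = cpds.getPooledConnection();
//   pooled.addConnectionEventListener(new ConnectionEventListener() {
//     @Override
//     public void connectionClosed(ConnectionEvent event) {
//       // the client released its handle; return the PooledConnection to the pool
//     }
//     @Override
//     public void connectionErrorOccurred(ConnectionEvent event) {
//       // fatal error; discard the PooledConnection
//     }
//   });
//   try (Connection handle = pooled.getConnection()) {
//     // closing the handle only releases it; the physical connection stays open
//   }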
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/ds/PGPoolingDataSource.java 0100664 0000000 0000000 00000042002 00000250600 026717 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2004, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.ds;
import static org.postgresql.util.internal.Nullness.castNonNull;
import org.postgresql.ds.common.BaseDataSource;
import org.postgresql.jdbc.ResourceLock;
import org.postgresql.util.DriverInfo;
import org.postgresql.util.GT;
import org.postgresql.util.PSQLException;
import org.postgresql.util.PSQLState;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.sql.Connection;
import java.sql.SQLException;
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import javax.naming.NamingException;
import javax.naming.Reference;
import javax.naming.StringRefAddr;
import javax.sql.ConnectionEvent;
import javax.sql.ConnectionEventListener;
import javax.sql.DataSource;
import javax.sql.PooledConnection;
/**
* DataSource which uses connection pooling. Don't use this if your
* server/middleware vendor provides a connection pooling implementation which interfaces with the
* PostgreSQL ConnectionPoolDataSource implementation! This class is provided as a
* convenience, but the JDBC Driver is really not supposed to handle the connection pooling
* algorithm. Instead, the server or middleware product is supposed to handle the mechanics of
* connection pooling, and use the PostgreSQL implementation of ConnectionPoolDataSource to provide
* the connections to pool.
*
*
* If you're sure you want to use this, then you must set the properties dataSourceName,
* databaseName, user, and password (if required for the user). The settings for serverName,
* portNumber, initialConnections, and maxConnections are optional. Note that only connections
* for the default user will be pooled! Connections for other users will be normal non-pooled
* connections, and will not count against the maximum pool size limit.
*
*
*
* If you put this DataSource in JNDI, and access it from different JVMs (or otherwise load this
* class from different ClassLoaders), you'll end up with one pool per ClassLoader or VM. This is
* another area where a server-specific implementation may provide advanced features, such as using
* a single pool across all VMs in a cluster.
*
*
*
* This implementation supports JDK 1.5 and higher.
*
*
* @author Aaron Mulder (ammulder@chariotsolutions.com)
*
* @deprecated Since 42.0.0, instead of this class you should use a fully featured connection pool
* like HikariCP, vibur-dbcp, commons-dbcp, c3p0, etc.
*/
@Deprecated
public class PGPoolingDataSource extends BaseDataSource implements DataSource {
protected static ConcurrentMap<String, PGPoolingDataSource> dataSources =
new ConcurrentHashMap<>();
public static /* @Nullable */ PGPoolingDataSource getDataSource(String name) {
return dataSources.get(name);
}
// Additional Data Source properties
protected /* @Nullable */ String dataSourceName; // Must be protected for subclasses to sync updates to it
private int initialConnections;
private int maxConnections;
// State variables
private boolean initialized;
private final Deque<PooledConnection> available = new ArrayDeque<>();
private final Deque<PooledConnection> used = new ArrayDeque<>();
private boolean isClosed;
private final ResourceLock lock = new ResourceLock();
private final Condition lockCondition = lock.newCondition();
private /* @Nullable */ PGConnectionPoolDataSource source;
/**
* Gets a description of this DataSource.
*/
@Override
public String getDescription() {
return "Pooling DataSource '" + dataSourceName + " from " + DriverInfo.DRIVER_FULL_NAME;
}
/**
* Ensures the DataSource properties are not changed after the DataSource has been used.
*
* @throws IllegalStateException The Server Name cannot be changed after the DataSource has been
* used.
*/
@Override
public void setServerName(String serverName) {
if (initialized) {
throw new IllegalStateException(
"Cannot set Data Source properties after DataSource has been used");
}
super.setServerName(serverName);
}
/**
* Ensures the DataSource properties are not changed after the DataSource has been used.
*
* @throws IllegalStateException The Database Name cannot be changed after the DataSource has been
* used.
*/
@Override
public void setDatabaseName(/* @Nullable */ String databaseName) {
if (initialized) {
throw new IllegalStateException(
"Cannot set Data Source properties after DataSource has been used");
}
super.setDatabaseName(databaseName);
}
/**
* Ensures the DataSource properties are not changed after the DataSource has been used.
*
* @throws IllegalStateException The User cannot be changed after the DataSource has been used.
*/
@Override
public void setUser(/* @Nullable */ String user) {
if (initialized) {
throw new IllegalStateException(
"Cannot set Data Source properties after DataSource has been used");
}
super.setUser(user);
}
/**
* Ensures the DataSource properties are not changed after the DataSource has been used.
*
* @throws IllegalStateException The Password cannot be changed after the DataSource has been
* used.
*/
@Override
public void setPassword(/* @Nullable */ String password) {
if (initialized) {
throw new IllegalStateException(
"Cannot set Data Source properties after DataSource has been used");
}
super.setPassword(password);
}
/**
* Ensures the DataSource properties are not changed after the DataSource has been used.
*
* @throws IllegalStateException The Port Number cannot be changed after the DataSource has been
* used.
*/
@Override
public void setPortNumber(int portNumber) {
if (initialized) {
throw new IllegalStateException(
"Cannot set Data Source properties after DataSource has been used");
}
super.setPortNumber(portNumber);
}
/**
* Gets the number of connections that will be created when this DataSource is initialized. If you
* do not call initialize explicitly, it will be initialized the first time a connection is drawn
* from it.
*
* @return number of connections that will be created when this DataSource is initialized
*/
public int getInitialConnections() {
return initialConnections;
}
/**
* Sets the number of connections that will be created when this DataSource is initialized. If you
* do not call initialize explicitly, it will be initialized the first time a connection is drawn
* from it.
*
* @param initialConnections number of initial connections
* @throws IllegalStateException The Initial Connections cannot be changed after the DataSource
* has been used.
*/
public void setInitialConnections(int initialConnections) {
if (initialized) {
throw new IllegalStateException(
"Cannot set Data Source properties after DataSource has been used");
}
this.initialConnections = initialConnections;
}
/**
* Gets the maximum number of connections that the pool will allow. If a request comes in and this
* many connections are in use, the request will block until a connection is available. Note that
* connections for a user other than the default user will not be pooled and don't count against
* this limit.
*
* @return The maximum number of pooled connection allowed, or 0 for no maximum.
*/
public int getMaxConnections() {
return maxConnections;
}
/**
* Sets the maximum number of connections that the pool will allow. If a request comes in and this
* many connections are in use, the request will block until a connection is available. Note that
* connections for a user other than the default user will not be pooled and don't count against
* this limit.
*
* @param maxConnections The maximum number of pooled connection to allow, or 0 for no maximum.
* @throws IllegalStateException The Maximum Connections cannot be changed after the DataSource
* has been used.
*/
public void setMaxConnections(int maxConnections) {
if (initialized) {
throw new IllegalStateException(
"Cannot set Data Source properties after DataSource has been used");
}
this.maxConnections = maxConnections;
}
/**
* Gets the name of this DataSource. This uniquely identifies the DataSource. You cannot use more
* than one DataSource in the same VM with the same name.
*
* @return name of this DataSource
*/
public /* @Nullable */ String getDataSourceName() {
return dataSourceName;
}
/**
* Sets the name of this DataSource. This is required, and uniquely identifies the DataSource. You
* cannot create or use more than one DataSource in the same VM with the same name.
*
* @param dataSourceName datasource name
* @throws IllegalStateException The Data Source Name cannot be changed after the DataSource has
* been used.
* @throws IllegalArgumentException Another PoolingDataSource with the same dataSourceName already
* exists.
*/
public void setDataSourceName(String dataSourceName) {
if (initialized) {
throw new IllegalStateException(
"Cannot set Data Source properties after DataSource has been used");
}
if (this.dataSourceName != null && dataSourceName != null
&& dataSourceName.equals(this.dataSourceName)) {
return;
}
PGPoolingDataSource previous = dataSources.putIfAbsent(dataSourceName, this);
if (previous != null) {
throw new IllegalArgumentException(
"DataSource with name '" + dataSourceName + "' already exists!");
}
if (this.dataSourceName != null) {
dataSources.remove(this.dataSourceName);
}
this.dataSourceName = dataSourceName;
}
/**
* Initializes this DataSource. If the initialConnections is greater than zero, that number of
* connections will be created. After this method is called, the DataSource properties cannot be
* changed. If you do not call this explicitly, it will be called the first time you get a
* connection from the DataSource.
*
* @throws SQLException Occurs when the initialConnections is greater than zero, but the
* DataSource is not able to create enough physical connections.
*/
public void initialize() throws SQLException {
try (ResourceLock ignore = lock.obtain()) {
PGConnectionPoolDataSource source = createConnectionPool();
this.source = source;
try {
source.initializeFrom(this);
} catch (Exception e) {
throw new PSQLException(GT.tr("Failed to setup DataSource."), PSQLState.UNEXPECTED_ERROR,
e);
}
while (available.size() < initialConnections) {
available.push(source.getPooledConnection());
}
initialized = true;
}
}
protected boolean isInitialized() {
return initialized;
}
/**
* Creates the appropriate ConnectionPool to use for this DataSource.
*
* @return appropriate ConnectionPool to use for this DataSource
*/
protected PGConnectionPoolDataSource createConnectionPool() {
return new PGConnectionPoolDataSource();
}
/**
* Gets a non-pooled connection, unless the user and password are the same as the default
* values for this connection pool.
*
* @return A pooled connection if the default user and password are used, otherwise a non-pooled connection.
* @throws SQLException Occurs when no pooled connection is available, and a new physical
* connection cannot be created.
*/
@Override
public Connection getConnection(/* @Nullable */ String user, /* @Nullable */ String password)
throws SQLException {
// If this is for the default user/password, use a pooled connection
if (user == null || (user.equals(getUser()) && ((password == null && getPassword() == null)
|| (password != null && password.equals(getPassword()))))) {
return getConnection();
}
// Otherwise, use a non-pooled connection
if (!initialized) {
initialize();
}
return super.getConnection(user, password);
}
/**
* Gets a connection from the connection pool.
*
* @return A pooled connection.
* @throws SQLException Occurs when no pooled connection is available, and a new physical
* connection cannot be created.
*/
@Override
public Connection getConnection() throws SQLException {
if (!initialized) {
initialize();
}
return getPooledConnection();
}
/**
* Closes this DataSource, and all the pooled connections, whether in use or not.
*/
public void close() {
try (ResourceLock ignore = lock.obtain()) {
isClosed = true;
while (!available.isEmpty()) {
PooledConnection pci = available.pop();
try {
pci.close();
} catch (SQLException ignored) {
// We can't do much if the connection close fails, try closing the rest
}
}
while (!used.isEmpty()) {
PooledConnection pci = used.pop();
pci.removeConnectionEventListener(connectionEventListener);
try {
pci.close();
} catch (SQLException ignored) {
// We can't do much if the connection close fails, try closing the rest
}
}
}
removeStoredDataSource();
}
protected void removeStoredDataSource() {
dataSources.remove(castNonNull(dataSourceName));
}
protected void addDataSource(String dataSourceName) {
dataSources.put(dataSourceName, this);
}
/**
* Gets a connection from the pool. Will get an available one if present, or create a new one if
* under the max limit. Will block if all used and a new one would exceed the max.
*/
private Connection getPooledConnection() throws SQLException {
PooledConnection pc = null;
try (ResourceLock ignore = lock.obtain()) {
if (isClosed) {
throw new PSQLException(GT.tr("DataSource has been closed."),
PSQLState.CONNECTION_DOES_NOT_EXIST);
}
while (true) {
if (!available.isEmpty()) {
pc = available.pop();
used.push(pc);
break;
}
if (maxConnections == 0 || used.size() < maxConnections) {
pc = castNonNull(source).getPooledConnection();
used.push(pc);
break;
} else {
try {
// Wake up every second at a minimum
lockCondition.await(1000L, TimeUnit.MILLISECONDS);
} catch (InterruptedException ignored) {
// Retry later
}
}
}
}
pc.addConnectionEventListener(connectionEventListener);
return pc.getConnection();
}
/**
* Notified when a pooled connection is closed, or a fatal error occurs on a pooled connection.
* This is the only way connections are marked as unused.
*/
private final ConnectionEventListener connectionEventListener = new ConnectionEventListener() {
@Override
public void connectionClosed(ConnectionEvent event) {
((PooledConnection) event.getSource()).removeConnectionEventListener(this);
try (ResourceLock ignore = lock.obtain()) {
if (isClosed) {
return; // DataSource has been closed
}
boolean removed = used.remove(event.getSource());
if (removed) {
available.push((PooledConnection) event.getSource());
// There's now a new connection available
lockCondition.signal();
} else {
// a connection error occurred
}
}
}
/**
* This is only called for fatal errors, where the physical connection is useless afterward and
* should be removed from the pool.
*/
@Override
public void connectionErrorOccurred(ConnectionEvent event) {
((PooledConnection) event.getSource()).removeConnectionEventListener(this);
try (ResourceLock ignore = lock.obtain()) {
if (isClosed) {
return; // DataSource has been closed
}
used.remove(event.getSource());
// We're now at least 1 connection under the max
lockCondition.signal();
}
}
};
/**
* Adds custom properties for this DataSource to the properties defined in the superclass.
*/
@Override
public Reference getReference() throws NamingException {
Reference ref = super.getReference();
ref.add(new StringRefAddr("dataSourceName", dataSourceName));
if (initialConnections > 0) {
ref.add(new StringRefAddr("initialConnections", Integer.toString(initialConnections)));
}
if (maxConnections > 0) {
ref.add(new StringRefAddr("maxConnections", Integer.toString(maxConnections)));
}
return ref;
}
@Override
public boolean isWrapperFor(Class<?> iface) throws SQLException {
return iface.isAssignableFrom(getClass());
}
@Override
public <T> T unwrap(Class<T> iface) throws SQLException {
if (iface.isAssignableFrom(getClass())) {
return iface.cast(this);
}
throw new SQLException("Cannot unwrap to " + iface.getName());
}
}
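// Illustrative sketch, not part of the driver source: minimal use of this deprecated pool
// (values are placeholders). A dedicated pool such as HikariCP is the recommended replacement.
//
//   PGPoolingDataSource pool = new PGPoolingDataSource();
//   pool.setDataSourceName("example-pool");
//   pool.setServerNames(new String[]{"localhost"});
//   pool.setDatabaseName("test");
//   pool.setUser("testuser");
//   pool.setPassword("secret");
//   pool.setMaxConnections(10);
//   try (Connection con = pool.getConnection()) {
//     // closing the connection returns it to the pool rather than closing it physically
//   }
//   pool.close(); // closes all pooled physical connections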
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/ds/PGSimpleDataSource.java 0100664 0000000 0000000 00000003141 00000250600 026542 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2004, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.ds;
import org.postgresql.ds.common.BaseDataSource;
import org.postgresql.util.DriverInfo;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.sql.SQLException;
import javax.sql.DataSource;
/**
* Simple DataSource which does not perform connection pooling. In order to use the DataSource, you
* must set the property databaseName. The settings for serverName, portNumber, user, and password
* are optional. Note: these properties are declared in the superclass.
*
* @author Aaron Mulder (ammulder@chariotsolutions.com)
*/
public class PGSimpleDataSource extends BaseDataSource implements DataSource, Serializable {
/**
* Gets a description of this DataSource.
*/
@Override
public String getDescription() {
return "Non-Pooling DataSource from " + DriverInfo.DRIVER_FULL_NAME;
}
private void writeObject(ObjectOutputStream out) throws IOException {
writeBaseObject(out);
}
private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException {
readBaseObject(in);
}
@Override
public boolean isWrapperFor(Class<?> iface) throws SQLException {
return iface.isAssignableFrom(getClass());
}
@Override
public <T> T unwrap(Class<T> iface) throws SQLException {
if (iface.isAssignableFrom(getClass())) {
return iface.cast(this);
}
throw new SQLException("Cannot unwrap to " + iface.getName());
}
}
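// Illustrative sketch, not part of the driver source: typical configuration of the non-pooling
// DataSource (values are placeholders).
//
//   PGSimpleDataSource ds = new PGSimpleDataSource();
//   ds.setServerNames(new String[]{"localhost"});
//   ds.setPortNumbers(new int[]{5432});
//   ds.setDatabaseName("test");
//   ds.setUser("testuser");
//   ds.setPassword("secret");
//   try (Connection con = ds.getConnection()) {
//     // each call to getConnection opens a new physical connection; no pooling is performed
//   }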
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/ds/common/ 0040775 0000000 0000000 00000000000 00000250600 023540 5 ustar 00 0000000 0000000 postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/ds/common/BaseDataSource.java 0100664 0000000 0000000 00000156162 00000250600 027240 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2004, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.ds.common;
import static org.postgresql.util.internal.Nullness.castNonNull;
import org.postgresql.Driver;
import org.postgresql.PGProperty;
import org.postgresql.jdbc.AutoSave;
import org.postgresql.jdbc.PreferQueryMode;
import org.postgresql.util.ExpressionProperties;
import org.postgresql.util.GT;
import org.postgresql.util.PSQLException;
import org.postgresql.util.PSQLState;
import org.postgresql.util.URLCoder;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.PrintWriter;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Arrays;
import java.util.Properties;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.naming.NamingException;
import javax.naming.RefAddr;
import javax.naming.Reference;
import javax.naming.Referenceable;
import javax.naming.StringRefAddr;
import javax.sql.CommonDataSource;
/**
* Base class for data sources and related classes.
*
* @author Aaron Mulder (ammulder@chariotsolutions.com)
*/
public abstract class BaseDataSource implements CommonDataSource, Referenceable {
private static final Logger LOGGER = Logger.getLogger(BaseDataSource.class.getName());
// Standard properties, defined in the JDBC 2.0 Optional Package spec
private String[] serverNames = new String[]{"localhost"};
private /* @Nullable */ String databaseName = "";
private /* @Nullable */ String user;
private /* @Nullable */ String password;
private int[] portNumbers = new int[]{0};
// Map for all other properties
private Properties properties = new Properties();
/*
* Ensure the driver is loaded, as the JDBC driver might be invisible to Java's ServiceLoader.
* Usually, {@code Class.forName(...)} is not required, as {@link DriverManager} detects JDBC drivers
* via {@code META-INF/services/java.sql.Driver} entries. However, there are cases where the driver
* is only visible to an application-level classloader, so manual registration of the driver
* may be required.
*/
static {
try {
Class.forName("org.postgresql.Driver");
} catch (ClassNotFoundException e) {
throw new IllegalStateException(
"BaseDataSource is unable to load org.postgresql.Driver. Please check if you have proper PostgreSQL JDBC Driver jar on the classpath",
e);
}
}
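// Illustrative sketch, not part of the driver source: manual registration equivalent to the
// static initializer above, for cases where the driver jar is only visible to an
// application-level classloader and ServiceLoader discovery does not pick it up.
//
//   Class.forName("org.postgresql.Driver");
//   // or, explicitly:
//   DriverManager.registerDriver(new org.postgresql.Driver());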
/**
* Gets a connection to the PostgreSQL database. The database is identified by the DataSource
* properties serverName, databaseName, and portNumber. The user to connect as is identified by
* the DataSource properties user and password.
*
* @return A valid database connection.
* @throws SQLException Occurs when the database connection cannot be established.
*/
public Connection getConnection() throws SQLException {
return getConnection(user, password);
}
/**
* Gets a connection to the PostgreSQL database. The database is identified by the DataSource
* properties serverName, databaseName, and portNumber. The user to connect as is identified by
* the arguments user and password, which override the DataSource properties by the same name.
*
* @param user user
* @param password password
* @return A valid database connection.
* @throws SQLException Occurs when the database connection cannot be established.
*/
public Connection getConnection(/* @Nullable */ String user, /* @Nullable */ String password)
throws SQLException {
try {
Connection con = DriverManager.getConnection(getUrl(), user, password);
if (LOGGER.isLoggable(Level.FINE)) {
LOGGER.log(Level.FINE, "Created a {0} for {1} at {2}",
new Object[]{getDescription(), user, getUrl()});
}
return con;
} catch (SQLException e) {
LOGGER.log(Level.FINE, "Failed to create a {0} for {1} at {2}: {3}",
new Object[]{getDescription(), user, getUrl(), e});
throw e;
}
}
/**
* This implementation doesn't use a LogWriter.
*/
@Override
public /* @Nullable */ PrintWriter getLogWriter() {
return null;
}
/**
* This implementation doesn't use a LogWriter.
*
* @param printWriter Not used
*/
@Override
public void setLogWriter(/* @Nullable */ PrintWriter printWriter) {
// NOOP
}
/**
* Gets the name of the host the PostgreSQL database is running on.
*
* @return name of the host the PostgreSQL database is running on
* @deprecated use {@link #getServerNames()}
*/
@Deprecated
public String getServerName() {
return serverNames[0];
}
/**
* Gets the name of the host(s) the PostgreSQL database is running on.
*
* @return name of the host(s) the PostgreSQL database is running on
*/
public String[] getServerNames() {
return serverNames;
}
/**
* Sets the name of the host the PostgreSQL database is running on. If this is changed, it will
* only affect future calls to getConnection. The default value is {@code localhost}.
*
* @param serverName name of the host the PostgreSQL database is running on
* @deprecated use {@link #setServerNames(String[])}
*/
@Deprecated
public void setServerName(String serverName) {
this.setServerNames(new String[]{serverName});
}
/**
* Sets the name of the host(s) the PostgreSQL database is running on. If this is changed, it will
* only affect future calls to getConnection. The default value is {@code localhost}.
*
* @param serverNames name of the host(s) the PostgreSQL database is running on
*/
@SuppressWarnings("nullness")
public void setServerNames(/* @Nullable */ String /* @Nullable */ [] serverNames) {
if (serverNames == null || serverNames.length == 0) {
this.serverNames = new String[]{"localhost"};
} else {
serverNames = serverNames.clone();
for (int i = 0; i < serverNames.length; i++) {
String serverName = serverNames[i];
if (serverName == null || "".equals(serverName)) {
serverNames[i] = "localhost";
}
}
this.serverNames = serverNames;
}
}
/**
* Gets the name of the PostgreSQL database, running on the server identified by the serverName
* property.
*
* @return name of the PostgreSQL database
*/
public /* @Nullable */ String getDatabaseName() {
return databaseName;
}
/**
* Sets the name of the PostgreSQL database, running on the server identified by the serverName
* property. If this is changed, it will only affect future calls to getConnection.
*
* @param databaseName name of the PostgreSQL database
*/
public void setDatabaseName(/* @Nullable */ String databaseName) {
this.databaseName = databaseName;
}
/**
* Gets a description of this DataSource-ish thing. Must be customized by subclasses.
*
* @return description of this DataSource-ish thing
*/
public abstract String getDescription();
/**
* Gets the user to connect as by default. If this is not specified, you must use the
* getConnection method which takes a user and password as parameters.
*
* @return user to connect as by default
*/
public /* @Nullable */ String getUser() {
return user;
}
/**
* Sets the user to connect as by default. If this is not specified, you must use the
* getConnection method which takes a user and password as parameters. If this is changed, it will
* only affect future calls to getConnection.
*
* @param user user to connect as by default
*/
public void setUser(/* @Nullable */ String user) {
this.user = user;
}
/**
* Gets the password to connect with by default. If this is not specified but a password is needed
* to log in, you must use the getConnection method which takes a user and password as parameters.
*
* @return password to connect with by default
*/
public /* @Nullable */ String getPassword() {
return password;
}
/**
* Sets the password to connect with by default. If this is not specified but a password is needed
* to log in, you must use the getConnection method which takes a user and password as parameters.
* If this is changed, it will only affect future calls to getConnection.
*
* @param password password to connect with by default
*/
public void setPassword(/* @Nullable */ String password) {
this.password = password;
}
/**
* Gets the port which the PostgreSQL server is listening on for TCP/IP connections.
*
* @return The port, or 0 if the default port will be used.
* @deprecated use {@link #getPortNumbers()}
*/
@Deprecated
public int getPortNumber() {
if (portNumbers == null || portNumbers.length == 0) {
return 0;
}
return portNumbers[0];
}
/**
* Gets the port(s) which the PostgreSQL server is listening on for TCP/IP connections.
*
* @return The port(s), or 0 if the default port will be used.
*/
public int[] getPortNumbers() {
return portNumbers;
}
/**
* Sets the port which the PostgreSQL server is listening on for TCP/IP connections. Be sure the
* -i flag is passed to postmaster when PostgreSQL is started. If this is not set, or set to 0,
* the default port will be used.
*
* @param portNumber port which the PostgreSQL server is listening on for TCP/IP
* @deprecated use {@link #setPortNumbers(int[])}
*/
@Deprecated
public void setPortNumber(int portNumber) {
setPortNumbers(new int[]{portNumber});
}
/**
* Sets the port(s) which the PostgreSQL server is listening on for TCP/IP connections. Be sure the
* -i flag is passed to postmaster when PostgreSQL is started. If this is not set, or set to 0,
* the default port will be used.
*
* @param portNumbers port(s) which the PostgreSQL server is listening on for TCP/IP
*/
public void setPortNumbers(int /* @Nullable */ [] portNumbers) {
if (portNumbers == null || portNumbers.length == 0) {
portNumbers = new int[]{0};
}
this.portNumbers = Arrays.copyOf(portNumbers, portNumbers.length);
}
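// Illustrative sketch, not part of the driver source: configuring several hosts on a concrete
// subclass such as PGSimpleDataSource (host names are placeholders). Server names and port
// numbers are paired by position when the connection URL is built.
//
//   PGSimpleDataSource ds = new PGSimpleDataSource();
//   ds.setServerNames(new String[]{"primary.example.com", "replica.example.com"});
//   ds.setPortNumbers(new int[]{5432, 5432});
//   ds.setDatabaseName("test");
//   ds.setTargetServerType("primary");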
/**
* @return command line options for this connection
*/
public /* @Nullable */ String getOptions() {
return PGProperty.OPTIONS.getOrDefault(properties);
}
/**
* Set command line options for this connection
*
* @param options string to set options to
*/
public void setOptions(/* @Nullable */ String options) {
PGProperty.OPTIONS.set(properties, options);
}
/**
* @return login timeout
* @see PGProperty#LOGIN_TIMEOUT
*/
@Override
public int getLoginTimeout() {
return PGProperty.LOGIN_TIMEOUT.getIntNoCheck(properties);
}
/**
* @param loginTimeout login timeout
* @see PGProperty#LOGIN_TIMEOUT
*/
@Override
public void setLoginTimeout(int loginTimeout) {
PGProperty.LOGIN_TIMEOUT.set(properties, loginTimeout);
}
/**
* @return connect timeout
* @see PGProperty#CONNECT_TIMEOUT
*/
public int getConnectTimeout() {
return PGProperty.CONNECT_TIMEOUT.getIntNoCheck(properties);
}
/**
* @param connectTimeout connect timeout
* @see PGProperty#CONNECT_TIMEOUT
*/
public void setConnectTimeout(int connectTimeout) {
PGProperty.CONNECT_TIMEOUT.set(properties, connectTimeout);
}
/**
*
* @return GSS ResponseTimeout
* @see PGProperty#GSS_RESPONSE_TIMEOUT
*/
public int getGssResponseTimeout() {
return PGProperty.GSS_RESPONSE_TIMEOUT.getIntNoCheck(properties);
}
/**
*
* @param gssResponseTimeout gss response timeout
* @see PGProperty#GSS_RESPONSE_TIMEOUT
*/
public void setGssResponseTimeout(int gssResponseTimeout) {
PGProperty.GSS_RESPONSE_TIMEOUT.set(properties, gssResponseTimeout);
}
/**
*
* @return SSL ResponseTimeout
* @see PGProperty#SSL_RESPONSE_TIMEOUT
*/
public int getSslResponseTimeout() {
return PGProperty.SSL_RESPONSE_TIMEOUT.getIntNoCheck(properties);
}
/**
*
* @param sslResponseTimeout ssl response timeout
* @see PGProperty#SSL_RESPONSE_TIMEOUT
*/
public void setSslResponseTimeout(int sslResponseTimeout) {
PGProperty.SSL_RESPONSE_TIMEOUT.set(properties, sslResponseTimeout);
}
/**
* @return protocol version
* @see PGProperty#PROTOCOL_VERSION
*/
public int getProtocolVersion() {
if (!PGProperty.PROTOCOL_VERSION.isPresent(properties)) {
return 0;
} else {
return PGProperty.PROTOCOL_VERSION.getIntNoCheck(properties);
}
}
/**
* @param protocolVersion protocol version
* @see PGProperty#PROTOCOL_VERSION
*/
public void setProtocolVersion(int protocolVersion) {
if (protocolVersion == 0) {
PGProperty.PROTOCOL_VERSION.set(properties, null);
} else {
PGProperty.PROTOCOL_VERSION.set(properties, protocolVersion);
}
}
/**
* @return quoteReturningIdentifiers
* @see PGProperty#QUOTE_RETURNING_IDENTIFIERS
*/
public boolean getQuoteReturningIdentifiers() {
return PGProperty.QUOTE_RETURNING_IDENTIFIERS.getBoolean(properties);
}
/**
* @param quoteIdentifiers indicate whether to quote identifiers
* @see PGProperty#QUOTE_RETURNING_IDENTIFIERS
*/
public void setQuoteReturningIdentifiers(boolean quoteIdentifiers) {
PGProperty.QUOTE_RETURNING_IDENTIFIERS.set(properties, quoteIdentifiers);
}
/**
* @return receive buffer size
* @see PGProperty#RECEIVE_BUFFER_SIZE
*/
public int getReceiveBufferSize() {
return PGProperty.RECEIVE_BUFFER_SIZE.getIntNoCheck(properties);
}
/**
* @param nbytes receive buffer size
* @see PGProperty#RECEIVE_BUFFER_SIZE
*/
public void setReceiveBufferSize(int nbytes) {
PGProperty.RECEIVE_BUFFER_SIZE.set(properties, nbytes);
}
/**
* @return send buffer size
* @see PGProperty#SEND_BUFFER_SIZE
*/
public int getSendBufferSize() {
return PGProperty.SEND_BUFFER_SIZE.getIntNoCheck(properties);
}
/**
* @param nbytes send buffer size
* @see PGProperty#SEND_BUFFER_SIZE
*/
public void setSendBufferSize(int nbytes) {
PGProperty.SEND_BUFFER_SIZE.set(properties, nbytes);
}
/**
* @return send max buffer size
* @see PGProperty#MAX_SEND_BUFFER_SIZE
*/
public int getMaxSendBufferSize() {
return PGProperty.MAX_SEND_BUFFER_SIZE.getIntNoCheck(properties);
}
/**
* @param nbytes send max buffer size
* @see PGProperty#MAX_SEND_BUFFER_SIZE
*/
public void setMaxSendBufferSize(int nbytes) {
PGProperty.MAX_SEND_BUFFER_SIZE.set(properties, nbytes);
}
/**
* @param count prepare threshold
* @see PGProperty#PREPARE_THRESHOLD
*/
public void setPrepareThreshold(int count) {
PGProperty.PREPARE_THRESHOLD.set(properties, count);
}
/**
* @return prepare threshold
* @see PGProperty#PREPARE_THRESHOLD
*/
public int getPrepareThreshold() {
return PGProperty.PREPARE_THRESHOLD.getIntNoCheck(properties);
}
/**
* @return prepared statement cache size (number of statements per connection)
* @see PGProperty#PREPARED_STATEMENT_CACHE_QUERIES
*/
public int getPreparedStatementCacheQueries() {
return PGProperty.PREPARED_STATEMENT_CACHE_QUERIES.getIntNoCheck(properties);
}
/**
* @param cacheSize prepared statement cache size (number of statements per connection)
* @see PGProperty#PREPARED_STATEMENT_CACHE_QUERIES
*/
public void setPreparedStatementCacheQueries(int cacheSize) {
PGProperty.PREPARED_STATEMENT_CACHE_QUERIES.set(properties, cacheSize);
}
/**
* @return prepared statement cache size (number of megabytes per connection)
* @see PGProperty#PREPARED_STATEMENT_CACHE_SIZE_MIB
*/
public int getPreparedStatementCacheSizeMiB() {
return PGProperty.PREPARED_STATEMENT_CACHE_SIZE_MIB.getIntNoCheck(properties);
}
/**
* @param cacheSize statement cache size (number of megabytes per connection)
* @see PGProperty#PREPARED_STATEMENT_CACHE_SIZE_MIB
*/
public void setPreparedStatementCacheSizeMiB(int cacheSize) {
PGProperty.PREPARED_STATEMENT_CACHE_SIZE_MIB.set(properties, cacheSize);
}
/**
* @return database metadata cache fields size (number of fields cached per connection)
* @see PGProperty#DATABASE_METADATA_CACHE_FIELDS
*/
public int getDatabaseMetadataCacheFields() {
return PGProperty.DATABASE_METADATA_CACHE_FIELDS.getIntNoCheck(properties);
}
/**
* @param cacheSize database metadata cache fields size (number of fields cached per connection)
* @see PGProperty#DATABASE_METADATA_CACHE_FIELDS
*/
public void setDatabaseMetadataCacheFields(int cacheSize) {
PGProperty.DATABASE_METADATA_CACHE_FIELDS.set(properties, cacheSize);
}
/**
* @return database metadata cache fields size (number of megabytes per connection)
* @see PGProperty#DATABASE_METADATA_CACHE_FIELDS_MIB
*/
public int getDatabaseMetadataCacheFieldsMiB() {
return PGProperty.DATABASE_METADATA_CACHE_FIELDS_MIB.getIntNoCheck(properties);
}
/**
* @param cacheSize database metadata cache fields size (number of megabytes per connection)
* @see PGProperty#DATABASE_METADATA_CACHE_FIELDS_MIB
*/
public void setDatabaseMetadataCacheFieldsMiB(int cacheSize) {
PGProperty.DATABASE_METADATA_CACHE_FIELDS_MIB.set(properties, cacheSize);
}
/**
* @param fetchSize default fetch size
* @see PGProperty#DEFAULT_ROW_FETCH_SIZE
*/
public void setDefaultRowFetchSize(int fetchSize) {
PGProperty.DEFAULT_ROW_FETCH_SIZE.set(properties, fetchSize);
}
/**
* @return default fetch size
* @see PGProperty#DEFAULT_ROW_FETCH_SIZE
*/
public int getDefaultRowFetchSize() {
return PGProperty.DEFAULT_ROW_FETCH_SIZE.getIntNoCheck(properties);
}
/**
* @param unknownLength unknown length
* @see PGProperty#UNKNOWN_LENGTH
*/
public void setUnknownLength(int unknownLength) {
PGProperty.UNKNOWN_LENGTH.set(properties, unknownLength);
}
/**
* @return unknown length
* @see PGProperty#UNKNOWN_LENGTH
*/
public int getUnknownLength() {
return PGProperty.UNKNOWN_LENGTH.getIntNoCheck(properties);
}
/**
* @param seconds socket timeout
* @see PGProperty#SOCKET_TIMEOUT
*/
public void setSocketTimeout(int seconds) {
PGProperty.SOCKET_TIMEOUT.set(properties, seconds);
}
/**
* @return socket timeout
* @see PGProperty#SOCKET_TIMEOUT
*/
public int getSocketTimeout() {
return PGProperty.SOCKET_TIMEOUT.getIntNoCheck(properties);
}
/**
* @param seconds timeout that is used for sending cancel command
* @see PGProperty#CANCEL_SIGNAL_TIMEOUT
*/
public void setCancelSignalTimeout(int seconds) {
PGProperty.CANCEL_SIGNAL_TIMEOUT.set(properties, seconds);
}
/**
* @return timeout that is used for sending cancel command in seconds
* @see PGProperty#CANCEL_SIGNAL_TIMEOUT
*/
public int getCancelSignalTimeout() {
return PGProperty.CANCEL_SIGNAL_TIMEOUT.getIntNoCheck(properties);
}
/**
* @param enabled if SSL is enabled
* @see PGProperty#SSL
*/
public void setSsl(boolean enabled) {
if (enabled) {
PGProperty.SSL.set(properties, true);
} else {
PGProperty.SSL.set(properties, false);
}
}
/**
* @return true if SSL is enabled
* @see PGProperty#SSL
*/
public boolean getSsl() {
// "true" if "ssl" is set but empty
return PGProperty.SSL.getBoolean(properties) || "".equals(PGProperty.SSL.getOrDefault(properties));
}
/**
* @param classname SSL factory class name
* @see PGProperty#SSL_FACTORY
*/
public void setSslfactory(String classname) {
PGProperty.SSL_FACTORY.set(properties, classname);
}
/**
* @return SSL factory class name
* @see PGProperty#SSL_FACTORY
*/
public /* @Nullable */ String getSslfactory() {
return PGProperty.SSL_FACTORY.getOrDefault(properties);
}
/**
* @return SSL mode
* @see PGProperty#SSL_MODE
*/
public /* @Nullable */ String getSslMode() {
return PGProperty.SSL_MODE.getOrDefault(properties);
}
/**
* @param mode SSL mode
* @see PGProperty#SSL_MODE
*/
public void setSslMode(/* @Nullable */ String mode) {
PGProperty.SSL_MODE.set(properties, mode);
}
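// Illustrative sketch, not part of the driver source: enabling certificate verification through
// these properties on a concrete DataSource (the certificate path is a placeholder).
//
//   PGSimpleDataSource ds = new PGSimpleDataSource();
//   ds.setSsl(true);
//   ds.setSslMode("verify-full");
//   ds.setSslRootCert("/path/to/root.crt");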
/**
* @return argument forwarded to the SSL factory
* @see PGProperty#SSL_FACTORY_ARG
*/
@SuppressWarnings("deprecation")
public /* @Nullable */ String getSslFactoryArg() {
return PGProperty.SSL_FACTORY_ARG.getOrDefault(properties);
}
/**
* @param arg argument forwarded to SSL factory
* @see PGProperty#SSL_FACTORY_ARG
*/
@SuppressWarnings("deprecation")
public void setSslFactoryArg(/* @Nullable */ String arg) {
PGProperty.SSL_FACTORY_ARG.set(properties, arg);
}
/**
* @return SSL hostname verifier class name
* @see PGProperty#SSL_HOSTNAME_VERIFIER
*/
public /* @Nullable */ String getSslHostnameVerifier() {
return PGProperty.SSL_HOSTNAME_VERIFIER.getOrDefault(properties);
}
/**
* @param className SSL hostname verifier
* @see PGProperty#SSL_HOSTNAME_VERIFIER
*/
public void setSslHostnameVerifier(/* @Nullable */ String className) {
PGProperty.SSL_HOSTNAME_VERIFIER.set(properties, className);
}
/**
* @return SSL certificate file
* @see PGProperty#SSL_CERT
*/
public /* @Nullable */ String getSslCert() {
return PGProperty.SSL_CERT.getOrDefault(properties);
}
/**
* @param file SSL certificate
* @see PGProperty#SSL_CERT
*/
public void setSslCert(/* @Nullable */ String file) {
PGProperty.SSL_CERT.set(properties, file);
}
/**
* @return SSL key file
* @see PGProperty#SSL_KEY
*/
public /* @Nullable */ String getSslKey() {
return PGProperty.SSL_KEY.getOrDefault(properties);
}
/**
* @param file SSL key
* @see PGProperty#SSL_KEY
*/
public void setSslKey(/* @Nullable */ String file) {
PGProperty.SSL_KEY.set(properties, file);
}
/**
* @return SSL root certificate
* @see PGProperty#SSL_ROOT_CERT
*/
public /* @Nullable */ String getSslRootCert() {
return PGProperty.SSL_ROOT_CERT.getOrDefault(properties);
}
/**
* @param file SSL root certificate
* @see PGProperty#SSL_ROOT_CERT
*/
public void setSslRootCert(/* @Nullable */ String file) {
PGProperty.SSL_ROOT_CERT.set(properties, file);
}
/**
* @param sslNegotiation one of SSLNegotiation.POSTGRES or SSLNegotiation.DIRECT
* @see PGProperty#SSL_NEGOTIATION
*/
public void setSslNegotiation(/* @Nullable */ String sslNegotiation) {
PGProperty.SSL_NEGOTIATION.set(properties, sslNegotiation);
}
/**
* @return SSL Negotiation scheme
* @see PGProperty#SSL_NEGOTIATION
*/
public String getSslNegotiation() {
return castNonNull(PGProperty.SSL_NEGOTIATION.getOrDefault(properties));
}
/**
* @return SSL password
* @see PGProperty#SSL_PASSWORD
*/
public /* @Nullable */ String getSslPassword() {
return PGProperty.SSL_PASSWORD.getOrDefault(properties);
}
/**
* @param password SSL password
* @see PGProperty#SSL_PASSWORD
*/
public void setSslPassword(/* @Nullable */ String password) {
PGProperty.SSL_PASSWORD.set(properties, password);
}
/**
* @return SSL password callback
* @see PGProperty#SSL_PASSWORD_CALLBACK
*/
public /* @Nullable */ String getSslPasswordCallback() {
return PGProperty.SSL_PASSWORD_CALLBACK.getOrDefault(properties);
}
/**
* @param className SSL password callback class name
* @see PGProperty#SSL_PASSWORD_CALLBACK
*/
public void setSslPasswordCallback(/* @Nullable */ String className) {
PGProperty.SSL_PASSWORD_CALLBACK.set(properties, className);
}
/**
* @param applicationName application name
* @see PGProperty#APPLICATION_NAME
*/
public void setApplicationName(/* @Nullable */ String applicationName) {
PGProperty.APPLICATION_NAME.set(properties, applicationName);
}
/**
* @return application name
* @see PGProperty#APPLICATION_NAME
*/
public String getApplicationName() {
return castNonNull(PGProperty.APPLICATION_NAME.getOrDefault(properties));
}
/**
* @param targetServerType target server type
* @see PGProperty#TARGET_SERVER_TYPE
*/
public void setTargetServerType(/* @Nullable */ String targetServerType) {
PGProperty.TARGET_SERVER_TYPE.set(properties, targetServerType);
}
/**
* @return target server type
* @see PGProperty#TARGET_SERVER_TYPE
*/
public String getTargetServerType() {
return castNonNull(PGProperty.TARGET_SERVER_TYPE.getOrDefault(properties));
}
/**
* @param loadBalanceHosts load balance hosts
* @see PGProperty#LOAD_BALANCE_HOSTS
*/
public void setLoadBalanceHosts(boolean loadBalanceHosts) {
PGProperty.LOAD_BALANCE_HOSTS.set(properties, loadBalanceHosts);
}
/**
* @return load balance hosts
* @see PGProperty#LOAD_BALANCE_HOSTS
*/
public boolean getLoadBalanceHosts() {
return PGProperty.LOAD_BALANCE_HOSTS.isPresent(properties);
}
/**
* @param hostRecheckSeconds host recheck seconds
* @see PGProperty#HOST_RECHECK_SECONDS
*/
public void setHostRecheckSeconds(int hostRecheckSeconds) {
PGProperty.HOST_RECHECK_SECONDS.set(properties, hostRecheckSeconds);
}
/**
* @return host recheck seconds
* @see PGProperty#HOST_RECHECK_SECONDS
*/
public int getHostRecheckSeconds() {
return PGProperty.HOST_RECHECK_SECONDS.getIntNoCheck(properties);
}
/**
* @param enabled if TCP keep alive should be enabled
* @see PGProperty#TCP_KEEP_ALIVE
*/
public void setTcpKeepAlive(boolean enabled) {
PGProperty.TCP_KEEP_ALIVE.set(properties, enabled);
}
/**
* @return true if TCP keep alive is enabled
* @see PGProperty#TCP_KEEP_ALIVE
*/
public boolean getTcpKeepAlive() {
return PGProperty.TCP_KEEP_ALIVE.getBoolean(properties);
}
/**
* @param enabled if TCP no delay should be enabled
* @see PGProperty#TCP_NO_DELAY
*/
public void setTcpNoDelay(boolean enabled) {
PGProperty.TCP_NO_DELAY.set(properties, enabled);
}
/**
* @return true if TCP no delay is enabled
* @see PGProperty#TCP_NO_DELAY
*/
public boolean getTcpNoDelay() {
return PGProperty.TCP_NO_DELAY.getBoolean(properties);
}
/**
* @param enabled if binary transfer should be enabled
* @see PGProperty#BINARY_TRANSFER
*/
public void setBinaryTransfer(boolean enabled) {
PGProperty.BINARY_TRANSFER.set(properties, enabled);
}
/**
* @return true if binary transfer is enabled
* @see PGProperty#BINARY_TRANSFER
*/
public boolean getBinaryTransfer() {
return PGProperty.BINARY_TRANSFER.getBoolean(properties);
}
/**
* @param oidList list of OIDs that are allowed to use binary transfer
* @see PGProperty#BINARY_TRANSFER_ENABLE
*/
public void setBinaryTransferEnable(/* @Nullable */ String oidList) {
PGProperty.BINARY_TRANSFER_ENABLE.set(properties, oidList);
}
/**
* @return list of OIDs that are allowed to use binary transfer
* @see PGProperty#BINARY_TRANSFER_ENABLE
*/
public String getBinaryTransferEnable() {
return castNonNull(PGProperty.BINARY_TRANSFER_ENABLE.getOrDefault(properties));
}
/**
* @param oidList list of OIDs that are not allowed to use binary transfer
* @see PGProperty#BINARY_TRANSFER_DISABLE
*/
public void setBinaryTransferDisable(/* @Nullable */ String oidList) {
PGProperty.BINARY_TRANSFER_DISABLE.set(properties, oidList);
}
/**
* @return list of OIDs that are not allowed to use binary transfer
* @see PGProperty#BINARY_TRANSFER_DISABLE
*/
public String getBinaryTransferDisable() {
return castNonNull(PGProperty.BINARY_TRANSFER_DISABLE.getOrDefault(properties));
}
/**
* @return string type
* @see PGProperty#STRING_TYPE
*/
public /* @Nullable */ String getStringType() {
return PGProperty.STRING_TYPE.getOrDefault(properties);
}
/**
* @param stringType string type
* @see PGProperty#STRING_TYPE
*/
public void setStringType(/* @Nullable */ String stringType) {
PGProperty.STRING_TYPE.set(properties, stringType);
}
/**
* @return true if column sanitizer is disabled
* @see PGProperty#DISABLE_COLUMN_SANITISER
*/
public boolean isColumnSanitiserDisabled() {
return PGProperty.DISABLE_COLUMN_SANITISER.getBoolean(properties);
}
/**
* @return true if column sanitizer is disabled
* @see PGProperty#DISABLE_COLUMN_SANITISER
*/
public boolean getDisableColumnSanitiser() {
return PGProperty.DISABLE_COLUMN_SANITISER.getBoolean(properties);
}
/**
* @param disableColumnSanitiser if column sanitizer should be disabled
* @see PGProperty#DISABLE_COLUMN_SANITISER
*/
public void setDisableColumnSanitiser(boolean disableColumnSanitiser) {
PGProperty.DISABLE_COLUMN_SANITISER.set(properties, disableColumnSanitiser);
}
/**
* @return current schema
* @see PGProperty#CURRENT_SCHEMA
*/
public /* @Nullable */ String getCurrentSchema() {
return PGProperty.CURRENT_SCHEMA.getOrDefault(properties);
}
/**
* @param currentSchema current schema
* @see PGProperty#CURRENT_SCHEMA
*/
public void setCurrentSchema(/* @Nullable */ String currentSchema) {
PGProperty.CURRENT_SCHEMA.set(properties, currentSchema);
}
/**
* @return true if connection is readonly
* @see PGProperty#READ_ONLY
*/
public boolean getReadOnly() {
return PGProperty.READ_ONLY.getBoolean(properties);
}
/**
* @param readOnly if connection should be readonly
* @see PGProperty#READ_ONLY
*/
public void setReadOnly(boolean readOnly) {
PGProperty.READ_ONLY.set(properties, readOnly);
}
/**
* @return The behavior when set read only
* @see PGProperty#READ_ONLY_MODE
*/
public String getReadOnlyMode() {
return castNonNull(PGProperty.READ_ONLY_MODE.getOrDefault(properties));
}
/**
* @param mode the behavior when set read only
* @see PGProperty#READ_ONLY_MODE
*/
public void setReadOnlyMode(/* @Nullable */ String mode) {
PGProperty.READ_ONLY_MODE.set(properties, mode);
}
/**
* @return true if driver should log unclosed connections
* @see PGProperty#LOG_UNCLOSED_CONNECTIONS
*/
public boolean getLogUnclosedConnections() {
return PGProperty.LOG_UNCLOSED_CONNECTIONS.getBoolean(properties);
}
/**
* @param enabled true if driver should log unclosed connections
* @see PGProperty#LOG_UNCLOSED_CONNECTIONS
*/
public void setLogUnclosedConnections(boolean enabled) {
PGProperty.LOG_UNCLOSED_CONNECTIONS.set(properties, enabled);
}
/**
* @return true if the driver should include detail in server error messages
* @see PGProperty#LOG_SERVER_ERROR_DETAIL
*/
public boolean getLogServerErrorDetail() {
return PGProperty.LOG_SERVER_ERROR_DETAIL.getBoolean(properties);
}
/**
* @param enabled true if driver should include detail in server error messages
* @see PGProperty#LOG_SERVER_ERROR_DETAIL
*/
public void setLogServerErrorDetail(boolean enabled) {
PGProperty.LOG_SERVER_ERROR_DETAIL.set(properties, enabled);
}
/**
* @return assumed minimal server version
* @see PGProperty#ASSUME_MIN_SERVER_VERSION
*/
public /* @Nullable */ String getAssumeMinServerVersion() {
return PGProperty.ASSUME_MIN_SERVER_VERSION.getOrDefault(properties);
}
/**
* @param minVersion assumed minimal server version
* @see PGProperty#ASSUME_MIN_SERVER_VERSION
*/
public void setAssumeMinServerVersion(/* @Nullable */ String minVersion) {
PGProperty.ASSUME_MIN_SERVER_VERSION.set(properties, minVersion);
}
/**
* This is important in pool-by-transaction scenarios in order to make sure that all the statements
* reach the same connection that is being initialized. If set, the startup parameters are grouped
* into a single transaction.
* @return whether to group startup parameters or not
* @see PGProperty#GROUP_STARTUP_PARAMETERS
* @deprecated since we can send the startup parameters as a multistatement transaction
*/
@Deprecated
public boolean getGroupStartupParameters() {
return PGProperty.GROUP_STARTUP_PARAMETERS.getBoolean(properties);
}
/**
*
* @param groupStartupParameters whether to group startup parameters in a transaction or not
* @see PGProperty#GROUP_STARTUP_PARAMETERS
* @deprecated since we can send the startup parameters as a multistatement transaction
*/
@Deprecated
public void setGroupStartupParameters(boolean groupStartupParameters) {
PGProperty.GROUP_STARTUP_PARAMETERS.set(properties, groupStartupParameters);
}
/**
* @return JAAS application name
* @see PGProperty#JAAS_APPLICATION_NAME
*/
public /* @Nullable */ String getJaasApplicationName() {
return PGProperty.JAAS_APPLICATION_NAME.getOrDefault(properties);
}
/**
* @param name JAAS application name
* @see PGProperty#JAAS_APPLICATION_NAME
*/
public void setJaasApplicationName(/* @Nullable */ String name) {
PGProperty.JAAS_APPLICATION_NAME.set(properties, name);
}
/**
* @return true if a JAAS login should be performed before GSS authentication
* @see PGProperty#JAAS_LOGIN
*/
public boolean getJaasLogin() {
return PGProperty.JAAS_LOGIN.getBoolean(properties);
}
/**
* @param doLogin true to perform a JAAS login before GSS authentication
* @see PGProperty#JAAS_LOGIN
*/
public void setJaasLogin(boolean doLogin) {
PGProperty.JAAS_LOGIN.set(properties, doLogin);
}
/**
* @return true if using default GSS credentials
* @see PGProperty#GSS_USE_DEFAULT_CREDS
*/
public boolean getGssUseDefaultCreds() {
return PGProperty.GSS_USE_DEFAULT_CREDS.getBoolean(properties);
}
/**
* @param gssUseDefaultCreds true if using default GSS credentials
* @see PGProperty#GSS_USE_DEFAULT_CREDS
*/
public void setGssUseDefaultCreds(boolean gssUseDefaultCreds) {
PGProperty.GSS_USE_DEFAULT_CREDS.set(properties, gssUseDefaultCreds);
}
/**
* @return Kerberos server name
* @see PGProperty#KERBEROS_SERVER_NAME
*/
public /* @Nullable */ String getKerberosServerName() {
return PGProperty.KERBEROS_SERVER_NAME.getOrDefault(properties);
}
/**
* @param serverName Kerberos server name
* @see PGProperty#KERBEROS_SERVER_NAME
*/
public void setKerberosServerName(/* @Nullable */ String serverName) {
PGProperty.KERBEROS_SERVER_NAME.set(properties, serverName);
}
/**
* @return true if SPNEGO should be used
* @see PGProperty#USE_SPNEGO
*/
public boolean getUseSpNego() {
return PGProperty.USE_SPNEGO.getBoolean(properties);
}
/**
* @param use true to use SPNEGO
* @see PGProperty#USE_SPNEGO
*/
public void setUseSpNego(boolean use) {
PGProperty.USE_SPNEGO.set(properties, use);
}
/**
* @return GSS mode: auto, sspi, or gssapi
* @see PGProperty#GSS_LIB
*/
public /* @Nullable */ String getGssLib() {
return PGProperty.GSS_LIB.getOrDefault(properties);
}
/**
* @param lib GSS mode: auto, sspi, or gssapi
* @see PGProperty#GSS_LIB
*/
public void setGssLib(/* @Nullable */ String lib) {
PGProperty.GSS_LIB.set(properties, lib);
}
/**
*
* @return GSS encryption mode: disable, prefer or require
*/
public String getGssEncMode() {
return castNonNull(PGProperty.GSS_ENC_MODE.getOrDefault(properties));
}
/**
*
* @param mode encryption mode: disable, prefer or require
*/
public void setGssEncMode(/* @Nullable */ String mode) {
PGProperty.GSS_ENC_MODE.set(properties, mode);
}
/**
* @return SSPI service class
* @see PGProperty#SSPI_SERVICE_CLASS
*/
public /* @Nullable */ String getSspiServiceClass() {
return PGProperty.SSPI_SERVICE_CLASS.getOrDefault(properties);
}
/**
* @param serviceClass SSPI service class
* @see PGProperty#SSPI_SERVICE_CLASS
*/
public void setSspiServiceClass(/* @Nullable */ String serviceClass) {
PGProperty.SSPI_SERVICE_CLASS.set(properties, serviceClass);
}
/**
* @return if connection allows encoding changes
* @see PGProperty#ALLOW_ENCODING_CHANGES
*/
public boolean getAllowEncodingChanges() {
return PGProperty.ALLOW_ENCODING_CHANGES.getBoolean(properties);
}
/**
* @param allow if connection allows encoding changes
* @see PGProperty#ALLOW_ENCODING_CHANGES
*/
public void setAllowEncodingChanges(boolean allow) {
PGProperty.ALLOW_ENCODING_CHANGES.set(properties, allow);
}
/**
* @return socket factory class name
* @see PGProperty#SOCKET_FACTORY
*/
public /* @Nullable */ String getSocketFactory() {
return PGProperty.SOCKET_FACTORY.getOrDefault(properties);
}
/**
* @param socketFactoryClassName socket factory class name
* @see PGProperty#SOCKET_FACTORY
*/
public void setSocketFactory(/* @Nullable */ String socketFactoryClassName) {
PGProperty.SOCKET_FACTORY.set(properties, socketFactoryClassName);
}
/**
* @return socket factory argument
* @see PGProperty#SOCKET_FACTORY_ARG
*/
@SuppressWarnings("deprecation")
public /* @Nullable */ String getSocketFactoryArg() {
return PGProperty.SOCKET_FACTORY_ARG.getOrDefault(properties);
}
/**
* @param socketFactoryArg socket factory argument
* @see PGProperty#SOCKET_FACTORY_ARG
*/
@SuppressWarnings("deprecation")
public void setSocketFactoryArg(/* @Nullable */ String socketFactoryArg) {
PGProperty.SOCKET_FACTORY_ARG.set(properties, socketFactoryArg);
}
/**
* @param replication set to 'database' for logical replication or 'true' for physical replication
* @see PGProperty#REPLICATION
*/
public void setReplication(/* @Nullable */ String replication) {
PGProperty.REPLICATION.set(properties, replication);
}
/**
* @return 'select', "callIfNoReturn', or 'call'
* @see PGProperty#ESCAPE_SYNTAX_CALL_MODE
*/
public String getEscapeSyntaxCallMode() {
return castNonNull(PGProperty.ESCAPE_SYNTAX_CALL_MODE.getOrDefault(properties));
}
/**
* @param callMode the call mode to use for JDBC escape call syntax
* @see PGProperty#ESCAPE_SYNTAX_CALL_MODE
*/
public void setEscapeSyntaxCallMode(/* @Nullable */ String callMode) {
PGProperty.ESCAPE_SYNTAX_CALL_MODE.set(properties, callMode);
}
/**
* @return null, 'database', or 'true'
* @see PGProperty#REPLICATION
*/
public /* @Nullable */ String getReplication() {
return PGProperty.REPLICATION.getOrDefault(properties);
}
/**
* @return the localSocketAddress
* @see PGProperty#LOCAL_SOCKET_ADDRESS
*/
public /* @Nullable */ String getLocalSocketAddress() {
return PGProperty.LOCAL_SOCKET_ADDRESS.getOrDefault(properties);
}
/**
* @param localSocketAddress local address to bind client side to
* @see PGProperty#LOCAL_SOCKET_ADDRESS
*/
public void setLocalSocketAddress(String localSocketAddress) {
PGProperty.LOCAL_SOCKET_ADDRESS.set(properties, localSocketAddress);
}
/**
* This property is no longer used by the driver and will be ignored.
* @return loggerLevel in properties
* @deprecated Configure via java.util.logging
*/
@Deprecated
public /* @Nullable */ String getLoggerLevel() {
return PGProperty.LOGGER_LEVEL.getOrDefault(properties);
}
/**
* This property is no longer used by the driver and will be ignored.
* @param loggerLevel loggerLevel to set, will be ignored
* @deprecated Configure via java.util.logging
*/
@Deprecated
public void setLoggerLevel(/* @Nullable */ String loggerLevel) {
PGProperty.LOGGER_LEVEL.set(properties, loggerLevel);
}
/**
* This property is no longer used by the driver and will be ignored.
* @return loggerFile in properties
* @deprecated Configure via java.util.logging
*/
@Deprecated
public /* @Nullable */ String getLoggerFile() {
ExpressionProperties exprProps = new ExpressionProperties(properties, System.getProperties());
return PGProperty.LOGGER_FILE.getOrDefault(exprProps);
}
/**
* This property is no longer used by the driver and will be ignored.
* @param loggerFile will be ignored
* @deprecated Configure via java.util.logging
*/
@Deprecated
public void setLoggerFile(/* @Nullable */ String loggerFile) {
PGProperty.LOGGER_FILE.set(properties, loggerFile);
}
/**
* @return Channel binding option
* @see PGProperty#CHANNEL_BINDING
*/
public /* @Nullable */ String getChannelBinding() {
return PGProperty.CHANNEL_BINDING.getOrDefault(properties);
}
/**
* @param channelBinding Channel binding option
* @see PGProperty#CHANNEL_BINDING
*/
public void setChannelBinding(/* @Nullable */ String channelBinding) {
PGProperty.CHANNEL_BINDING.set(properties, channelBinding);
}
/**
* Generates a {@link DriverManager} URL from the other properties supplied.
*
* @return {@link DriverManager} URL from the other properties supplied
*/
public String getUrl() {
StringBuilder url = new StringBuilder(100);
url.append("jdbc:postgresql://");
for (int i = 0; i < serverNames.length; i++) {
if (i > 0) {
url.append(",");
}
url.append(serverNames[i]);
if (portNumbers != null) {
if (serverNames.length != portNumbers.length) {
throw new IllegalArgumentException(
String.format("Invalid argument: number of port %s entries must equal number of serverNames %s",
Arrays.toString(portNumbers), Arrays.toString(serverNames)));
}
if (portNumbers.length > i && portNumbers[i] != 0) {
url.append(":").append(portNumbers[i]);
}
}
}
url.append("/");
if (databaseName != null) {
url.append(URLCoder.encode(databaseName));
}
StringBuilder query = new StringBuilder(100);
for (PGProperty property : PGProperty.values()) {
if (property.isPresent(properties)) {
if (query.length() != 0) {
query.append("&");
}
query.append(property.getName());
query.append("=");
String value = castNonNull(property.getOrDefault(properties));
query.append(URLCoder.encode(value));
}
}
if (query.length() > 0) {
url.append("?");
url.append(query);
}
return url.toString();
}
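  // Illustrative sketch (not part of the driver source; values are hypothetical).
  // Configuring a PGSimpleDataSource 'ds' (which extends BaseDataSource) like this:
  //   PGSimpleDataSource ds = new PGSimpleDataSource();
  //   ds.setServerNames(new String[]{"db1", "db2"});
  //   ds.setPortNumbers(new int[]{5432, 5433});
  //   ds.setDatabaseName("app");
  // makes getUrl() return something along the lines of
  //   jdbc:postgresql://db1:5432,db2:5433/app?<name>=<value>&...
  // where the query string lists every PGProperty explicitly present in 'properties'.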
/**
* Generates a {@link DriverManager} URL from the other properties supplied.
*
* @return {@link DriverManager} URL from the other properties supplied
*/
public String getURL() {
return getUrl();
}
/**
* Sets properties from a {@link DriverManager} URL.
*
* @param url properties to set
*/
public void setUrl(String url) {
Properties p = Driver.parseURL(url, null);
if (p == null) {
throw new IllegalArgumentException("URL invalid " + url);
}
for (PGProperty property : PGProperty.values()) {
if (!this.properties.containsKey(property.getName())) {
setProperty(property, property.getOrDefault(p));
}
}
}
/**
* Sets properties from a {@link DriverManager} URL.
* Added to follow convention used in other DBMS.
*
* @param url properties to set
*/
public void setURL(String url) {
setUrl(url);
}
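  // Illustrative sketch (not part of the driver source; the URL is hypothetical).
  // Given a PGSimpleDataSource 'ds':
  //   ds.setUrl("jdbc:postgresql://db1:5432/app?ssl=true&connectTimeout=10");
  // parses the URL via Driver.parseURL and copies each PGProperty that is not already set on
  // this DataSource, so explicit setter calls made beforehand take precedence.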
/**
*
* @return the class name to use for the Authentication Plugin.
* This can be null, in which case the default password authentication plugin will be used.
*/
public /* @Nullable */ String getAuthenticationPluginClassName() {
return PGProperty.AUTHENTICATION_PLUGIN_CLASS_NAME.getOrDefault(properties);
}
/**
*
* @param className name of a class which implements {@link org.postgresql.plugin.AuthenticationPlugin}
* This class will be used to get the encoded bytes to be sent to the server as the
* password to authenticate the user.
*
*/
public void setAuthenticationPluginClassName(String className) {
PGProperty.AUTHENTICATION_PLUGIN_CLASS_NAME.set(properties, className);
}
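  // Illustrative sketch (not part of the driver source; the class name is hypothetical).
  // Given a PGSimpleDataSource 'ds':
  //   ds.setAuthenticationPluginClassName("com.example.MyAuthPlugin");
  // The named class must implement org.postgresql.plugin.AuthenticationPlugin; when the
  // property is left unset (null), default password authentication is used, as noted above.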
public /* @Nullable */ String getProperty(String name) throws SQLException {
PGProperty pgProperty = PGProperty.forName(name);
if (pgProperty != null) {
return getProperty(pgProperty);
} else {
throw new PSQLException(GT.tr("Unsupported property name: {0}", name),
PSQLState.INVALID_PARAMETER_VALUE);
}
}
public void setProperty(String name, /* @Nullable */ String value) throws SQLException {
PGProperty pgProperty = PGProperty.forName(name);
if (pgProperty != null) {
setProperty(pgProperty, value);
} else {
throw new PSQLException(GT.tr("Unsupported property name: {0}", name),
PSQLState.INVALID_PARAMETER_VALUE);
}
}
public /* @Nullable */ String getProperty(PGProperty property) {
return property.getOrDefault(properties);
}
public void setProperty(PGProperty property, /* @Nullable */ String value) {
if (value == null) {
// TODO: this is not consistent with PGProperty.PROPERTY.set(prop, null)
// PGProperty removes an entry for put(null) call, however here we just ignore null
return;
}
switch (property) {
case PG_HOST:
setServerNames(value.split(","));
break;
case PG_PORT:
String[] ps = value.split(",");
int[] ports = new int[ps.length];
for (int i = 0; i < ps.length; i++) {
try {
ports[i] = Integer.parseInt(ps[i]);
} catch (NumberFormatException e) {
ports[i] = 0;
}
}
setPortNumbers(ports);
break;
case PG_DBNAME:
setDatabaseName(value);
break;
case USER:
setUser(value);
break;
case PASSWORD:
setPassword(value);
break;
default:
properties.setProperty(property.getName(), value);
}
}
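  // Illustrative sketch (not part of the driver source; host and port values are hypothetical).
  // Given a PGSimpleDataSource 'ds':
  //   ds.setProperty(PGProperty.PG_HOST, "db1,db2");
  //   ds.setProperty(PGProperty.PG_PORT, "5432,5433");
  // splits the comma-separated lists into serverNames/portNumbers; a non-numeric port entry
  // falls back to 0, which getUrl() then omits from the generated URL.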
/**
* Generates a reference using the appropriate object factory.
*
* @return reference using the appropriate object factory
*/
protected Reference createReference() {
return new Reference(getClass().getName(), PGObjectFactory.class.getName(), null);
}
@Override
public Reference getReference() throws NamingException {
Reference ref = createReference();
StringBuilder serverString = new StringBuilder();
for (int i = 0; i < serverNames.length; i++) {
if (i > 0) {
serverString.append(",");
}
String serverName = serverNames[i];
serverString.append(serverName);
}
ref.add(new StringRefAddr("serverName", serverString.toString()));
StringBuilder portString = new StringBuilder();
for (int i = 0; i < portNumbers.length; i++) {
if (i > 0) {
portString.append(",");
}
int p = portNumbers[i];
portString.append(Integer.toString(p));
}
ref.add(new StringRefAddr("portNumber", portString.toString()));
ref.add(new StringRefAddr("databaseName", databaseName));
if (user != null) {
ref.add(new StringRefAddr("user", user));
}
if (password != null) {
ref.add(new StringRefAddr("password", password));
}
for (PGProperty property : PGProperty.values()) {
if (property.isPresent(properties)) {
String value = castNonNull(property.getOrDefault(properties));
ref.add(new StringRefAddr(property.getName(), value));
}
}
return ref;
}
public void setFromReference(Reference ref) {
databaseName = getReferenceProperty(ref, "databaseName");
String portNumberString = getReferenceProperty(ref, "portNumber");
if (portNumberString != null) {
String[] ps = portNumberString.split(",");
int[] ports = new int[ps.length];
for (int i = 0; i < ps.length; i++) {
try {
ports[i] = Integer.parseInt(ps[i]);
} catch (NumberFormatException e) {
ports[i] = 0;
}
}
setPortNumbers(ports);
} else {
setPortNumbers(null);
}
String serverName = castNonNull(getReferenceProperty(ref, "serverName"));
setServerNames(serverName.split(","));
for (PGProperty property : PGProperty.values()) {
setProperty(property, getReferenceProperty(ref, property.getName()));
}
}
private static /* @Nullable */ String getReferenceProperty(Reference ref, String propertyName) {
RefAddr addr = ref.get(propertyName);
if (addr == null) {
return null;
}
return (String) addr.getContent();
}
protected void writeBaseObject(ObjectOutputStream out) throws IOException {
out.writeObject(serverNames);
out.writeObject(databaseName);
out.writeObject(user);
out.writeObject(password);
out.writeObject(portNumbers);
out.writeObject(properties);
}
protected void readBaseObject(ObjectInputStream in) throws IOException, ClassNotFoundException {
serverNames = (String[]) in.readObject();
databaseName = (String) in.readObject();
user = (String) in.readObject();
password = (String) in.readObject();
portNumbers = (int[]) in.readObject();
properties = (Properties) in.readObject();
}
public void initializeFrom(BaseDataSource source) throws IOException, ClassNotFoundException {
ByteArrayOutputStream baos = new ByteArrayOutputStream();
ObjectOutputStream oos = new ObjectOutputStream(baos);
source.writeBaseObject(oos);
oos.close();
ByteArrayInputStream bais = new ByteArrayInputStream(baos.toByteArray());
ObjectInputStream ois = new ObjectInputStream(bais);
readBaseObject(ois);
}
/**
* @return preferred query execution mode
* @see PGProperty#PREFER_QUERY_MODE
*/
public PreferQueryMode getPreferQueryMode() {
return PreferQueryMode.of(castNonNull(PGProperty.PREFER_QUERY_MODE.getOrDefault(properties)));
}
/**
* @param preferQueryMode extended, simple, extendedForPrepared, or extendedCacheEverything
* @see PGProperty#PREFER_QUERY_MODE
*/
public void setPreferQueryMode(PreferQueryMode preferQueryMode) {
PGProperty.PREFER_QUERY_MODE.set(properties, preferQueryMode.value());
}
/**
* @return connection configuration regarding automatic per-query savepoints
* @see PGProperty#AUTOSAVE
*/
public AutoSave getAutosave() {
return AutoSave.of(castNonNull(PGProperty.AUTOSAVE.getOrDefault(properties)));
}
/**
* @param autoSave connection configuration regarding automatic per-query savepoints
* @see PGProperty#AUTOSAVE
*/
public void setAutosave(AutoSave autoSave) {
PGProperty.AUTOSAVE.set(properties, autoSave.value());
}
/**
* @see PGProperty#CLEANUP_SAVEPOINTS
*
* @return boolean indicating whether the property is set
*/
public boolean getCleanupSavepoints() {
return PGProperty.CLEANUP_SAVEPOINTS.getBoolean(properties);
}
/**
* @see PGProperty#CLEANUP_SAVEPOINTS
*
* @param cleanupSavepoints true to clean up savepoints after a successful transaction
*/
public void setCleanupSavepoints(boolean cleanupSavepoints) {
PGProperty.CLEANUP_SAVEPOINTS.set(properties, cleanupSavepoints);
}
/**
* @return boolean indicating property is enabled or not.
* @see PGProperty#REWRITE_BATCHED_INSERTS
*/
public boolean getReWriteBatchedInserts() {
return PGProperty.REWRITE_BATCHED_INSERTS.getBoolean(properties);
}
/**
* @param reWrite boolean value to set the property in the properties collection
* @see PGProperty#REWRITE_BATCHED_INSERTS
*/
public void setReWriteBatchedInserts(boolean reWrite) {
PGProperty.REWRITE_BATCHED_INSERTS.set(properties, reWrite);
}
/**
* @return boolean indicating property is enabled or not.
* @see PGProperty#HIDE_UNPRIVILEGED_OBJECTS
*/
public boolean getHideUnprivilegedObjects() {
return PGProperty.HIDE_UNPRIVILEGED_OBJECTS.getBoolean(properties);
}
/**
* @param hideUnprivileged boolean value to set the property in the properties collection
* @see PGProperty#HIDE_UNPRIVILEGED_OBJECTS
*/
public void setHideUnprivilegedObjects(boolean hideUnprivileged) {
PGProperty.HIDE_UNPRIVILEGED_OBJECTS.set(properties, hideUnprivileged);
}
public /* @Nullable */ String getMaxResultBuffer() {
return PGProperty.MAX_RESULT_BUFFER.getOrDefault(properties);
}
public void setMaxResultBuffer(/* @Nullable */ String maxResultBuffer) {
PGProperty.MAX_RESULT_BUFFER.set(properties, maxResultBuffer);
}
public boolean getAdaptiveFetch() {
return PGProperty.ADAPTIVE_FETCH.getBoolean(properties);
}
public void setAdaptiveFetch(boolean adaptiveFetch) {
PGProperty.ADAPTIVE_FETCH.set(properties, adaptiveFetch);
}
public int getAdaptiveFetchMaximum() {
return PGProperty.ADAPTIVE_FETCH_MAXIMUM.getIntNoCheck(properties);
}
public void setAdaptiveFetchMaximum(int adaptiveFetchMaximum) {
PGProperty.ADAPTIVE_FETCH_MAXIMUM.set(properties, adaptiveFetchMaximum);
}
public int getAdaptiveFetchMinimum() {
return PGProperty.ADAPTIVE_FETCH_MINIMUM.getIntNoCheck(properties);
}
public void setAdaptiveFetchMinimum(int adaptiveFetchMinimum) {
PGProperty.ADAPTIVE_FETCH_MINIMUM.set(properties, adaptiveFetchMinimum);
}
@Override
public Logger getParentLogger() {
return Logger.getLogger("org.postgresql");
}
public String getXmlFactoryFactory() {
return castNonNull(PGProperty.XML_FACTORY_FACTORY.getOrDefault(properties));
}
public void setXmlFactoryFactory(/* @Nullable */ String xmlFactoryFactory) {
PGProperty.XML_FACTORY_FACTORY.set(properties, xmlFactoryFactory);
}
/*
* Alias methods below; these help with ease of use for other database tools / frameworks
* which expect normal Java bean getters / setters to exist for the property names.
*/
public boolean isSsl() {
return getSsl();
}
public /* @Nullable */ String getSslfactoryarg() {
return getSslFactoryArg();
}
public void setSslfactoryarg(final /* @Nullable */ String arg) {
setSslFactoryArg(arg);
}
public /* @Nullable */ String getSslcert() {
return getSslCert();
}
public void setSslcert(final /* @Nullable */ String file) {
setSslCert(file);
}
public /* @Nullable */ String getSslmode() {
return getSslMode();
}
public void setSslmode(final /* @Nullable */ String mode) {
setSslMode(mode);
}
public /* @Nullable */ String getSslhostnameverifier() {
return getSslHostnameVerifier();
}
public void setSslhostnameverifier(final /* @Nullable */ String className) {
setSslHostnameVerifier(className);
}
public /* @Nullable */ String getSslkey() {
return getSslKey();
}
public void setSslkey(final /* @Nullable */ String file) {
setSslKey(file);
}
public /* @Nullable */ String getSslrootcert() {
return getSslRootCert();
}
public void setSslrootcert(final /* @Nullable */ String file) {
setSslRootCert(file);
}
public /* @Nullable */ String getSslpasswordcallback() {
return getSslPasswordCallback();
}
public void setSslpasswordcallback(final /* @Nullable */ String className) {
setSslPasswordCallback(className);
}
public /* @Nullable */ String getSslpassword() {
return getSslPassword();
}
public void setSslpassword(final String sslpassword) {
setSslPassword(sslpassword);
}
public int getRecvBufferSize() {
return getReceiveBufferSize();
}
public void setRecvBufferSize(final int nbytes) {
setReceiveBufferSize(nbytes);
}
public boolean isAllowEncodingChanges() {
return getAllowEncodingChanges();
}
public boolean isLogUnclosedConnections() {
return getLogUnclosedConnections();
}
public boolean isTcpKeepAlive() {
return getTcpKeepAlive();
}
public boolean isReadOnly() {
return getReadOnly();
}
public boolean isDisableColumnSanitiser() {
return getDisableColumnSanitiser();
}
public boolean isLoadBalanceHosts() {
return getLoadBalanceHosts();
}
public boolean isCleanupSavePoints() {
return getCleanupSavepoints();
}
public void setCleanupSavePoints(final boolean cleanupSavepoints) {
setCleanupSavepoints(cleanupSavepoints);
}
public boolean isReWriteBatchedInserts() {
return getReWriteBatchedInserts();
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/ds/common/PGObjectFactory.java 0100664 0000000 0000000 00000007310 00000250600 027366 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2004, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.ds.common;
import org.postgresql.ds.PGConnectionPoolDataSource;
import org.postgresql.ds.PGSimpleDataSource;
import org.postgresql.util.internal.Nullness;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.Name;
import javax.naming.RefAddr;
import javax.naming.Reference;
import javax.naming.spi.ObjectFactory;
/**
* Returns a DataSource-ish thing based on a JNDI reference. In the case of a SimpleDataSource or
* ConnectionPool, a new instance is created each time, as there is no connection state to maintain.
* In the case of a PoolingDataSource, the same DataSource will be returned for every invocation
* within the same VM/ClassLoader, so that the state of the connections in the pool will be
* consistent.
*
* @author Aaron Mulder (ammulder@chariotsolutions.com)
*/
public class PGObjectFactory implements ObjectFactory {
/**
* Dereferences a PostgreSQL DataSource. Other types of references are ignored.
*/
@Override
public /* @Nullable */ Object getObjectInstance(Object obj, Name name, Context nameCtx,
Hashtable<?, ?> environment) throws Exception {
Reference ref = (Reference) obj;
String className = ref.getClassName();
// Old names are here for those who still use them
if ("org.postgresql.ds.PGSimpleDataSource".equals(className)
|| "org.postgresql.jdbc2.optional.SimpleDataSource".equals(className)
|| "org.postgresql.jdbc3.Jdbc3SimpleDataSource".equals(className)) {
return loadSimpleDataSource(ref);
} else if ("org.postgresql.ds.PGConnectionPoolDataSource".equals(className)
|| "org.postgresql.jdbc2.optional.ConnectionPool".equals(className)
|| "org.postgresql.jdbc3.Jdbc3ConnectionPool".equals(className)) {
return loadConnectionPool(ref);
} else if ("org.postgresql.ds.PGPoolingDataSource".equals(className)
|| "org.postgresql.jdbc2.optional.PoolingDataSource".equals(className)
|| "org.postgresql.jdbc3.Jdbc3PoolingDataSource".equals(className)) {
return loadPoolingDataSource(ref);
} else {
return null;
}
}
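  // Illustrative sketch (not part of the driver source; the JNDI name is hypothetical).
  // A container that stored a PGSimpleDataSource reference under "jdbc/exampleDb" will have
  // this factory rebuild the DataSource on lookup:
  //   DataSource ds = (DataSource) new javax.naming.InitialContext().lookup("jdbc/exampleDb");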
@SuppressWarnings("deprecation")
private Object loadPoolingDataSource(Reference ref) {
// If DataSource exists, return it
String name = Nullness.castNonNull(getProperty(ref, "dataSourceName"));
org.postgresql.ds.PGPoolingDataSource pds =
org.postgresql.ds.PGPoolingDataSource.getDataSource(name);
if (pds != null) {
return pds;
}
// Otherwise, create a new one
pds = new org.postgresql.ds.PGPoolingDataSource();
pds.setDataSourceName(name);
loadBaseDataSource(pds, ref);
String min = getProperty(ref, "initialConnections");
if (min != null) {
pds.setInitialConnections(Integer.parseInt(min));
}
String max = getProperty(ref, "maxConnections");
if (max != null) {
pds.setMaxConnections(Integer.parseInt(max));
}
return pds;
}
private Object loadSimpleDataSource(Reference ref) {
PGSimpleDataSource ds = new PGSimpleDataSource();
return loadBaseDataSource(ds, ref);
}
private Object loadConnectionPool(Reference ref) {
PGConnectionPoolDataSource cp = new PGConnectionPoolDataSource();
return loadBaseDataSource(cp, ref);
}
protected Object loadBaseDataSource(BaseDataSource ds, Reference ref) {
ds.setFromReference(ref);
return ds;
}
protected /* @Nullable */ String getProperty(Reference ref, String s) {
RefAddr addr = ref.get(s);
if (addr == null) {
return null;
}
return (String) addr.getContent();
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/fastpath/ 0040775 0000000 0000000 00000000000 00000250600 023454 5 ustar 00 0000000 0000000 postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/fastpath/Fastpath.java 0100664 0000000 0000000 00000026426 00000250600 026100 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2003, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.fastpath;
import static org.postgresql.util.internal.Nullness.castNonNull;
import org.postgresql.core.BaseConnection;
import org.postgresql.core.ParameterList;
import org.postgresql.core.QueryExecutor;
import org.postgresql.util.ByteConverter;
import org.postgresql.util.GT;
import org.postgresql.util.PSQLException;
import org.postgresql.util.PSQLState;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.HashMap;
import java.util.Map;
import java.util.logging.Level;
/**
* This class implements the Fastpath api.
*
* This is a means of executing functions embedded in the backend from within a java application.
*
* It is based around the file src/interfaces/libpq/fe-exec.c
*/
public class Fastpath {
// Java passes oids around as longs, but in the backend
// it's an unsigned int, so we use this to make the conversion
// of long -> signed int which the backend interprets as unsigned.
private static final long NUM_OIDS = 4294967296L; // 2^32
// This maps the function names to their ids (possibly unique just
// to a connection).
private final Map<String, Integer> func = new HashMap<>();
private final QueryExecutor executor;
private final BaseConnection connection;
/**
* Initialises the fastpath system.
*
* @param conn BaseConnection to attach to
*/
public Fastpath(BaseConnection conn) {
this.connection = conn;
this.executor = conn.getQueryExecutor();
}
/**
* Send a function call to the PostgreSQL backend.
*
* @param fnId Function id
* @param resultType True if the result is a numeric (Integer or Long)
* @param args FastpathArguments to pass to fastpath
* @return null if no data, Integer if an integer result, Long if a long result, or byte[]
* otherwise
* @throws SQLException if a database-access error occurs.
* @deprecated please use {@link #fastpath(int, FastpathArg[])}
*/
@Deprecated
public /* @Nullable */ Object fastpath(int fnId, boolean resultType, FastpathArg[] args)
throws SQLException {
// Run it.
byte[] returnValue = fastpath(fnId, args);
// Interpret results.
if (!resultType || returnValue == null) {
return returnValue;
}
if (returnValue.length == 4) {
return ByteConverter.int4(returnValue, 0);
} else if (returnValue.length == 8) {
return ByteConverter.int8(returnValue, 0);
} else {
throw new PSQLException(
GT.tr("Fastpath call {0} - No result was returned and we expected a numeric.", fnId),
PSQLState.NO_DATA);
}
}
/**
* Send a function call to the PostgreSQL backend.
*
* @param fnId Function id
* @param args FastpathArguments to pass to fastpath
* @return null if no data, byte[] otherwise
* @throws SQLException if a database-access error occurs.
*/
public byte /* @Nullable */ [] fastpath(int fnId, FastpathArg[] args) throws SQLException {
// Turn fastpath array into a parameter list.
@SuppressWarnings("deprecation")
ParameterList params = executor.createFastpathParameters(args.length);
for (int i = 0; i < args.length; i++) {
args[i].populateParameter(params, i + 1);
}
// Run it.
@SuppressWarnings("deprecation")
byte[] result = executor.fastpathCall(fnId, params, connection.getAutoCommit());
return result;
}
/**
* @param name Function name
* @param resulttype True if the result is a numeric (Integer or Long)
* @param args FastpathArguments to pass to fastpath
* @return null if no data, Integer if an integer result, Long if a long result, or byte[]
* otherwise
* @throws SQLException if something goes wrong
* @see #fastpath(int, FastpathArg[])
* @see #fastpath(String, FastpathArg[])
* @deprecated Use {@link #getData(String, FastpathArg[])} if you expect a binary result, or one
* of {@link #getInteger(String, FastpathArg[])} or
* {@link #getLong(String, FastpathArg[])} if you expect a numeric one
*/
@Deprecated
public /* @Nullable */ Object fastpath(String name, boolean resulttype, FastpathArg[] args)
throws SQLException {
connection.getLogger().log(Level.FINEST, "Fastpath: calling {0}", name);
return fastpath(getID(name), resulttype, args);
}
/**
* Send a function call to the PostgreSQL backend by name.
*
* Note: the mapping from the procedure name to the function id needs to exist, usually created
* by an earlier call to addFunction().
*
* This is the preferred method to call, as function ids can change between versions of the
* backend.
*
* For an example of how this works, refer to org.postgresql.largeobject.LargeObject
*
* @param name Function name
* @param args FastpathArguments to pass to fastpath
* @return null if no data, byte[] otherwise
* @throws SQLException if name is unknown or if a database-access error occurs.
* @see org.postgresql.largeobject.LargeObject
*/
public byte /* @Nullable */ [] fastpath(String name, FastpathArg[] args) throws SQLException {
connection.getLogger().log(Level.FINEST, "Fastpath: calling {0}", name);
return fastpath(getID(name), args);
}
/**
* This convenience method assumes that the return value is an integer.
*
* @param name Function name
* @param args Function arguments
* @return integer result
* @throws SQLException if a database-access error occurs or no result
*/
public int getInteger(String name, FastpathArg[] args) throws SQLException {
byte[] returnValue = fastpath(name, args);
if (returnValue == null) {
throw new PSQLException(
GT.tr("Fastpath call {0} - No result was returned and we expected an integer.", name),
PSQLState.NO_DATA);
}
if (returnValue.length == 4) {
return ByteConverter.int4(returnValue, 0);
} else {
throw new PSQLException(GT.tr(
"Fastpath call {0} - No result was returned or wrong size while expecting an integer.",
name), PSQLState.NO_DATA);
}
}
/**
* This convenience method assumes that the return value is a long (bigint).
*
* @param name Function name
* @param args Function arguments
* @return long result
* @throws SQLException if a database-access error occurs or no result
*/
public long getLong(String name, FastpathArg[] args) throws SQLException {
byte[] returnValue = fastpath(name, args);
if (returnValue == null) {
throw new PSQLException(
GT.tr("Fastpath call {0} - No result was returned and we expected a long.", name),
PSQLState.NO_DATA);
}
if (returnValue.length == 8) {
return ByteConverter.int8(returnValue, 0);
} else {
throw new PSQLException(
GT.tr("Fastpath call {0} - No result was returned or wrong size while expecting a long.",
name),
PSQLState.NO_DATA);
}
}
/**
* This convenience method assumes that the return value is an oid.
*
* @param name Function name
* @param args Function arguments
* @return oid of the given call
* @throws SQLException if a database-access error occurs or no result
*/
public long getOID(String name, FastpathArg[] args) throws SQLException {
long oid = getInteger(name, args);
if (oid < 0) {
oid += NUM_OIDS;
}
return oid;
}
/**
* This convenience method assumes that the return value is not an Integer.
*
* @param name Function name
* @param args Function arguments
* @return byte[] array containing result
* @throws SQLException if a database-access error occurs or no result
*/
public byte /* @Nullable */ [] getData(String name, FastpathArg[] args) throws SQLException {
return fastpath(name, args);
}
/**
* This adds a function to our lookup table.
*
* User code should use the addFunctions method, which is based upon a query, rather than hard
* coding the oid. The oid for a function is not guaranteed to remain static, even on different
* servers of the same version.
*
* @param name Function name
* @param fnid Function id
*/
public void addFunction(String name, int fnid) {
func.put(name, fnid);
}
/**
* This takes a ResultSet containing two columns. Column 1 contains the function name, Column 2
* the oid.
*
* It reads the entire ResultSet, loading the values into the function table.
*
* Remember to close() the ResultSet after calling this!
*
* Implementation note about function name lookups:
*
* PostgreSQL stores the function ids and their corresponding names in the pg_proc table. To
* speed things up locally, instead of querying each function from that table when required, a
* HashMap is used. Also, only the functions required are entered into this table, keeping
* connection times as fast as possible.
*
* The org.postgresql.largeobject.LargeObject class performs a query upon its startup, and passes
* the returned ResultSet to the addFunctions() method here.
*
* Once this has been done, the LargeObject api refers to the functions by name.
*
* Don't think that manually converting them to the oids will work. OK, they will for now, but
* they can change during development (there was some discussion about this for V7.0), so this is
* implemented to prevent any unwarranted headaches in the future.
*
* @param rs ResultSet
* @throws SQLException if a database-access error occurs.
* @see org.postgresql.largeobject.LargeObjectManager
*/
public void addFunctions(ResultSet rs) throws SQLException {
while (rs.next()) {
func.put(castNonNull(rs.getString(1)), rs.getInt(2));
}
}
/**
* This returns the function id associated by its name.
*
* If addFunction() or addFunctions() have not been called for this name, then an SQLException is
* thrown.
*
* @param name Function name to lookup
* @return Function ID for fastpath call
* @throws SQLException if the function is unknown.
*/
public int getID(String name) throws SQLException {
Integer id = func.get(name);
// maybe we could add a lookup to the database here, and store the result
// in our lookup table, throwing the exception if that fails.
// We must, however, ensure that if we do, any existing ResultSet is
// unaffected, otherwise we could break user code.
//
// so, until we know we can do this (needs testing, on the TODO list)
// for now, we throw the exception and do no lookups.
if (id == null) {
throw new PSQLException(GT.tr("The fastpath function {0} is unknown.", name),
PSQLState.UNEXPECTED_ERROR);
}
return id;
}
/**
* Creates a FastpathArg with an oid parameter. This is here instead of a constructor of
* FastpathArg because the constructor can't tell the difference between a long that's really an
* int8 and a long that's an oid.
*
* @param oid input oid
* @return FastpathArg with an oid parameter
*/
public static FastpathArg createOIDArg(long oid) {
if (oid > Integer.MAX_VALUE) {
oid -= NUM_OIDS;
}
return new FastpathArg((int) oid);
}
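  // Worked example (illustrative, not part of the driver source): the oid 3000000000 does not
  // fit in a signed int, so createOIDArg sends 3000000000 - 4294967296 = -1294967296 as the
  // 32-bit argument; getOID() reverses this by adding NUM_OIDS back when the int result is
  // negative, recovering 3000000000.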
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/fastpath/FastpathArg.java 0100664 0000000 0000000 00000006421 00000250600 026523 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2003, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.fastpath;
import org.postgresql.core.ParameterList;
import org.postgresql.util.ByteStreamWriter;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.nio.charset.Charset;
import java.sql.SQLException;
// Not a very clean mapping to the new QueryExecutor/ParameterList
// stuff, but it seems hard to support both v2 and v3 cleanly with
// the same model while retaining API compatibility. So I've just
// done it the ugly way..
/**
* Each fastpath call requires an array of arguments, the number and type dependent on the function
* being called.
*/
public class FastpathArg {
/**
* Encoded byte value of argument.
*/
private final byte /* @Nullable */ [] bytes;
private final int bytesStart;
private final int bytesLength;
static class ByteStreamWriterFastpathArg extends FastpathArg {
private final ByteStreamWriter writer;
ByteStreamWriterFastpathArg(ByteStreamWriter writer) {
super(null, 0, 0);
this.writer = writer;
}
@Override
void populateParameter(ParameterList params, int index) throws SQLException {
params.setBytea(index, writer);
}
}
/**
* Constructs an argument that consists of an integer value.
*
* @param value int value to set
*/
public FastpathArg(int value) {
bytes = new byte[4];
bytes[3] = (byte) (value);
bytes[2] = (byte) (value >> 8);
bytes[1] = (byte) (value >> 16);
bytes[0] = (byte) (value >> 24);
bytesStart = 0;
bytesLength = 4;
}
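  // Worked example (illustrative, not part of the driver source): new FastpathArg(0x01020304)
  // stores the value big-endian, i.e. bytes = {0x01, 0x02, 0x03, 0x04}, matching the network
  // byte order the backend expects for a 4-byte fastpath argument.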
/**
* Constructs an argument that consists of a long value.
*
* @param value long value to set
*/
public FastpathArg(long value) {
bytes = new byte[8];
bytes[7] = (byte) (value);
bytes[6] = (byte) (value >> 8);
bytes[5] = (byte) (value >> 16);
bytes[4] = (byte) (value >> 24);
bytes[3] = (byte) (value >> 32);
bytes[2] = (byte) (value >> 40);
bytes[1] = (byte) (value >> 48);
bytes[0] = (byte) (value >> 56);
bytesStart = 0;
bytesLength = 8;
}
/**
* Constructs an argument that consists of an array of bytes.
*
* @param bytes array to store
*/
public FastpathArg(byte[] bytes) {
this(bytes, 0, bytes.length);
}
/**
* Constructs an argument that consists of part of a byte array.
*
* @param buf source array
* @param off offset within array
* @param len length of data to include
*/
public FastpathArg(byte /* @Nullable */ [] buf, int off, int len) {
this.bytes = buf;
this.bytesStart = off;
this.bytesLength = len;
}
/**
* Constructs an argument that consists of a String.
*
* @param s String to store
*/
public FastpathArg(String s) {
// Default charset is for backward compatibility
// It looks like we should use database connection encoding
this(s.getBytes(Charset.defaultCharset()));
}
public static FastpathArg of(ByteStreamWriter writer) {
return new ByteStreamWriterFastpathArg(writer);
}
void populateParameter(ParameterList params, int index) throws SQLException {
if (bytes == null) {
params.setNull(index, 0);
} else {
params.setBytea(index, bytes, bytesStart, bytesLength);
}
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/geometric/ 0040775 0000000 0000000 00000000000 00000250600 023620 5 ustar 00 0000000 0000000 postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/geometric/PGbox.java 0100664 0000000 0000000 00000012754 00000250600 025510 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2003, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.geometric;
import static org.postgresql.util.internal.Nullness.castNonNull;
import org.postgresql.util.GT;
import org.postgresql.util.PGBinaryObject;
import org.postgresql.util.PGobject;
import org.postgresql.util.PGtokenizer;
import org.postgresql.util.PSQLException;
import org.postgresql.util.PSQLState;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.io.Serializable;
import java.sql.SQLException;
/**
* This represents the box datatype within org.postgresql.
*/
public class PGbox extends PGobject implements PGBinaryObject, Serializable, Cloneable {
/**
* These are the two points.
*/
public PGpoint /* @Nullable */ [] point;
/**
* @param x1 first x coordinate
* @param y1 first y coordinate
* @param x2 second x coordinate
* @param y2 second y coordinate
*/
public PGbox(double x1, double y1, double x2, double y2) {
this(new PGpoint(x1, y1), new PGpoint(x2, y2));
}
/**
* @param p1 first point
* @param p2 second point
*/
public PGbox(PGpoint p1, PGpoint p2) {
this();
this.point = new PGpoint[]{p1, p2};
}
/**
* @param s Box definition in PostgreSQL syntax
* @throws SQLException if definition is invalid
*/
@SuppressWarnings("method.invocation")
public PGbox(String s) throws SQLException {
this();
setValue(s);
}
/**
* Required constructor.
*/
public PGbox() {
type = "box";
}
/**
* This method sets the value of this object. It should be overridden, but still called by
* subclasses.
*
* @param value a string representation of the value of the object
* @throws SQLException thrown if value is invalid for this type
*/
@Override
public void setValue(/* @Nullable */ String value) throws SQLException {
if (value == null) {
this.point = null;
return;
}
PGtokenizer t = new PGtokenizer(value, ',');
if (t.getSize() != 2) {
throw new PSQLException(
GT.tr("Conversion to type {0} failed: {1}.", type, value),
PSQLState.DATA_TYPE_MISMATCH);
}
PGpoint[] point = this.point;
if (point == null) {
this.point = point = new PGpoint[2];
}
point[0] = new PGpoint(t.getToken(0));
point[1] = new PGpoint(t.getToken(1));
}
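  // Illustrative sketch (not part of the driver source):
  //   PGbox b = new PGbox("(1.0,2.0),(3.0,4.0)");
  // yields point[0] = (1.0,2.0) and point[1] = (3.0,4.0); getValue() renders the same two
  // points back as "(1.0,2.0),(3.0,4.0)".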
/**
* @param b Definition of this point in PostgreSQL's binary syntax
*/
@Override
public void setByteValue(byte[] b, int offset) {
PGpoint[] point = this.point;
if (point == null) {
this.point = point = new PGpoint[2];
}
point[0] = new PGpoint();
point[0].setByteValue(b, offset);
point[1] = new PGpoint();
point[1].setByteValue(b, offset + point[0].lengthInBytes());
this.point = point;
}
/**
* @param obj Object to compare with
* @return true if the two boxes are identical
*/
@Override
public boolean equals(/* @Nullable */ Object obj) {
if (obj instanceof PGbox) {
PGbox p = (PGbox) obj;
// Same points.
PGpoint[] point = this.point;
PGpoint[] pPoint = p.point;
if (point == null) {
return pPoint == null;
} else if (pPoint == null) {
return false;
}
if (pPoint[0].equals(point[0]) && pPoint[1].equals(point[1])) {
return true;
}
// Points swapped.
if (pPoint[0].equals(point[1]) && pPoint[1].equals(point[0])) {
return true;
}
// Using the opposite two points of the box:
// (x1,y1),(x2,y2) -> (x1,y2),(x2,y1)
if (pPoint[0].x == point[0].x && pPoint[0].y == point[1].y
&& pPoint[1].x == point[1].x && pPoint[1].y == point[0].y) {
return true;
}
// Using the opposite two points of the box, and the points are swapped
// (x1,y1),(x2,y2) -> (x2,y1),(x1,y2)
if (pPoint[0].x == point[1].x && pPoint[0].y == point[0].y
&& pPoint[1].x == point[0].x && pPoint[1].y == point[1].y) {
return true;
}
}
return false;
}
@Override
public int hashCode() {
// This relies on the behaviour of point's hashcode being an exclusive-OR of
// its X and Y components; we end up with an exclusive-OR of the two X and
// two Y components, which is equal whenever equals() would return true
// since xor is commutative.
PGpoint[] point = this.point;
return point == null ? 0 : point[0].hashCode() ^ point[1].hashCode();
}
@Override
public Object clone() throws CloneNotSupportedException {
PGbox newPGbox = (PGbox) super.clone();
if (newPGbox.point != null) {
newPGbox.point = newPGbox.point.clone();
for (int i = 0; i < newPGbox.point.length; i++) {
if (newPGbox.point[i] != null) {
newPGbox.point[i] = (PGpoint) newPGbox.point[i].clone();
}
}
}
return newPGbox;
}
/**
* @return the PGbox in the syntax expected by org.postgresql
*/
@Override
public /* @Nullable */ String getValue() {
PGpoint[] point = this.point;
return point == null ? null : point[0].toString() + "," + point[1].toString();
}
@Override
public int lengthInBytes() {
PGpoint[] point = this.point;
if (point == null) {
return 0;
}
return point[0].lengthInBytes() + point[1].lengthInBytes();
}
@Override
public void toBytes(byte[] bytes, int offset) {
PGpoint[] point = castNonNull(this.point);
point[0].toBytes(bytes, offset);
point[1].toBytes(bytes, offset + point[0].lengthInBytes());
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/geometric/PGcircle.java 0100664 0000000 0000000 00000006753 00000250600 026163 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2003, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.geometric;
import org.postgresql.util.GT;
import org.postgresql.util.PGobject;
import org.postgresql.util.PGtokenizer;
import org.postgresql.util.PSQLException;
import org.postgresql.util.PSQLState;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.io.Serializable;
import java.sql.SQLException;
/**
* This represents org.postgresql's circle datatype, consisting of a point and a radius.
*/
public class PGcircle extends PGobject implements Serializable, Cloneable {
/**
* This is the center point.
*/
public /* @Nullable */ PGpoint center;
/**
* This is the radius.
*/
public double radius;
/**
* @param x coordinate of center
* @param y coordinate of center
* @param r radius of circle
*/
public PGcircle(double x, double y, double r) {
this(new PGpoint(x, y), r);
}
/**
* @param c PGpoint describing the circle's center
* @param r radius of circle
*/
public PGcircle(PGpoint c, double r) {
this();
this.center = c;
this.radius = r;
}
/**
* @param s definition of the circle in PostgreSQL's syntax.
* @throws SQLException on conversion failure
*/
@SuppressWarnings("method.invocation")
public PGcircle(String s) throws SQLException {
this();
setValue(s);
}
/**
* This constructor is used by the driver.
*/
public PGcircle() {
type = "circle";
}
/**
* @param s definition of the circle in PostgreSQL's syntax.
* @throws SQLException on conversion failure
*/
@Override
public void setValue(/* @Nullable */ String s) throws SQLException {
if (s == null) {
center = null;
return;
}
PGtokenizer t = new PGtokenizer(PGtokenizer.removeAngle(s), ',');
if (t.getSize() != 2) {
throw new PSQLException(GT.tr("Conversion to type {0} failed: {1}.", type, s),
PSQLState.DATA_TYPE_MISMATCH);
}
try {
center = new PGpoint(t.getToken(0));
radius = Double.parseDouble(t.getToken(1));
} catch (NumberFormatException e) {
throw new PSQLException(GT.tr("Conversion to type {0} failed: {1}.", type, s),
PSQLState.DATA_TYPE_MISMATCH, e);
}
}
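  // Illustrative sketch (not part of the driver source):
  //   PGcircle c = new PGcircle("<(1.0,2.0),3.0>");
  // yields center = (1.0,2.0) and radius = 3.0; getValue() renders it back as "<(1.0,2.0),3.0>".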
/**
* @param obj Object to compare with
* @return true if the two circles are identical
*/
@Override
public boolean equals(/* @Nullable */ Object obj) {
if (obj instanceof PGcircle) {
PGcircle p = (PGcircle) obj;
PGpoint center = this.center;
PGpoint pCenter = p.center;
if (center == null) {
return pCenter == null;
} else if (pCenter == null) {
return false;
}
return p.radius == radius && equals(pCenter, center);
}
return false;
}
@Override
public int hashCode() {
if (center == null) {
return 0;
}
long bits = Double.doubleToLongBits(radius);
int v = (int) (bits ^ (bits >>> 32));
v = v * 31 + center.hashCode();
return v;
}
@Override
public Object clone() throws CloneNotSupportedException {
PGcircle newPGcircle = (PGcircle) super.clone();
if (newPGcircle.center != null) {
newPGcircle.center = (PGpoint) newPGcircle.center.clone();
}
return newPGcircle;
}
/**
* @return the PGcircle in the syntax expected by org.postgresql
*/
@Override
public /* @Nullable */ String getValue() {
return center == null ? null : "<" + center + "," + radius + ">";
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/geometric/PGline.java 0100664 0000000 0000000 00000012365 00000250600 025645 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2003, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.geometric;
import org.postgresql.util.GT;
import org.postgresql.util.PGobject;
import org.postgresql.util.PGtokenizer;
import org.postgresql.util.PSQLException;
import org.postgresql.util.PSQLState;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.io.Serializable;
import java.sql.SQLException;
/**
* This implements a line represented by the linear equation Ax + By + C = 0.
**/
public class PGline extends PGobject implements Serializable, Cloneable {
/**
* Coefficient of x.
*/
public double a;
/**
* Coefficient of y.
*/
public double b;
/**
* Constant.
*/
public double c;
private boolean isNull;
/**
* @param a coefficient of x
* @param b coefficient of y
* @param c constant
*/
public PGline(double a, double b, double c) {
this();
this.a = a;
this.b = b;
this.c = c;
}
/**
* @param x1 coordinate for first point on the line
* @param y1 coordinate for first point on the line
* @param x2 coordinate for second point on the line
* @param y2 coordinate for second point on the line
*/
@SuppressWarnings("method.invocation")
public PGline(double x1, double y1, double x2, double y2) {
this();
setValue(x1, y1, x2, y2);
}
/**
* @param p1 first point on the line
* @param p2 second point on the line
*/
@SuppressWarnings("method.invocation")
public PGline(/* @Nullable */ PGpoint p1, /* @Nullable */ PGpoint p2) {
this();
setValue(p1, p2);
}
/**
* @param lseg Line segment which calls on this line.
*/
@SuppressWarnings("method.invocation")
public PGline(/* @Nullable */ PGlseg lseg) {
this();
if (lseg == null) {
isNull = true;
return;
}
PGpoint[] point = lseg.point;
if (point == null) {
isNull = true;
return;
}
setValue(point[0], point[1]);
}
private void setValue(/* @Nullable */ PGpoint p1, /* @Nullable */ PGpoint p2) {
if (p1 == null || p2 == null) {
isNull = true;
} else {
setValue(p1.x, p1.y, p2.x, p2.y);
}
}
private void setValue(double x1, double y1, double x2, double y2) {
if (x1 == x2) {
a = -1;
b = 0;
} else {
a = (y2 - y1) / (x2 - x1);
b = -1;
}
c = y1 - a * x1;
}
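  // Worked example (illustrative, not part of the driver source): for the points (0,1) and (2,5)
  // this computes a = (5 - 1) / (2 - 0) = 2, b = -1 and c = 1 - 2 * 0 = 1, i.e. the line
  // 2x - y + 1 = 0. Vertical lines (x1 == x2) are represented with a = -1, b = 0.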
/**
* @param s definition of the line in PostgreSQL's syntax.
* @throws SQLException on conversion failure
*/
@SuppressWarnings("method.invocation")
public PGline(String s) throws SQLException {
this();
setValue(s);
}
/**
* required by the driver.
*/
public PGline() {
type = "line";
}
/**
* @param s Definition of the line in PostgreSQL's syntax
* @throws SQLException on conversion failure
*/
@Override
public void setValue(/* @Nullable */ String s) throws SQLException {
isNull = s == null;
if (s == null) {
return;
}
if (s.trim().startsWith("{")) {
PGtokenizer t = new PGtokenizer(PGtokenizer.removeCurlyBrace(s), ',');
if (t.getSize() != 3) {
throw new PSQLException(GT.tr("Conversion to type {0} failed: {1}.", type, s),
PSQLState.DATA_TYPE_MISMATCH);
}
a = Double.parseDouble(t.getToken(0));
b = Double.parseDouble(t.getToken(1));
c = Double.parseDouble(t.getToken(2));
} else if (s.trim().startsWith("[")) {
PGtokenizer t = new PGtokenizer(PGtokenizer.removeBox(s), ',');
if (t.getSize() != 2) {
throw new PSQLException(GT.tr("Conversion to type {0} failed: {1}.", type, s),
PSQLState.DATA_TYPE_MISMATCH);
}
PGpoint point1 = new PGpoint(t.getToken(0));
PGpoint point2 = new PGpoint(t.getToken(1));
a = point2.x - point1.x;
b = point2.y - point1.y;
c = point1.y;
}
}
/**
* @param obj Object to compare with
* @return true if the two lines are identical
*/
@Override
public boolean equals(/* @Nullable */ Object obj) {
if (this == obj) {
return true;
}
if (obj == null || getClass() != obj.getClass()) {
return false;
}
if (!super.equals(obj)) {
return false;
}
PGline pGline = (PGline) obj;
if (isNull) {
return pGline.isNull;
} else if (pGline.isNull) {
return false;
}
return Double.compare(pGline.a, a) == 0
&& Double.compare(pGline.b, b) == 0
&& Double.compare(pGline.c, c) == 0;
}
@Override
public int hashCode() {
if (isNull) {
return 0;
}
int result = super.hashCode();
long temp;
temp = Double.doubleToLongBits(a);
result = 31 * result + (int) (temp ^ (temp >>> 32));
temp = Double.doubleToLongBits(b);
result = 31 * result + (int) (temp ^ (temp >>> 32));
temp = Double.doubleToLongBits(c);
result = 31 * result + (int) (temp ^ (temp >>> 32));
return result;
}
/**
* @return the PGline in the syntax expected by org.postgresql
*/
@Override
public /* @Nullable */ String getValue() {
return isNull ? null : "{" + a + "," + b + "," + c + "}";
}
@Override
public Object clone() throws CloneNotSupportedException {
// squid:S2157 "Cloneables" should implement "clone"
return super.clone();
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/geometric/PGlseg.java 0100664 0000000 0000000 00000007173 00000250600 025651 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2003, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.geometric;
import org.postgresql.util.GT;
import org.postgresql.util.PGobject;
import org.postgresql.util.PGtokenizer;
import org.postgresql.util.PSQLException;
import org.postgresql.util.PSQLState;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.io.Serializable;
import java.sql.SQLException;
/**
* This implements a lseg (line segment) consisting of two points.
*/
public class PGlseg extends PGobject implements Serializable, Cloneable {
/**
* These are the two points.
*/
public PGpoint /* @Nullable */ [] point;
/**
* @param x1 coordinate for first point
* @param y1 coordinate for first point
* @param x2 coordinate for second point
* @param y2 coordinate for second point
*/
public PGlseg(double x1, double y1, double x2, double y2) {
this(new PGpoint(x1, y1), new PGpoint(x2, y2));
}
/**
* @param p1 first point
* @param p2 second point
*/
public PGlseg(PGpoint p1, PGpoint p2) {
this();
point = new PGpoint[]{p1, p2};
}
/**
* @param s definition of the line segment in PostgreSQL's syntax.
* @throws SQLException on conversion failure
*/
@SuppressWarnings("method.invocation")
public PGlseg(String s) throws SQLException {
this();
setValue(s);
}
/**
* required by the driver.
*/
public PGlseg() {
type = "lseg";
}
/**
* @param s Definition of the line segment in PostgreSQL's syntax
* @throws SQLException on conversion failure
*/
@Override
public void setValue(/* @Nullable */ String s) throws SQLException {
if (s == null) {
point = null;
return;
}
PGtokenizer t = new PGtokenizer(PGtokenizer.removeBox(s), ',');
if (t.getSize() != 2) {
throw new PSQLException(GT.tr("Conversion to type {0} failed: {1}.", type, s),
PSQLState.DATA_TYPE_MISMATCH);
}
PGpoint[] point = this.point;
if (point == null) {
this.point = point = new PGpoint[2];
}
point[0] = new PGpoint(t.getToken(0));
point[1] = new PGpoint(t.getToken(1));
}
/**
* @param obj Object to compare with
* @return true if the two line segments are identical
*/
@Override
public boolean equals(/* @Nullable */ Object obj) {
if (obj instanceof PGlseg) {
PGlseg p = (PGlseg) obj;
PGpoint[] point = this.point;
PGpoint[] pPoint = p.point;
if (point == null) {
return pPoint == null;
} else if (pPoint == null) {
return false;
}
return (pPoint[0].equals(point[0]) && pPoint[1].equals(point[1]))
|| (pPoint[0].equals(point[1]) && pPoint[1].equals(point[0]));
}
return false;
}
@Override
public int hashCode() {
PGpoint[] point = this.point;
if (point == null) {
return 0;
}
return point[0].hashCode() ^ point[1].hashCode();
}
@Override
public Object clone() throws CloneNotSupportedException {
PGlseg newPGlseg = (PGlseg) super.clone();
if (newPGlseg.point != null) {
newPGlseg.point = (PGpoint[]) newPGlseg.point.clone();
for (int i = 0; i < newPGlseg.point.length; i++) {
if (newPGlseg.point[i] != null) {
newPGlseg.point[i] = (PGpoint) newPGlseg.point[i].clone();
}
}
}
return newPGlseg;
}
/**
* @return the PGlseg in the syntax expected by org.postgresql
*/
@Override
public /* @Nullable */ String getValue() {
PGpoint[] point = this.point;
if (point == null) {
return null;
}
return "[" + point[0] + "," + point[1] + "]";
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/geometric/PGpath.java 0100664 0000000 0000000 00000010704 00000250600 025645 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2003, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.geometric;
import org.postgresql.util.GT;
import org.postgresql.util.PGobject;
import org.postgresql.util.PGtokenizer;
import org.postgresql.util.PSQLException;
import org.postgresql.util.PSQLState;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.io.Serializable;
import java.sql.SQLException;
/**
* This implements a path (a multiple segmented line, which may be closed).
*/
public class PGpath extends PGobject implements Serializable, Cloneable {
/**
* True if the path is open, false if closed.
*/
public boolean open;
/**
* The points defining this path.
*/
public PGpoint /* @Nullable */ [] points;
/**
* @param points the PGpoints that define the path
* @param open True if the path is open, false if closed
*/
public PGpath(PGpoint /* @Nullable */ [] points, boolean open) {
this();
this.points = points;
this.open = open;
}
/**
* Required by the driver.
*/
public PGpath() {
type = "path";
}
/**
* @param s definition of the path in PostgreSQL's syntax.
* @throws SQLException on conversion failure
*/
@SuppressWarnings("method.invocation")
public PGpath(String s) throws SQLException {
this();
setValue(s);
}
/**
* @param s Definition of the path in PostgreSQL's syntax
* @throws SQLException on conversion failure
*/
@Override
public void setValue(/* @Nullable */ String s) throws SQLException {
if (s == null) {
points = null;
return;
}
// First test to see if we're open
if (s.startsWith("[") && s.endsWith("]")) {
open = true;
s = PGtokenizer.removeBox(s);
} else if (s.startsWith("(") && s.endsWith(")")) {
open = false;
s = PGtokenizer.removePara(s);
} else {
throw new PSQLException(GT.tr("Cannot tell if path is open or closed: {0}.", s),
PSQLState.DATA_TYPE_MISMATCH);
}
PGtokenizer t = new PGtokenizer(s, ',');
int npoints = t.getSize();
PGpoint[] points = new PGpoint[npoints];
this.points = points;
for (int p = 0; p < npoints; p++) {
points[p] = new PGpoint(t.getToken(p));
}
}
/**
* @param obj Object to compare with
* @return true if the two paths are identical
*/
@Override
public boolean equals(/* @Nullable */ Object obj) {
if (obj instanceof PGpath) {
PGpath p = (PGpath) obj;
PGpoint[] points = this.points;
PGpoint[] pPoints = p.points;
if (points == null) {
return pPoints == null;
} else if (pPoints == null) {
return false;
}
if (p.open != open) {
return false;
}
if (pPoints.length != points.length) {
return false;
}
for (int i = 0; i < points.length; i++) {
if (!points[i].equals(pPoints[i])) {
return false;
}
}
return true;
}
return false;
}
@Override
public int hashCode() {
PGpoint[] points = this.points;
if (points == null) {
return 0;
}
// XXX not very good..
int hash = open ? 1231 : 1237;
for (int i = 0; i < points.length && i < 5; i++) {
hash = hash * 31 + points[i].hashCode();
}
return hash;
}
@Override
public Object clone() throws CloneNotSupportedException {
PGpath newPGpath = (PGpath) super.clone();
if (newPGpath.points != null) {
PGpoint[] newPoints = newPGpath.points.clone();
newPGpath.points = newPoints;
for (int i = 0; i < newPGpath.points.length; i++) {
newPoints[i] = (PGpoint) newPGpath.points[i].clone();
}
}
return newPGpath;
}
/**
* This returns the path in the syntax expected by org.postgresql.
* @return the value of this object
*/
@Override
public /* @Nullable */ String getValue() {
PGpoint[] points = this.points;
if (points == null) {
return null;
}
StringBuilder b = new StringBuilder(open ? "[" : "(");
for (int p = 0; p < points.length; p++) {
if (p > 0) {
b.append(",");
}
b.append(points[p].toString());
}
b.append(open ? "]" : ")");
return b.toString();
}
public boolean isOpen() {
return open && points != null;
}
public boolean isClosed() {
return !open && points != null;
}
public void closePath() {
open = false;
}
public void openPath() {
open = true;
}
}
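
A brief, hedged usage sketch (not part of the driver source) may help clarify the open/closed path syntax handled by setValue and getValue above; it relies only on the public constructors and accessors shown in this file.

// Illustrative sketch only: round-tripping PostgreSQL path syntax with PGpath.
import org.postgresql.geometric.PGpath;
import org.postgresql.geometric.PGpoint;

public class PGpathSketch {
  public static void main(String[] args) throws Exception {
    // Square brackets denote an open path, parentheses a closed one (see setValue above).
    PGpath open = new PGpath("[(0,0),(1,1),(2,0)]");
    System.out.println(open.isOpen());     // true
    System.out.println(open.getValue());   // [(0.0,0.0),(1.0,1.0),(2.0,0.0)]

    PGpath closed = new PGpath(new PGpoint[]{new PGpoint(0, 0), new PGpoint(1, 1)}, false);
    System.out.println(closed.getValue()); // ((0.0,0.0),(1.0,1.0))
  }
}
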
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/geometric/PGpoint.java 0100664 0000000 0000000 00000012612 00000250600 026042 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2003, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.geometric;
import org.postgresql.util.ByteConverter;
import org.postgresql.util.GT;
import org.postgresql.util.PGBinaryObject;
import org.postgresql.util.PGobject;
import org.postgresql.util.PGtokenizer;
import org.postgresql.util.PSQLException;
import org.postgresql.util.PSQLState;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.awt.Point;
import java.io.Serializable;
import java.sql.SQLException;
/**
* It maps to the point datatype in org.postgresql.
*
* This implements a version of java.awt.Point, except it uses double to represent the coordinates.
*/
public class PGpoint extends PGobject implements PGBinaryObject, Serializable, Cloneable {
/**
* The X coordinate of the point.
*/
public double x;
/**
* The Y coordinate of the point.
*/
public double y;
/**
* True if the point represents {@code null::point}.
*/
public boolean isNull;
/**
* @param x coordinate
* @param y coordinate
*/
public PGpoint(double x, double y) {
this();
this.x = x;
this.y = y;
}
/**
* This is called mainly from the other geometric types, when a point is embedded within their
* definition.
*
* @param value Definition of this point in PostgreSQL's syntax
* @throws SQLException if something goes wrong
*/
@SuppressWarnings("method.invocation")
public PGpoint(String value) throws SQLException {
this();
setValue(value);
}
/**
* Required by the driver.
*/
public PGpoint() {
type = "point";
}
/**
* @param s Definition of this point in PostgreSQL's syntax
* @throws SQLException on conversion failure
*/
@Override
public void setValue(/* @Nullable */ String s) throws SQLException {
isNull = s == null;
if (s == null) {
return;
}
PGtokenizer t = new PGtokenizer(PGtokenizer.removePara(s), ',');
try {
x = Double.parseDouble(t.getToken(0));
y = Double.parseDouble(t.getToken(1));
} catch (NumberFormatException e) {
throw new PSQLException(GT.tr("Conversion to type {0} failed: {1}.", type, s),
PSQLState.DATA_TYPE_MISMATCH, e);
}
}
/**
* @param b Definition of this point in PostgreSQL's binary syntax
*/
@Override
public void setByteValue(byte[] b, int offset) {
this.isNull = false;
x = ByteConverter.float8(b, offset);
y = ByteConverter.float8(b, offset + 8);
}
/**
* @param obj Object to compare with
* @return true if the two points are identical
*/
@Override
public boolean equals(/* @Nullable */ Object obj) {
if (obj instanceof PGpoint) {
PGpoint p = (PGpoint) obj;
if (isNull) {
return p.isNull;
} else if (p.isNull) {
return false;
}
return x == p.x && y == p.y;
}
return false;
}
@Override
public int hashCode() {
if (isNull) {
return 0;
}
long v1 = Double.doubleToLongBits(x);
long v2 = Double.doubleToLongBits(y);
return (int) (v1 ^ v2 ^ (v1 >>> 32) ^ (v2 >>> 32));
}
/**
* @return the PGpoint in the syntax expected by org.postgresql
*/
@Override
public /* @Nullable */ String getValue() {
return isNull ? null : "(" + x + "," + y + ")";
}
@Override
public int lengthInBytes() {
return isNull ? 0 : 16;
}
/**
* Populate the byte array with PGpoint in the binary syntax expected by org.postgresql.
*/
@Override
public void toBytes(byte[] b, int offset) {
if (isNull) {
return;
}
ByteConverter.float8(b, offset, x);
ByteConverter.float8(b, offset + 8, y);
}
/**
* Translate the point by the supplied amount.
*
* @param x integer amount to add on the x axis
* @param y integer amount to add on the y axis
*/
public void translate(int x, int y) {
translate((double) x, (double) y);
}
/**
* Translate the point by the supplied amount.
*
* @param x double amount to add on the x axis
* @param y double amount to add on the y axis
*/
public void translate(double x, double y) {
this.isNull = false;
this.x += x;
this.y += y;
}
/**
* Moves the point to the supplied coordinates.
*
* @param x integer coordinate
* @param y integer coordinate
*/
public void move(int x, int y) {
setLocation(x, y);
}
/**
* Moves the point to the supplied coordinates.
*
* @param x double coordinate
* @param y double coordinate
*/
public void move(double x, double y) {
this.isNull = false;
this.x = x;
this.y = y;
}
/**
* Moves the point to the supplied coordinates. Refer to java.awt.Point for a description of this.
*
* @param x integer coordinate
* @param y integer coordinate
* @see java.awt.Point
*/
public void setLocation(int x, int y) {
move((double) x, (double) y);
}
/**
* Moves the point to the supplied java.awt.Point. Refer to java.awt.Point for a description of this.
*
* @param p Point to move to
* @see java.awt.Point
*
* @deprecated Will be removed to avoid a dependency on the {@code java.desktop} module.
*/
@Deprecated
public void setLocation(Point p) {
setLocation(p.x, p.y);
}
@Override
public Object clone() throws CloneNotSupportedException {
// squid:S2157 "Cloneables" should implement "clone"
return super.clone();
}
}
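
For illustration (not part of the driver source), the following sketch exercises the text and binary conversions defined above; it assumes nothing beyond the public methods of PGpoint shown in this file.

// Illustrative sketch only: text and binary round-trips with PGpoint.
import org.postgresql.geometric.PGpoint;

public class PGpointSketch {
  public static void main(String[] args) throws Exception {
    PGpoint p = new PGpoint("(1.5,-2.0)"); // parsed by setValue
    System.out.println(p.getValue());      // (1.5,-2.0)

    // The binary form is two float8 values, 16 bytes total (see lengthInBytes/toBytes above).
    byte[] wire = new byte[p.lengthInBytes()];
    p.toBytes(wire, 0);

    PGpoint q = new PGpoint();
    q.setByteValue(wire, 0);
    System.out.println(p.equals(q));       // true
  }
}
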
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/geometric/PGpolygon.java 0100664 0000000 0000000 00000007173 00000250600 026406 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2003, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.geometric;
import org.postgresql.util.PGobject;
import org.postgresql.util.PGtokenizer;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.io.Serializable;
import java.sql.SQLException;
/**
* This implements the polygon datatype within PostgreSQL.
*/
public class PGpolygon extends PGobject implements Serializable, Cloneable {
/**
* The points defining the polygon.
*/
public PGpoint /* @Nullable */ [] points;
/**
* Creates a polygon using an array of PGpoints.
*
* @param points the points defining the polygon
*/
public PGpolygon(PGpoint[] points) {
this();
this.points = points;
}
/**
* @param s definition of the polygon in PostgreSQL's syntax.
* @throws SQLException on conversion failure
*/
@SuppressWarnings("method.invocation")
public PGpolygon(String s) throws SQLException {
this();
setValue(s);
}
/**
* Required by the driver.
*/
public PGpolygon() {
type = "polygon";
}
/**
* @param s Definition of the polygon in PostgreSQL's syntax
* @throws SQLException on conversion failure
*/
@Override
public void setValue(/* @Nullable */ String s) throws SQLException {
if (s == null) {
points = null;
return;
}
PGtokenizer t = new PGtokenizer(PGtokenizer.removePara(s), ',');
int npoints = t.getSize();
PGpoint[] points = this.points;
if (points == null || points.length != npoints) {
this.points = points = new PGpoint[npoints];
}
for (int p = 0; p < npoints; p++) {
points[p] = new PGpoint(t.getToken(p));
}
}
/**
* @param obj Object to compare with
* @return true if the two polygons are identical
*/
@Override
public boolean equals(/* @Nullable */ Object obj) {
if (obj instanceof PGpolygon) {
PGpolygon p = (PGpolygon) obj;
PGpoint[] points = this.points;
PGpoint[] pPoints = p.points;
if (points == null) {
return pPoints == null;
} else if (pPoints == null) {
return false;
}
if (pPoints.length != points.length) {
return false;
}
for (int i = 0; i < points.length; i++) {
if (!points[i].equals(pPoints[i])) {
return false;
}
}
return true;
}
return false;
}
@Override
public int hashCode() {
int hash = 0;
PGpoint[] points = this.points;
if (points == null) {
return hash;
}
for (int i = 0; i < points.length && i < 5; i++) {
hash = hash * 31 + points[i].hashCode();
}
return hash;
}
@Override
public Object clone() throws CloneNotSupportedException {
PGpolygon newPGpolygon = (PGpolygon) super.clone();
if (newPGpolygon.points != null) {
PGpoint[] newPoints = newPGpolygon.points.clone();
newPGpolygon.points = newPoints;
for (int i = 0; i < newPGpolygon.points.length; i++) {
if (newPGpolygon.points[i] != null) {
newPoints[i] = (PGpoint) newPGpolygon.points[i].clone();
}
}
}
return newPGpolygon;
}
/**
* @return the PGpolygon in the syntax expected by org.postgresql
*/
@Override
public /* @Nullable */ String getValue() {
PGpoint[] points = this.points;
if (points == null) {
return null;
}
StringBuilder b = new StringBuilder();
b.append("(");
for (int p = 0; p < points.length; p++) {
if (p > 0) {
b.append(",");
}
b.append(points[p].toString());
}
b.append(")");
return b.toString();
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/gss/ 0040775 0000000 0000000 00000000000 00000250600 022436 5 ustar 00 0000000 0000000 postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/gss/GSSCallbackHandler.java 0100664 0000000 0000000 00000004270 00000250600 026650 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2008, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.gss;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.io.IOException;
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.PasswordCallback;
import javax.security.auth.callback.TextOutputCallback;
import javax.security.auth.callback.UnsupportedCallbackException;
/*
provide a more or less redundant callback handler
*/
class GSSCallbackHandler implements CallbackHandler {
private final String user;
private final char /* @Nullable */ [] password;
GSSCallbackHandler(String user, char /* @Nullable */ [] password) {
this.user = user;
this.password = password;
}
@Override
public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException {
for (Callback callback : callbacks) {
if (callback instanceof TextOutputCallback) {
TextOutputCallback toc = (TextOutputCallback) callback;
switch (toc.getMessageType()) {
case TextOutputCallback.INFORMATION:
System.out.println("INFO: " + toc.getMessage());
break;
case TextOutputCallback.ERROR:
System.out.println("ERROR: " + toc.getMessage());
break;
case TextOutputCallback.WARNING:
System.out.println("WARNING: " + toc.getMessage());
break;
default:
throw new IOException("Unsupported message type: " + toc.getMessageType());
}
} else if (callback instanceof NameCallback) {
NameCallback nc = (NameCallback) callback;
nc.setName(user);
} else if (callback instanceof PasswordCallback) {
PasswordCallback pc = (PasswordCallback) callback;
if (password == null) {
throw new IOException("No cached kerberos ticket found and no password supplied.");
}
pc.setPassword(password);
} else {
throw new UnsupportedCallbackException(callback, "Unrecognized Callback");
}
}
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/gss/GSSInputStream.java 0100664 0000000 0000000 00000011445 00000250600 026133 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2008, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.gss;
import static org.postgresql.util.internal.Nullness.castNonNull;
import org.postgresql.util.ByteConverter;
// import org.checkerframework.checker.nullness.qual.Nullable;
import org.ietf.jgss.GSSContext;
import org.ietf.jgss.GSSException;
import org.ietf.jgss.MessageProp;
import java.io.IOException;
import java.io.InputStream;
public class GSSInputStream extends InputStream {
private final GSSContext gssContext;
private final MessageProp messageProp;
private final InputStream wrapped;
// See https://www.postgresql.org/docs/current/protocol-flow.html#PROTOCOL-FLOW-GSSAPI
// The server can be expected to not send encrypted packets larger than 16kB to the client
private byte[] encrypted = new byte[16 * 1024];
private int encryptedPos;
private int encryptedLength;
private byte /* @Nullable */ [] unencrypted;
private int unencryptedPos;
private final byte[] int4Buf = new byte[4];
private int lenPos;
private final byte[] int1Buf = new byte[1];
public GSSInputStream(InputStream wrapped, GSSContext gssContext, MessageProp messageProp) {
this.wrapped = wrapped;
this.gssContext = gssContext;
this.messageProp = messageProp;
}
@Override
public int read() throws IOException {
int res = 0;
while (res == 0) {
res = read(int1Buf);
}
return res == -1 ? -1 : int1Buf[0] & 0xFF;
}
@Override
public int read(byte[] buffer, int pos, int len) throws IOException {
int n = 0;
// Server makes 16KiB frames, so we attempt several reads from the underlying stream
// so we don't have to store the unencrypted buffer across GSSInputStream.read calls
while (true) {
// 1. Reading length from the wrapped stream
if (lenPos < 4) {
int res = readLength();
if (res <= 0) {
// Did not read "message length" fully, so we can't read encrypted message yet
return n == 0 ? res : n;
}
}
// 2. Reading encrypted message from the wrapped stream
if (encryptedPos < encryptedLength) {
int res = readEncryptedBytesAndUnwrap();
if (res <= 0) {
// Did not read encrypted message fully, so we can't deliver decrypted data yet
return n == 0 ? res : n;
}
}
// 3. Reading unencrypted message into the user-provided buffer
byte[] unencrypted = castNonNull(this.unencrypted);
int copyLength = Math.min(len - n, unencrypted.length - unencryptedPos);
System.arraycopy(unencrypted, unencryptedPos, buffer, pos + n, copyLength);
unencryptedPos += copyLength;
n += copyLength;
if (unencryptedPos == unencrypted.length) {
// Start reading the new message on the next read
lenPos = 0;
encryptedPos = 0;
this.unencrypted = null;
}
if (n >= len || wrapped.available() <= 0) {
return n;
}
}
}
/**
* Reads the length of the wrapper message.
*
* @return -1 if end of stream reached, 0 if length is not fully read yet, and 1 if length is
* fully read
* @throws IOException if read fails
*/
private int readLength() throws IOException {
while (true) {
int res = wrapped.read(int4Buf, lenPos, 4 - lenPos);
if (res == -1) {
return -1;
}
lenPos += res;
if (lenPos == 4) {
break;
}
if (wrapped.available() <= 0) {
// Did not read "message length" fully, and there's no more bytes available, so stop trying
return 0;
}
}
encryptedLength = ByteConverter.int4(int4Buf, 0);
if (encrypted.length < encryptedLength) {
// If the buffer is too small, reallocate
encrypted = new byte[encryptedLength];
}
return 1;
}
/**
* Reads the encrypted message, and unwraps it.
*
* @return -1 if end of stream reached, 0 if the message is not fully read yet, and 1 if the message is
* fully read
* @throws IOException if read fails
*/
private int readEncryptedBytesAndUnwrap() throws IOException {
while (true) {
int res = wrapped.read(encrypted, encryptedPos, encryptedLength - encryptedPos);
if (res == -1) {
// Should we raise something like "incomplete GSS message due to end of input stream"?
return -1;
}
encryptedPos += res;
if (encryptedPos == encryptedLength) {
break;
}
if (wrapped.available() <= 0) {
// The encrypted message is not yet ready, so we can't read user data yet
return 0;
}
}
try {
this.unencrypted = gssContext.unwrap(encrypted, 0, encryptedLength, messageProp);
} catch (GSSException e) {
throw new IOException(e);
}
unencryptedPos = 0;
return 1;
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/gss/GSSOutputStream.java 0100664 0000000 0000000 00000005117 00000250600 026333 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2020, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.gss;
import org.postgresql.util.internal.PgBufferedOutputStream;
import org.ietf.jgss.GSSContext;
import org.ietf.jgss.GSSException;
import org.ietf.jgss.MessageProp;
import java.io.IOException;
/**
* Output stream that wraps each packet with GSS encryption.
*/
public class GSSOutputStream extends PgBufferedOutputStream {
private final PgBufferedOutputStream pgOut;
private final GSSContext gssContext;
private final MessageProp messageProp;
/**
* Creates GSS output stream.
* @param out output stream for the encrypted data
* @param gssContext gss context
* @param messageProp message properties
* @param maxTokenSize maximum length of the encrypted messages
*/
public GSSOutputStream(PgBufferedOutputStream out, GSSContext gssContext, MessageProp messageProp, int maxTokenSize) throws GSSException {
super(out, getBufferSize(gssContext, messageProp, maxTokenSize));
this.pgOut = out;
this.gssContext = gssContext;
this.messageProp = messageProp;
}
private static int getBufferSize(GSSContext gssContext, MessageProp messageProp, int maxTokenSize) throws GSSException {
return gssContext.getWrapSizeLimit(messageProp.getQOP(), messageProp.getPrivacy(), maxTokenSize);
}
@Override
protected void flushBuffer() throws IOException {
if (count > 0) {
writeWrapped(buf, 0, count);
count = 0;
}
}
private void writeWrapped(byte[] b, int off, int len) throws IOException {
try {
byte[] token = gssContext.wrap(b, off, len, messageProp);
pgOut.writeInt4(token.length);
pgOut.write(token, 0, token.length);
} catch (GSSException ex) {
throw new IOException(ex);
}
}
@Override
public void write(byte[] b, int off, int len) throws IOException {
if (count > 0) {
// If there's some data in the buffer, combine both
int avail = buf.length - count;
int prefixLength = Math.min(len, avail);
System.arraycopy(b, off, buf, count, prefixLength);
count += prefixLength;
off += prefixLength;
len -= prefixLength;
if (count == buf.length) {
flushBuffer();
}
}
// Write out the rest, chunk the writes, so we do not exceed the maximum encrypted message size
while (len >= buf.length) {
writeWrapped(b, off, buf.length);
off += buf.length;
len -= buf.length;
}
if (len == 0) {
return;
}
System.arraycopy(b, off, buf, 0, len);
count += len;
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/gss/GssAction.java 0100664 0000000 0000000 00000014364 00000250600 025200 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2008, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.gss;
import org.postgresql.core.PGStream;
import org.postgresql.core.PgMessageType;
import org.postgresql.util.GT;
import org.postgresql.util.PSQLException;
import org.postgresql.util.PSQLState;
import org.postgresql.util.ServerErrorMessage;
// import org.checkerframework.checker.nullness.qual.Nullable;
import org.ietf.jgss.GSSContext;
import org.ietf.jgss.GSSCredential;
import org.ietf.jgss.GSSException;
import org.ietf.jgss.GSSManager;
import org.ietf.jgss.GSSName;
import org.ietf.jgss.Oid;
import java.io.IOException;
import java.security.Principal;
import java.security.PrivilegedAction;
import java.util.Iterator;
import java.util.Set;
import java.util.concurrent.Callable;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.security.auth.Subject;
class GssAction implements PrivilegedAction</* @Nullable */ Exception>, Callable</* @Nullable */ Exception> {
private static final Logger LOGGER = Logger.getLogger(GssAction.class.getName());
private final PGStream pgStream;
private final String host;
private final String kerberosServerName;
private final String user;
private final boolean useSpnego;
private final boolean gssUseDefaultCreds;
private final /* @Nullable */ Subject subject;
private final boolean logServerErrorDetail;
GssAction(PGStream pgStream, /* @Nullable */ Subject subject, String host, String user,
String kerberosServerName, boolean useSpnego, boolean gssUseDefaultCreds,
boolean logServerErrorDetail) {
this.pgStream = pgStream;
this.subject = subject;
this.host = host;
this.user = user;
this.kerberosServerName = kerberosServerName;
this.useSpnego = useSpnego;
this.gssUseDefaultCreds = gssUseDefaultCreds;
this.logServerErrorDetail = logServerErrorDetail;
}
private static boolean hasSpnegoSupport(GSSManager manager) throws GSSException {
Oid spnego = new Oid("1.3.6.1.5.5.2");
Oid[] mechs = manager.getMechs();
for (Oid mech : mechs) {
if (mech.equals(spnego)) {
return true;
}
}
return false;
}
@Override
public /* @Nullable */ Exception run() {
try {
GSSManager manager = GSSManager.getInstance();
GSSCredential clientCreds = null;
Oid[] desiredMechs = new Oid[1];
//Try to get credential from subject first.
GSSCredential gssCredential = null;
if (subject != null) {
Set<GSSCredential> gssCreds = subject.getPrivateCredentials(GSSCredential.class);
if (gssCreds != null && !gssCreds.isEmpty()) {
gssCredential = gssCreds.iterator().next();
}
}
//If failed to get credential from subject,
//then call createCredential to create one.
if (gssCredential == null) {
if (useSpnego && hasSpnegoSupport(manager)) {
desiredMechs[0] = new Oid("1.3.6.1.5.5.2");
} else {
desiredMechs[0] = new Oid("1.2.840.113554.1.2.2");
}
String principalName = this.user;
if (subject != null) {
Set<Principal> principals = subject.getPrincipals();
Iterator<Principal> principalIterator = principals.iterator();
Principal principal = null;
if (principalIterator.hasNext()) {
principal = principalIterator.next();
principalName = principal.getName();
}
}
if (gssUseDefaultCreds) {
clientCreds = manager.createCredential(GSSCredential.INITIATE_ONLY);
} else {
GSSName clientName = manager.createName(principalName, GSSName.NT_USER_NAME);
clientCreds = manager.createCredential(clientName, 8 * 3600, desiredMechs,
GSSCredential.INITIATE_ONLY);
}
} else {
desiredMechs[0] = new Oid("1.2.840.113554.1.2.2");
clientCreds = gssCredential;
}
GSSName serverName =
manager.createName(kerberosServerName + "@" + host, GSSName.NT_HOSTBASED_SERVICE);
GSSContext secContext = manager.createContext(serverName, desiredMechs[0], clientCreds,
GSSContext.DEFAULT_LIFETIME);
secContext.requestMutualAuth(true);
byte[] inToken = new byte[0];
byte[] outToken = null;
boolean established = false;
while (!established) {
outToken = secContext.initSecContext(inToken, 0, inToken.length);
if (outToken != null) {
LOGGER.log(Level.FINEST, " FE=> Password(GSS Authentication Token)");
pgStream.sendChar(PgMessageType.GSS_TOKEN_REQUEST);
pgStream.sendInteger4(4 + outToken.length);
pgStream.send(outToken);
pgStream.flush();
}
if (!secContext.isEstablished()) {
int response = pgStream.receiveChar();
// Error
switch (response) {
case PgMessageType.ERROR_RESPONSE:
int elen = pgStream.receiveInteger4();
ServerErrorMessage errorMsg
= new ServerErrorMessage(pgStream.receiveErrorString(elen - 4));
LOGGER.log(Level.FINEST, " <=BE ErrorMessage({0})", errorMsg);
return new PSQLException(errorMsg, logServerErrorDetail);
case PgMessageType.AUTHENTICATION_RESPONSE:
LOGGER.log(Level.FINEST, " <=BE AuthenticationGSSContinue");
int len = pgStream.receiveInteger4();
@SuppressWarnings("unused")
int type = pgStream.receiveInteger4(); // Specifies that this message contains GSSAPI or SSPI data
// should check type = 8
inToken = pgStream.receive(len - 8);
break;
default:
// Unknown/unexpected message type.
return new PSQLException(GT.tr("Protocol error. Session setup failed."),
PSQLState.CONNECTION_UNABLE_TO_CONNECT);
}
} else {
established = true;
}
}
} catch (IOException e) {
return e;
} catch (GSSException gsse) {
return new PSQLException(GT.tr("GSS Authentication failed"), PSQLState.CONNECTION_FAILURE,
gsse);
}
return null;
}
@Override
public /* @Nullable */ Exception call() throws Exception {
return run();
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/gss/GssEncAction.java 0100664 0000000 0000000 00000012404 00000250600 025617 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2020, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.gss;
import org.postgresql.core.PGStream;
import org.postgresql.util.GT;
import org.postgresql.util.PSQLException;
import org.postgresql.util.PSQLState;
// import org.checkerframework.checker.nullness.qual.Nullable;
import org.ietf.jgss.GSSContext;
import org.ietf.jgss.GSSCredential;
import org.ietf.jgss.GSSException;
import org.ietf.jgss.GSSManager;
import org.ietf.jgss.GSSName;
import org.ietf.jgss.Oid;
import java.io.IOException;
import java.security.Principal;
import java.security.PrivilegedAction;
import java.util.Iterator;
import java.util.Set;
import java.util.concurrent.Callable;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.security.auth.Subject;
public class GssEncAction implements PrivilegedAction</* @Nullable */ Exception>, Callable</* @Nullable */ Exception> {
private static final Logger LOGGER = Logger.getLogger(GssAction.class.getName());
private final PGStream pgStream;
private final String host;
private final String user;
private final String kerberosServerName;
private final boolean useSpnego;
private final boolean gssUseDefaultCreds;
private final /* @Nullable */ Subject subject;
@SuppressWarnings("unused")
private final boolean logServerErrorDetail;
public GssEncAction(PGStream pgStream, /* @Nullable */ Subject subject,
String host, String user,
String kerberosServerName, boolean useSpnego, boolean gssUseDefaultCreds,
boolean logServerErrorDetail) {
this.pgStream = pgStream;
this.subject = subject;
this.host = host;
this.user = user;
this.kerberosServerName = kerberosServerName;
this.useSpnego = useSpnego;
this.gssUseDefaultCreds = gssUseDefaultCreds;
this.logServerErrorDetail = logServerErrorDetail;
}
private static boolean hasSpnegoSupport(GSSManager manager) throws GSSException {
Oid spnego = new Oid("1.3.6.1.5.5.2");
Oid[] mechs = manager.getMechs();
for (Oid mech : mechs) {
if (mech.equals(spnego)) {
return true;
}
}
return false;
}
@Override
public /* @Nullable */ Exception run() {
try {
GSSManager manager = GSSManager.getInstance();
GSSCredential clientCreds = null;
Oid[] desiredMechs = new Oid[1];
//Try to get credential from subject first.
GSSCredential gssCredential = null;
if (subject != null) {
Set<GSSCredential> gssCreds = subject.getPrivateCredentials(GSSCredential.class);
if (gssCreds != null && !gssCreds.isEmpty()) {
gssCredential = gssCreds.iterator().next();
}
}
//If failed to get credential from subject,
//then call createCredential to create one.
if (gssCredential == null) {
if (useSpnego && hasSpnegoSupport(manager)) {
desiredMechs[0] = new Oid("1.3.6.1.5.5.2");
} else {
desiredMechs[0] = new Oid("1.2.840.113554.1.2.2");
}
String principalName = this.user;
if (subject != null) {
Set<Principal> principals = subject.getPrincipals();
Iterator<Principal> principalIterator = principals.iterator();
Principal principal = null;
if (principalIterator.hasNext()) {
principal = principalIterator.next();
principalName = principal.getName();
}
}
if (gssUseDefaultCreds) {
clientCreds = manager.createCredential(GSSCredential.INITIATE_ONLY);
} else {
GSSName clientName = manager.createName(principalName, GSSName.NT_USER_NAME);
clientCreds = manager.createCredential(clientName, 8 * 3600, desiredMechs,
GSSCredential.INITIATE_ONLY);
}
} else {
desiredMechs[0] = new Oid("1.2.840.113554.1.2.2");
clientCreds = gssCredential;
}
GSSName serverName =
manager.createName(kerberosServerName + "@" + host, GSSName.NT_HOSTBASED_SERVICE);
GSSContext secContext = manager.createContext(serverName, desiredMechs[0], clientCreds,
GSSContext.DEFAULT_LIFETIME);
secContext.requestMutualAuth(true);
secContext.requestConf(true);
secContext.requestInteg(true);
byte[] inToken = new byte[0];
byte[] outToken = null;
boolean established = false;
while (!established) {
outToken = secContext.initSecContext(inToken, 0, inToken.length);
if (outToken != null) {
LOGGER.log(Level.FINEST, " FE=> Password(GSS Authentication Token)");
pgStream.sendInteger4(outToken.length);
pgStream.send(outToken);
pgStream.flush();
}
if (!secContext.isEstablished()) {
int len = pgStream.receiveInteger4();
// should check type = 8
inToken = pgStream.receive(len);
} else {
established = true;
pgStream.setSecContext(secContext);
}
}
} catch (IOException e) {
return e;
} catch (GSSException gsse) {
return new PSQLException(GT.tr("GSS Authentication failed"), PSQLState.CONNECTION_FAILURE,
gsse);
}
return null;
}
@Override
public /* @Nullable */ Exception call() throws Exception {
return run();
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/gss/MakeGSS.java 0100664 0000000 0000000 00000015604 00000250600 024536 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2008, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.gss;
import static org.postgresql.util.internal.Nullness.castNonNull;
import org.postgresql.PGProperty;
import org.postgresql.core.PGStream;
import org.postgresql.util.GT;
import org.postgresql.util.PSQLException;
import org.postgresql.util.PSQLState;
// import org.checkerframework.checker.nullness.qual.NonNull;
// import org.checkerframework.checker.nullness.qual.Nullable;
import org.ietf.jgss.GSSCredential;
import java.io.IOException;
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;
import java.security.PrivilegedAction;
import java.util.Set;
import java.util.concurrent.Callable;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.security.auth.Subject;
import javax.security.auth.login.LoginContext;
public class MakeGSS {
private static final Logger LOGGER = Logger.getLogger(MakeGSS.class.getName());
private static final /* @Nullable */ MethodHandle SUBJECT_CURRENT;
private static final /* @Nullable */ MethodHandle ACCESS_CONTROLLER_GET_CONTEXT;
private static final /* @Nullable */ MethodHandle SUBJECT_GET_SUBJECT;
// Java <18
private static final /* @Nullable */ MethodHandle SUBJECT_DO_AS;
// Java 18+, see https://bugs.openjdk.org/browse/JDK-8267108
private static final /* @Nullable */ MethodHandle SUBJECT_CALL_AS;
static {
MethodHandle subjectCurrent = null;
try {
subjectCurrent = MethodHandles.lookup()
.findStatic(Subject.class, "current", MethodType.methodType(Subject.class));
} catch (NoSuchMethodException | IllegalAccessException ignore) {
// E.g. pre Java 18
}
SUBJECT_CURRENT = subjectCurrent;
MethodHandle accessControllerGetContext = null;
MethodHandle subjectGetSubject = null;
try {
Class<?> accessControllerClass = Class.forName("java.security.AccessController");
Class<?> accessControlContextClass =
Class.forName("java.security.AccessControlContext");
accessControllerGetContext = MethodHandles.lookup()
.findStatic(accessControllerClass, "getContext",
MethodType.methodType(accessControlContextClass));
subjectGetSubject = MethodHandles.lookup()
.findStatic(Subject.class, "getSubject",
MethodType.methodType(Subject.class, accessControlContextClass));
} catch (NoSuchMethodException | IllegalAccessException | ClassNotFoundException ignore) {
// E.g. pre Java 18+
}
ACCESS_CONTROLLER_GET_CONTEXT = accessControllerGetContext;
SUBJECT_GET_SUBJECT = subjectGetSubject;
MethodHandle subjectDoAs = null;
try {
subjectDoAs = MethodHandles.lookup().findStatic(Subject.class, "doAs",
MethodType.methodType(Object.class, Subject.class, PrivilegedAction.class));
} catch (NoSuchMethodException | IllegalAccessException ignore) {
// E.g. Java 18+
}
SUBJECT_DO_AS = subjectDoAs;
MethodHandle subjectCallAs = null;
try {
subjectCallAs = MethodHandles.lookup().findStatic(Subject.class, "callAs",
MethodType.methodType(Object.class, Subject.class, Callable.class));
} catch (NoSuchMethodException | IllegalAccessException ignore) {
// E.g. Java < 18
}
SUBJECT_CALL_AS = subjectCallAs;
}
/**
* Use {@code Subject.current()} in Java 18+, and
* {@code Subject.getSubject(AccessController.getContext())} in Java before 18.
* @return current Subject or null
*/
@SuppressWarnings("deprecation")
private static /* @Nullable */ Subject getCurrentSubject() {
try {
if (SUBJECT_CURRENT != null) {
return (Subject) SUBJECT_CURRENT.invokeExact();
}
if (SUBJECT_GET_SUBJECT == null || ACCESS_CONTROLLER_GET_CONTEXT == null) {
return null;
}
return (Subject) SUBJECT_GET_SUBJECT.invoke(
ACCESS_CONTROLLER_GET_CONTEXT.invoke()
);
} catch (Throwable e) {
if (e instanceof RuntimeException) {
throw (RuntimeException) e;
}
if (e instanceof Error) {
throw (Error) e;
}
throw new RuntimeException(e);
}
}
public static void authenticate(boolean encrypted,
PGStream pgStream, String host, String user, char /* @Nullable */ [] password,
/* @Nullable */ String jaasApplicationName, /* @Nullable */ String kerberosServerName,
boolean useSpnego, boolean jaasLogin, boolean gssUseDefaultCreds,
boolean logServerErrorDetail)
throws IOException, PSQLException {
LOGGER.log(Level.FINEST, " <=BE AuthenticationReqGSS");
if (jaasApplicationName == null) {
jaasApplicationName = PGProperty.JAAS_APPLICATION_NAME.getDefaultValue();
}
if (kerberosServerName == null) {
kerberosServerName = "postgres";
}
/* @Nullable */ Exception result;
try {
boolean performAuthentication = jaasLogin;
//Check if we can get credential from subject to avoid login.
Subject sub = getCurrentSubject();
if (sub != null) {
Set<GSSCredential> gssCreds = sub.getPrivateCredentials(GSSCredential.class);
if (gssCreds != null && !gssCreds.isEmpty()) {
performAuthentication = false;
}
}
if (performAuthentication) {
LoginContext lc = new LoginContext(castNonNull(jaasApplicationName), new GSSCallbackHandler(user, password));
lc.login();
sub = lc.getSubject();
}
PrivilegedAction</* @Nullable */ Exception> action;
if ( encrypted ) {
action = new GssEncAction(pgStream, sub, host, user,
kerberosServerName, useSpnego, gssUseDefaultCreds, logServerErrorDetail);
} else {
action = new GssAction(pgStream, sub, host, user,
kerberosServerName, useSpnego, gssUseDefaultCreds, logServerErrorDetail);
}
//noinspection ConstantConditions
@SuppressWarnings({"cast.unsafe", "assignment"})
/* @NonNull */ Subject subject = sub;
if (SUBJECT_DO_AS != null) {
result = (Exception) SUBJECT_DO_AS.invoke(subject, action);
} else if (SUBJECT_CALL_AS != null) {
//noinspection ConstantConditions,unchecked
result = (Exception) SUBJECT_CALL_AS.invoke(subject, action);
} else {
throw new PSQLException(
GT.tr("Neither Subject.doAs (Java before 18) nor Subject.callAs (Java 18+) method found"),
PSQLState.OBJECT_NOT_IN_STATE);
}
} catch (Throwable e) {
throw new PSQLException(GT.tr("GSS Authentication failed"), PSQLState.CONNECTION_FAILURE, e);
}
if (result instanceof IOException) {
throw (IOException) result;
} else if (result instanceof PSQLException) {
throw (PSQLException) result;
} else if (result != null) {
throw new PSQLException(GT.tr("GSS Authentication failed"), PSQLState.CONNECTION_FAILURE,
result);
}
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/hostchooser/ 0040775 0000000 0000000 00000000000 00000250600 024202 5 ustar 00 0000000 0000000 postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/hostchooser/CandidateHost.java 0100664 0000000 0000000 00000001014 00000250600 027550 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2017, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.hostchooser;
import org.postgresql.util.HostSpec;
/**
* Candidate host to be connected.
*/
public class CandidateHost {
public final HostSpec hostSpec;
public final HostRequirement targetServerType;
public CandidateHost(HostSpec hostSpec, HostRequirement targetServerType) {
this.hostSpec = hostSpec;
this.targetServerType = targetServerType;
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/hostchooser/GlobalHostStatusTracker.java 0100664 0000000 0000000 00000005205 00000250600 031622 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2014, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.hostchooser;
import org.postgresql.jdbc.ResourceLock;
import org.postgresql.util.HostSpec;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
/**
* Keeps track of HostSpec targets in a global map.
*/
public class GlobalHostStatusTracker {
private static final Map<HostSpec, HostSpecStatus> hostStatusMap =
new HashMap<>();
private static final ResourceLock lock = new ResourceLock();
/**
* Store the actual observed host status.
*
* @param hostSpec The host whose status is known.
* @param hostStatus Latest known status for the host.
*/
public static void reportHostStatus(HostSpec hostSpec, HostStatus hostStatus) {
long now = System.nanoTime() / 1000000;
try (ResourceLock ignore = lock.obtain()) {
HostSpecStatus hostSpecStatus = hostStatusMap.get(hostSpec);
if (hostSpecStatus == null) {
hostSpecStatus = new HostSpecStatus(hostSpec);
hostStatusMap.put(hostSpec, hostSpecStatus);
}
hostSpecStatus.status = hostStatus;
hostSpecStatus.lastUpdated = now;
}
}
/**
* Returns a list of candidate hosts that have the required targetServerType.
*
* @param hostSpecs The potential list of hosts.
* @param targetServerType The required target server type.
* @param hostRecheckMillis How stale (in milliseconds) the known host status is allowed to be.
* @return candidate hosts to connect to.
*/
static List<HostSpec> getCandidateHosts(HostSpec[] hostSpecs,
HostRequirement targetServerType, long hostRecheckMillis) {
List<HostSpec> candidates = new ArrayList<>(hostSpecs.length);
long latestAllowedUpdate = System.nanoTime() / 1000000 - hostRecheckMillis;
try (ResourceLock ignore = lock.obtain()) {
for (HostSpec hostSpec : hostSpecs) {
HostSpecStatus hostInfo = hostStatusMap.get(hostSpec);
// candidates are nodes we do not know about and the nodes with correct type
if (hostInfo == null
|| hostInfo.lastUpdated < latestAllowedUpdate
|| targetServerType.allowConnectingTo(hostInfo.status)) {
candidates.add(hostSpec);
}
}
}
return candidates;
}
static class HostSpecStatus {
final HostSpec host;
/* @Nullable */ HostStatus status;
long lastUpdated;
HostSpecStatus(HostSpec host) {
this.host = host;
}
@Override
public String toString() {
return host.toString() + '=' + status;
}
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/hostchooser/HostChooser.java 0100664 0000000 0000000 00000000735 00000250600 027307 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2014, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.hostchooser;
import java.util.Iterator;
/**
* Lists connections in preferred order.
*/
public interface HostChooser extends Iterable<CandidateHost> {
/**
* Lists connection hosts in preferred order.
*
* @return connection hosts in preferred order.
*/
@Override
Iterator<CandidateHost> iterator();
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/hostchooser/HostChooserFactory.java 0100664 0000000 0000000 00000001244 00000250600 030633 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2014, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.hostchooser;
import org.postgresql.util.HostSpec;
import java.util.Properties;
/**
* Chooses a {@link HostChooser} instance based on the number of hosts and properties.
*/
public class HostChooserFactory {
public static HostChooser createHostChooser(HostSpec[] hostSpecs,
HostRequirement targetServerType, Properties info) {
if (hostSpecs.length == 1) {
return new SingleHostChooser(hostSpecs[0], targetServerType);
}
return new MultiHostChooser(hostSpecs, targetServerType, info);
}
}
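
As a rough sketch of how the factory above might be used (not taken from the driver source): the host names and ports are hypothetical, and the two-argument HostSpec constructor is assumed from its use elsewhere in the driver.

// Illustrative sketch only: choosing candidate hosts in preferred order.
import java.util.Properties;
import org.postgresql.hostchooser.CandidateHost;
import org.postgresql.hostchooser.HostChooser;
import org.postgresql.hostchooser.HostChooserFactory;
import org.postgresql.hostchooser.HostRequirement;
import org.postgresql.util.HostSpec;

public class HostChooserSketch {
  public static void main(String[] args) {
    HostSpec[] hosts = {
        new HostSpec("db1.example.com", 5432), // hypothetical hosts
        new HostSpec("db2.example.com", 5432),
    };
    // More than one HostSpec yields a MultiHostChooser; exactly one yields a SingleHostChooser.
    HostChooser chooser = HostChooserFactory.createHostChooser(
        hosts, HostRequirement.preferSecondary, new Properties());
    for (CandidateHost candidate : chooser) {
      System.out.println(candidate.hostSpec + " as " + candidate.targetServerType);
    }
  }
}
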
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/hostchooser/HostRequirement.java 0100664 0000000 0000000 00000004570 00000250600 030206 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2014, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.hostchooser;
// import org.checkerframework.checker.nullness.qual.Nullable;
/**
* Describes the required server type.
*/
public enum HostRequirement {
any {
@Override
public boolean allowConnectingTo(/* @Nullable */ HostStatus status) {
return status != HostStatus.ConnectFail;
}
},
/**
* @deprecated We no longer use the terms master or slave in the driver or in the PostgreSQL
* project.
*/
@Deprecated
master {
@Override
public boolean allowConnectingTo(/* @Nullable */ HostStatus status) {
return primary.allowConnectingTo(status);
}
},
primary {
@Override
public boolean allowConnectingTo(/* @Nullable */ HostStatus status) {
return status == HostStatus.Primary || status == HostStatus.ConnectOK;
}
},
secondary {
@Override
public boolean allowConnectingTo(/* @Nullable */ HostStatus status) {
return status == HostStatus.Secondary || status == HostStatus.ConnectOK;
}
},
preferSecondary {
@Override
public boolean allowConnectingTo(/* @Nullable */ HostStatus status) {
return status != HostStatus.ConnectFail;
}
},
preferPrimary {
@Override
public boolean allowConnectingTo(/* @Nullable */ HostStatus status) {
return status != HostStatus.ConnectFail;
}
};
public abstract boolean allowConnectingTo(/* @Nullable */ HostStatus status);
/**
* The PostgreSQL project has decided not to use the term slave to refer to alternate servers.
* Secondary or standby is preferred; we have arbitrarily chosen secondary.
* As of Jan 2018, in order not to break existing code, we accept both slave and
* secondary as names for alternate servers.
*
* The current policy is to keep accepting slave silently but not to document slave or preferSlave.
*
* As of Jul 2018 the use of the word master is silently deprecated as well.
*
* @param targetServerType the value of {@code targetServerType} connection property
* @return HostRequirement
*/
public static HostRequirement getTargetServerType(String targetServerType) {
String allowSlave = targetServerType.replace("lave", "econdary").replace("master", "primary");
return valueOf(allowSlave);
}
}
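
A small example (not part of the driver source) of the string mapping that getTargetServerType performs above, showing how the deprecated slave/master spellings still resolve to the supported enum values:

// Illustrative sketch only: legacy names are rewritten before valueOf() is applied.
import org.postgresql.hostchooser.HostRequirement;

public class TargetServerTypeSketch {
  public static void main(String[] args) {
    // "lave" -> "econdary" and "master" -> "primary" (see getTargetServerType above).
    System.out.println(HostRequirement.getTargetServerType("preferSlave")); // preferSecondary
    System.out.println(HostRequirement.getTargetServerType("master"));      // primary
    System.out.println(HostRequirement.getTargetServerType("secondary"));   // secondary
  }
}
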
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/hostchooser/HostStatus.java 0100664 0000000 0000000 00000000434 00000250600 027164 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2014, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.hostchooser;
/**
* Known state of a server.
*/
public enum HostStatus {
ConnectFail,
ConnectOK,
Primary,
Secondary
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/hostchooser/MultiHostChooser.java 0100664 0000000 0000000 00000010451 00000250600 030316 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2014, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.hostchooser;
import static java.util.Collections.shuffle;
import org.postgresql.PGProperty;
import org.postgresql.util.HostSpec;
import org.postgresql.util.PSQLException;
import java.util.AbstractList;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
import java.util.Properties;
/**
* HostChooser that keeps track of known host statuses.
*/
class MultiHostChooser implements HostChooser {
private final HostSpec[] hostSpecs;
private final HostRequirement targetServerType;
private int hostRecheckTime;
private boolean loadBalance;
MultiHostChooser(HostSpec[] hostSpecs, HostRequirement targetServerType,
Properties info) {
this.hostSpecs = hostSpecs;
this.targetServerType = targetServerType;
try {
hostRecheckTime = PGProperty.HOST_RECHECK_SECONDS.getInt(info) * 1000;
loadBalance = PGProperty.LOAD_BALANCE_HOSTS.getBoolean(info);
} catch (PSQLException e) {
throw new RuntimeException(e);
}
}
@Override
public Iterator<CandidateHost> iterator() {
Iterator<CandidateHost> res = candidateIterator();
if (!res.hasNext()) {
// In case all the candidate hosts are unavailable or do not match, try all the hosts just in case
List<HostSpec> allHosts = Arrays.asList(hostSpecs);
if (loadBalance) {
allHosts = new ArrayList<>(allHosts);
shuffle(allHosts);
}
res = withReqStatus(targetServerType, allHosts).iterator();
}
return res;
}
private Iterator<CandidateHost> candidateIterator() {
if ( targetServerType != HostRequirement.preferSecondary
&& targetServerType != HostRequirement.preferPrimary ) {
return getCandidateHosts(targetServerType).iterator();
}
HostRequirement preferredServerType =
targetServerType == HostRequirement.preferSecondary
? HostRequirement.secondary
: HostRequirement.primary;
// preferSecondary tries to find secondary hosts first
// Note: sort does not work here since there are "unknown" hosts,
// and that "unknown" might turn out to be master, so we should discard that
// if other secondaries exist
// Same logic as the above works for preferPrimary if we replace "secondary"
// with "primary" and vice versa
List<CandidateHost> preferred = getCandidateHosts(preferredServerType);
List<CandidateHost> any = getCandidateHosts(HostRequirement.any);
if ( !preferred.isEmpty() && !any.isEmpty()
&& preferred.get(preferred.size() - 1).hostSpec.equals(any.get(0).hostSpec)) {
// When the last preferred host's hostspec is the same as the first in "any" list, there's no need
// to attempt to connect it as "preferred"
// Note: this is only an optimization
preferred = rtrim(1, preferred);
}
return append(preferred, any).iterator();
}
private List<CandidateHost> getCandidateHosts(HostRequirement hostRequirement) {
List<HostSpec> candidates =
GlobalHostStatusTracker.getCandidateHosts(hostSpecs, hostRequirement, hostRecheckTime);
if (loadBalance) {
shuffle(candidates);
}
return withReqStatus(hostRequirement, candidates);
}
private static List<CandidateHost> withReqStatus(final HostRequirement requirement, final List<HostSpec> hosts) {
return new AbstractList<CandidateHost>() {
@Override
public CandidateHost get(int index) {
return new CandidateHost(hosts.get(index), requirement);
}
@Override
public int size() {
return hosts.size();
}
};
}
private static <T> List<T> append(final List<T> a, final List<T> b) {
return new AbstractList<T>() {
@Override
public T get(int index) {
return index < a.size() ? a.get(index) : b.get(index - a.size());
}
@Override
public int size() {
return a.size() + b.size();
}
};
}
private static <T> List<T> rtrim(final int size, final List<T> a) {
return new AbstractList<T>() {
@Override
public T get(int index) {
return a.get(index);
}
@Override
public int size() {
return Math.max(0, a.size() - size);
}
};
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/hostchooser/SingleHostChooser.java 0100664 0000000 0000000 00000001340 00000250600 030442 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2014, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.hostchooser;
import org.postgresql.util.HostSpec;
import java.util.Collection;
import java.util.Collections;
import java.util.Iterator;
/**
* Host chooser that returns the single host.
*/
class SingleHostChooser implements HostChooser {
private final Collection<CandidateHost> candidateHost;
SingleHostChooser(HostSpec hostSpec, HostRequirement targetServerType) {
this.candidateHost = Collections.singletonList(new CandidateHost(hostSpec, targetServerType));
}
@Override
public Iterator<CandidateHost> iterator() {
return candidateHost.iterator();
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/jdbc/ 0040775 0000000 0000000 00000000000 00000250600 022544 5 ustar 00 0000000 0000000 postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/jdbc/AbstractBlobClob.java 0100664 0000000 0000000 00000021224 00000250600 026547 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2005, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.jdbc;
import static org.postgresql.util.internal.Nullness.castNonNull;
import org.postgresql.core.BaseConnection;
import org.postgresql.core.ServerVersion;
import org.postgresql.largeobject.LargeObject;
import org.postgresql.largeobject.LargeObjectManager;
import org.postgresql.util.GT;
import org.postgresql.util.PSQLException;
import org.postgresql.util.PSQLState;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.io.InputStream;
import java.io.OutputStream;
import java.sql.Blob;
import java.sql.SQLException;
import java.util.ArrayList;
/**
* This class holds all of the methods common to both Blobs and Clobs.
*
* @author Michael Barker
*/
public abstract class AbstractBlobClob {
protected BaseConnection conn;
private /* @Nullable */ LargeObject currentLo;
private boolean currentLoIsWriteable;
private final boolean support64bit;
/**
* We create separate LargeObjects for methods that use streams so they won't interfere with each
* other.
*/
private /* @Nullable */ ArrayList<LargeObject> subLOs = new ArrayList<LargeObject>();
protected final ResourceLock lock = new ResourceLock();
private final long oid;
public AbstractBlobClob(BaseConnection conn, long oid) throws SQLException {
this.conn = conn;
this.oid = oid;
this.currentLoIsWriteable = false;
support64bit = conn.haveMinimumServerVersion(90300);
}
public void free() throws SQLException {
try (ResourceLock ignore = lock.obtain()) {
if (currentLo != null) {
currentLo.close();
currentLo = null;
currentLoIsWriteable = false;
}
if (subLOs != null) {
for (LargeObject subLO : subLOs) {
subLO.close();
}
}
subLOs = null;
}
}
/**
* For Blobs this should be in bytes while for Clobs it should be in characters. Since we really
* haven't figured out how to handle character sets for Clobs the current implementation uses
* bytes for both Blobs and Clobs.
*
* @param len maximum length
* @throws SQLException if operation fails
*/
public void truncate(long len) throws SQLException {
try (ResourceLock ignore = lock.obtain()) {
checkFreed();
if (!conn.haveMinimumServerVersion(ServerVersion.v8_3)) {
throw new PSQLException(
GT.tr("Truncation of large objects is only implemented in 8.3 and later servers."),
PSQLState.NOT_IMPLEMENTED);
}
if (len < 0) {
throw new PSQLException(GT.tr("Cannot truncate LOB to a negative length."),
PSQLState.INVALID_PARAMETER_VALUE);
}
if (len > Integer.MAX_VALUE) {
if (support64bit) {
getLo(true).truncate64(len);
} else {
throw new PSQLException(GT.tr("PostgreSQL LOBs can only index to: {0}", Integer.MAX_VALUE),
PSQLState.INVALID_PARAMETER_VALUE);
}
} else {
getLo(true).truncate((int) len);
}
}
}
public long length() throws SQLException {
try (ResourceLock ignore = lock.obtain()) {
checkFreed();
if (support64bit) {
return getLo(false).size64();
} else {
return getLo(false).size();
}
}
}
public byte[] getBytes(long pos, int length) throws SQLException {
try (ResourceLock ignore = lock.obtain()) {
assertPosition(pos);
getLo(false).seek((int) (pos - 1), LargeObject.SEEK_SET);
return getLo(false).read(length);
}
}
public InputStream getBinaryStream() throws SQLException {
try (ResourceLock ignore = lock.obtain()) {
checkFreed();
LargeObject subLO = getLo(false).copy();
addSubLO(subLO);
subLO.seek(0, LargeObject.SEEK_SET);
return subLO.getInputStream();
}
}
public OutputStream setBinaryStream(long pos) throws SQLException {
try (ResourceLock ignore = lock.obtain()) {
assertPosition(pos);
LargeObject subLO = getLo(true).copy();
addSubLO(subLO);
subLO.seek((int) (pos - 1));
return subLO.getOutputStream();
}
}
/**
* Iterate over the buffer looking for the specified pattern.
*
* @param pattern A pattern of bytes to search the blob for
* @param start The position to start reading from
* @return position of the specified pattern
* @throws SQLException if something wrong happens
*/
public long position(byte[] pattern, long start) throws SQLException {
try (ResourceLock ignore = lock.obtain()) {
assertPosition(start, pattern.length);
int position = 1;
int patternIdx = 0;
long result = -1;
int tmpPosition = 1;
for (LOIterator i = new LOIterator(start - 1); i.hasNext(); position++) {
byte b = i.next();
if (b == pattern[patternIdx]) {
if (patternIdx == 0) {
tmpPosition = position;
}
patternIdx++;
if (patternIdx == pattern.length) {
result = tmpPosition;
break;
}
} else {
patternIdx = 0;
}
}
return result;
}
}
/**
* Iterates over a large object returning byte values. Will buffer the data from the large object.
*/
private class LOIterator {
private static final int BUFFER_SIZE = 8096;
private final byte[] buffer = new byte[BUFFER_SIZE];
private int idx = BUFFER_SIZE;
private int numBytes = BUFFER_SIZE;
LOIterator(long start) throws SQLException {
getLo(false).seek((int) start);
}
public boolean hasNext() throws SQLException {
boolean result;
if (idx < numBytes) {
result = true;
} else {
numBytes = getLo(false).read(buffer, 0, BUFFER_SIZE);
idx = 0;
result = numBytes > 0;
}
return result;
}
private byte next() {
return buffer[idx++];
}
}
/**
* This simply passes the byte contents of the pattern Blob to {@link #position(byte[], long)}.
*
* @param pattern search pattern
* @param start start position
* @return position of given pattern
* @throws SQLException if something goes wrong
*/
public long position(Blob pattern, long start) throws SQLException {
return position(pattern.getBytes(1, (int) pattern.length()), start);
}
/**
* Throws an exception if the pos value exceeds the max value by which the large object API can
* index.
*
* @param pos Position to write at.
* @throws SQLException if something goes wrong
*/
protected void assertPosition(long pos) throws SQLException {
assertPosition(pos, 0);
}
/**
* Throws an exception if the pos value exceeds the max value by which the large object API can
* index.
*
* @param pos Position to write at.
* @param len number of bytes to write.
* @throws SQLException if something goes wrong
*/
protected void assertPosition(long pos, long len) throws SQLException {
checkFreed();
if (pos < 1) {
throw new PSQLException(GT.tr("LOB positioning offsets start at 1."),
PSQLState.INVALID_PARAMETER_VALUE);
}
if (pos + len - 1 > Integer.MAX_VALUE) {
throw new PSQLException(GT.tr("PostgreSQL LOBs can only index to: {0}", Integer.MAX_VALUE),
PSQLState.INVALID_PARAMETER_VALUE);
}
}
/**
* Checks that this LOB hasn't been free()d already.
*
* @throws SQLException if LOB has been freed.
*/
protected void checkFreed() throws SQLException {
if (subLOs == null) {
throw new PSQLException(GT.tr("free() was called on this LOB previously"),
PSQLState.OBJECT_NOT_IN_STATE);
}
}
protected LargeObject getLo(boolean forWrite) throws SQLException {
try (ResourceLock ignore = lock.obtain()) {
LargeObject currentLo = this.currentLo;
if (currentLo != null) {
if (forWrite && !currentLoIsWriteable) {
// Reopen the stream in read-write, at the same pos.
int currentPos = currentLo.tell();
LargeObjectManager lom = conn.getLargeObjectAPI();
LargeObject newLo = lom.open(oid, LargeObjectManager.READWRITE);
castNonNull(subLOs).add(currentLo);
this.currentLo = currentLo = newLo;
if (currentPos != 0) {
currentLo.seek(currentPos);
}
}
return currentLo;
}
LargeObjectManager lom = conn.getLargeObjectAPI();
this.currentLo = currentLo =
lom.open(oid, forWrite ? LargeObjectManager.READWRITE : LargeObjectManager.READ);
currentLoIsWriteable = forWrite;
return currentLo;
}
}
protected void addSubLO(LargeObject subLO) {
castNonNull(subLOs).add(subLO);
}
}
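// Illustrative usage sketch (not part of the driver source): writing through the stream obtained
// from Blob.setBinaryStream, which the copy()/seek() logic above supports. Identifiers are
// hypothetical; run inside a transaction (autocommit off) and commit when done.
//
//   conn.setAutoCommit(false);
//   try (Statement stmt = conn.createStatement();
//        ResultSet rs = stmt.executeQuery("SELECT data FROM images WHERE id = 1 FOR UPDATE")) {
//     if (rs.next()) {
//       Blob blob = rs.getBlob(1);
//       try (OutputStream out = blob.setBinaryStream(1)) {
//         out.write(new byte[]{1, 2, 3});
//       }
//     }
//   }
//   conn.commit();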
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/jdbc/ArrayDecoding.java 0100664 0000000 0000000 00000063022 00000250600 026122 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2020, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.jdbc;
import static org.postgresql.util.internal.Nullness.castNonNull;
import org.postgresql.Driver;
import org.postgresql.core.BaseConnection;
import org.postgresql.core.Oid;
import org.postgresql.core.Parser;
import org.postgresql.jdbc2.ArrayAssistant;
import org.postgresql.jdbc2.ArrayAssistantRegistry;
import org.postgresql.util.GT;
import org.postgresql.util.PGbytea;
import org.postgresql.util.PSQLException;
import org.postgresql.util.PSQLState;
// import org.checkerframework.checker.index.qual.NonNegative;
// import org.checkerframework.checker.nullness.qual.NonNull;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.io.IOException;
import java.lang.reflect.Array;
import java.math.BigDecimal;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.charset.StandardCharsets;
import java.sql.Date;
import java.sql.SQLException;
import java.sql.SQLFeatureNotSupportedException;
import java.sql.Time;
import java.sql.Timestamp;
import java.sql.Types;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
/**
* Utility for decoding arrays.
*
*
* See {@code ArrayEncoding} for description of the binary format of arrays.
*
*
* @author Brett Okken
*/
final class ArrayDecoding {
/**
* Array list implementation specific for storing PG array elements. If
* {@link PgArrayList#dimensionsCount} is {@code 1}, the contents will be
* {@link String}. For all larger {@code dimensionsCount}, the values will be
* {@link PgArrayList} instances.
*/
static final class PgArrayList extends ArrayList</* @Nullable */ Object> {
private static final long serialVersionUID = 1L;
/**
* How many dimensions.
*/
int dimensionsCount = 1;
}
private interface ArrayDecoder<A extends /* @NonNull */ Object> {
A createArray(/* @NonNegative */ int size);
Object[] createMultiDimensionalArray(/* @NonNegative */ int[] sizes);
boolean supportBinary();
void populateFromBinary(A array, /* @NonNegative */ int index, /* @NonNegative */ int count, ByteBuffer bytes, BaseConnection connection)
throws SQLException;
void populateFromString(A array, List</* @Nullable */ String> strings, BaseConnection connection) throws SQLException;
}
private abstract static class AbstractObjectStringArrayDecoder<A> implements ArrayDecoder<A> {
final Class<?> baseClazz;
AbstractObjectStringArrayDecoder(Class<?> baseClazz) {
this.baseClazz = baseClazz;
}
/**
* {@inheritDoc}
*/
@Override
public boolean supportBinary() {
return false;
}
@SuppressWarnings("unchecked")
@Override
public A createArray(int size) {
return (A) Array.newInstance(baseClazz, size);
}
/**
* {@inheritDoc}
*/
@Override
public Object[] createMultiDimensionalArray(int[] sizes) {
return (Object[]) Array.newInstance(baseClazz, sizes);
}
@Override
public void populateFromBinary(A arr, int index, int count, ByteBuffer bytes, BaseConnection connection)
throws SQLException {
throw new SQLFeatureNotSupportedException();
}
/**
* {@inheritDoc}
*/
@Override
public void populateFromString(A arr, List</* @Nullable */ String> strings, BaseConnection connection) throws SQLException {
final /* @Nullable */ Object[] array = (Object[]) arr;
for (int i = 0, j = strings.size(); i < j; i++) {
final String stringVal = strings.get(i);
array[i] = stringVal != null ? parseValue(stringVal, connection) : null;
}
}
abstract Object parseValue(String stringVal, BaseConnection connection) throws SQLException;
}
private abstract static class AbstractObjectArrayDecoder<A> extends AbstractObjectStringArrayDecoder<A> {
AbstractObjectArrayDecoder(Class<?> baseClazz) {
super(baseClazz);
}
/**
* {@inheritDoc}
*/
@Override
public boolean supportBinary() {
return true;
}
@Override
public void populateFromBinary(A arr, /* @NonNegative */ int index, /* @NonNegative */ int count, ByteBuffer bytes, BaseConnection connection)
throws SQLException {
final /* @Nullable */ Object[] array = (Object[]) arr;
// skip through to the requested index
for (int i = 0; i < index; i++) {
final int length = bytes.getInt();
if (length > 0) {
bytes.position(bytes.position() + length);
}
}
for (int i = 0; i < count; i++) {
final int length = bytes.getInt();
if (length != -1) {
array[i] = parseValue(length, bytes, connection);
} else {
// explicitly set to null for reader's clarity
array[i] = null;
}
}
}
abstract Object parseValue(int length, ByteBuffer bytes, BaseConnection connection) throws SQLException;
}
private static final ArrayDecoder LONG_OBJ_ARRAY = new AbstractObjectArrayDecoder(Long.class) {
@Override
Object parseValue(int length, ByteBuffer bytes, BaseConnection connection) {
return bytes.getLong();
}
@Override
Object parseValue(String stringVal, BaseConnection connection) throws SQLException {
return PgResultSet.toLong(stringVal);
}
};
private static final ArrayDecoder INT4_UNSIGNED_OBJ_ARRAY = new AbstractObjectArrayDecoder(
Long.class) {
@Override
Object parseValue(int length, ByteBuffer bytes, BaseConnection connection) {
return bytes.getInt() & 0xFFFFFFFFL;
}
@Override
Object parseValue(String stringVal, BaseConnection connection) throws SQLException {
return PgResultSet.toLong(stringVal);
}
};
private static final ArrayDecoder INTEGER_OBJ_ARRAY = new AbstractObjectArrayDecoder(
Integer.class) {
@Override
Object parseValue(int length, ByteBuffer bytes, BaseConnection connection) {
return bytes.getInt();
}
@Override
Object parseValue(String stringVal, BaseConnection connection) throws SQLException {
return PgResultSet.toInt(stringVal);
}
};
private static final ArrayDecoder SHORT_OBJ_ARRAY = new AbstractObjectArrayDecoder(Short.class) {
@Override
Object parseValue(int length, ByteBuffer bytes, BaseConnection connection) {
return bytes.getShort();
}
@Override
Object parseValue(String stringVal, BaseConnection connection) throws SQLException {
return PgResultSet.toShort(stringVal);
}
};
private static final ArrayDecoder DOUBLE_OBJ_ARRAY = new AbstractObjectArrayDecoder(
Double.class) {
@Override
Object parseValue(int length, ByteBuffer bytes, BaseConnection connection) {
return bytes.getDouble();
}
@Override
Object parseValue(String stringVal, BaseConnection connection) throws SQLException {
return PgResultSet.toDouble(stringVal);
}
};
private static final ArrayDecoder FLOAT_OBJ_ARRAY = new AbstractObjectArrayDecoder(Float.class) {
@Override
Object parseValue(int length, ByteBuffer bytes, BaseConnection connection) {
return bytes.getFloat();
}
@Override
Object parseValue(String stringVal, BaseConnection connection) throws SQLException {
return PgResultSet.toFloat(stringVal);
}
};
private static final ArrayDecoder BOOLEAN_OBJ_ARRAY = new AbstractObjectArrayDecoder(
Boolean.class) {
@Override
Object parseValue(int length, ByteBuffer bytes, BaseConnection connection) {
return bytes.get() == 1;
}
@Override
Object parseValue(String stringVal, BaseConnection connection) throws SQLException {
return BooleanTypeUtil.fromString(stringVal);
}
};
private static final ArrayDecoder STRING_ARRAY = new AbstractObjectArrayDecoder(String.class) {
@Override
Object parseValue(int length, ByteBuffer bytes, BaseConnection connection) throws SQLException {
assert bytes.hasArray();
final byte[] byteArray = bytes.array();
final int offset = bytes.arrayOffset() + bytes.position();
String val;
try {
val = connection.getEncoding().decode(byteArray, offset, length);
} catch (IOException e) {
throw new PSQLException(GT.tr(
"Invalid character data was found. This is most likely caused by stored data containing characters that are invalid for the character set the database was created in. The most common example of this is storing 8bit data in a SQL_ASCII database."),
PSQLState.DATA_ERROR, e);
}
bytes.position(bytes.position() + length);
return val;
}
@Override
Object parseValue(String stringVal, BaseConnection connection) throws SQLException {
return stringVal;
}
};
private static final ArrayDecoder BYTE_ARRAY_ARRAY = new AbstractObjectArrayDecoder(
byte[].class) {
/**
* {@inheritDoc}
*/
@Override
Object parseValue(int length, ByteBuffer bytes, BaseConnection connection) throws SQLException {
final byte[] array = new byte[length];
bytes.get(array);
return array;
}
@Override
Object parseValue(String stringVal, BaseConnection connection) throws SQLException {
return PGbytea.toBytes(stringVal.getBytes(StandardCharsets.US_ASCII));
}
};
private static final ArrayDecoder BIG_DECIMAL_STRING_DECODER = new AbstractObjectStringArrayDecoder(
BigDecimal.class) {
@Override
Object parseValue(String stringVal, BaseConnection connection) throws SQLException {
return PgResultSet.toBigDecimal(stringVal);
}
};
private static final ArrayDecoder STRING_ONLY_DECODER = new AbstractObjectStringArrayDecoder(
String.class) {
@Override
Object parseValue(String stringVal, BaseConnection connection) throws SQLException {
return stringVal;
}
};
private static final ArrayDecoder DATE_DECODER = new AbstractObjectStringArrayDecoder(
Date.class) {
@Override
@SuppressWarnings("deprecation")
Object parseValue(String stringVal, BaseConnection connection) throws SQLException {
return connection.getTimestampUtils().toDate(null, stringVal);
}
};
private static final ArrayDecoder TIME_DECODER = new AbstractObjectStringArrayDecoder(
Time.class) {
@Override
@SuppressWarnings("deprecation")
Object parseValue(String stringVal, BaseConnection connection) throws SQLException {
return connection.getTimestampUtils().toTime(null, stringVal);
}
};
private static final ArrayDecoder TIMESTAMP_DECODER = new AbstractObjectStringArrayDecoder(
Timestamp.class) {
@Override
@SuppressWarnings("deprecation")
Object parseValue(String stringVal, BaseConnection connection) throws SQLException {
return connection.getTimestampUtils().toTimestamp(null, stringVal);
}
};
/**
* Maps from base type oid to {@link ArrayDecoder} capable of processing
* entries.
*/
@SuppressWarnings("rawtypes")
private static final Map<Integer, ArrayDecoder> OID_TO_DECODER = new HashMap<>(
(int) (21 / .75) + 1);
static {
OID_TO_DECODER.put(Oid.OID, INT4_UNSIGNED_OBJ_ARRAY);
OID_TO_DECODER.put(Oid.INT8, LONG_OBJ_ARRAY);
OID_TO_DECODER.put(Oid.INT4, INTEGER_OBJ_ARRAY);
OID_TO_DECODER.put(Oid.INT2, SHORT_OBJ_ARRAY);
OID_TO_DECODER.put(Oid.MONEY, DOUBLE_OBJ_ARRAY);
OID_TO_DECODER.put(Oid.FLOAT8, DOUBLE_OBJ_ARRAY);
OID_TO_DECODER.put(Oid.FLOAT4, FLOAT_OBJ_ARRAY);
OID_TO_DECODER.put(Oid.TEXT, STRING_ARRAY);
OID_TO_DECODER.put(Oid.VARCHAR, STRING_ARRAY);
// 42.2.x decodes jsonb array as String rather than PGobject
OID_TO_DECODER.put(Oid.JSONB, STRING_ONLY_DECODER);
OID_TO_DECODER.put(Oid.BIT, BOOLEAN_OBJ_ARRAY);
OID_TO_DECODER.put(Oid.BOOL, BOOLEAN_OBJ_ARRAY);
OID_TO_DECODER.put(Oid.BYTEA, BYTE_ARRAY_ARRAY);
OID_TO_DECODER.put(Oid.NUMERIC, BIG_DECIMAL_STRING_DECODER);
OID_TO_DECODER.put(Oid.BPCHAR, STRING_ONLY_DECODER);
OID_TO_DECODER.put(Oid.CHAR, STRING_ONLY_DECODER);
OID_TO_DECODER.put(Oid.JSON, STRING_ONLY_DECODER);
OID_TO_DECODER.put(Oid.DATE, DATE_DECODER);
OID_TO_DECODER.put(Oid.TIME, TIME_DECODER);
OID_TO_DECODER.put(Oid.TIMETZ, TIME_DECODER);
OID_TO_DECODER.put(Oid.TIMESTAMP, TIMESTAMP_DECODER);
OID_TO_DECODER.put(Oid.TIMESTAMPTZ, TIMESTAMP_DECODER);
}
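// Illustrative sketch (not part of the driver source): what the mapping above means when reading
// an array column through java.sql.Array. Column names are hypothetical.
//
//   java.sql.Array a = rs.getArray("int_values");    // e.g. an int4[] column
//   Integer[] ints = (Integer[]) a.getArray();       // Oid.INT4 -> INTEGER_OBJ_ARRAY -> Integer[]
//
//   java.sql.Array n = rs.getArray("amounts");       // e.g. a numeric[] column
//   BigDecimal[] nums = (BigDecimal[]) n.getArray(); // Oid.NUMERIC -> BIG_DECIMAL_STRING_DECODER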
@SuppressWarnings("rawtypes")
private static final class ArrayAssistantObjectArrayDecoder extends AbstractObjectArrayDecoder {
private final ArrayAssistant arrayAssistant;
@SuppressWarnings("unchecked")
ArrayAssistantObjectArrayDecoder(ArrayAssistant arrayAssistant) {
super(arrayAssistant.baseType());
this.arrayAssistant = arrayAssistant;
}
/**
* {@inheritDoc}
*/
@Override
Object parseValue(int length, ByteBuffer bytes, BaseConnection connection) throws SQLException {
assert bytes.hasArray();
final byte[] byteArray = bytes.array();
final int offset = bytes.arrayOffset() + bytes.position();
final Object val = arrayAssistant.buildElement(byteArray, offset, length);
bytes.position(bytes.position() + length);
return val;
}
/**
* {@inheritDoc}
*/
@Override
Object parseValue(String stringVal, BaseConnection connection) throws SQLException {
return arrayAssistant.buildElement(stringVal);
}
}
private static final class MappedTypeObjectArrayDecoder extends AbstractObjectArrayDecoder {
private final String typeName;
MappedTypeObjectArrayDecoder(String baseTypeName) {
super(Object.class);
this.typeName = baseTypeName;
}
/**
* {@inheritDoc}
*/
@Override
Object parseValue(int length, ByteBuffer bytes, BaseConnection connection) throws SQLException {
final byte[] copy = new byte[length];
bytes.get(copy);
return connection.getObject(typeName, null, copy);
}
/**
* {@inheritDoc}
*/
@Override
Object parseValue(String stringVal, BaseConnection connection) throws SQLException {
return connection.getObject(typeName, stringVal, null);
}
}
@SuppressWarnings("unchecked")
private static <A extends /* @NonNull */ Object> ArrayDecoder<A> getDecoder(int oid, BaseConnection connection) throws SQLException {
final Integer key = oid;
@SuppressWarnings("rawtypes")
final ArrayDecoder decoder = OID_TO_DECODER.get(key);
if (decoder != null) {
return decoder;
}
final ArrayAssistant assistant = ArrayAssistantRegistry.getAssistant(oid);
if (assistant != null) {
return new ArrayAssistantObjectArrayDecoder(assistant);
}
final String typeName = connection.getTypeInfo().getPGType(oid);
if (typeName == null) {
throw Driver.notImplemented(PgArray.class, "readArray(data,oid)");
}
// 42.2.x should return enums as strings
int type = connection.getTypeInfo().getSQLType(typeName);
if (type == Types.CHAR || type == Types.VARCHAR) {
return (ArrayDecoder<A>) STRING_ONLY_DECODER;
}
return (ArrayDecoder<A>) new MappedTypeObjectArrayDecoder(typeName);
}
/**
* Reads binary representation of array into object model.
*
* @param index
* 1 based index of where to start on outermost array.
* @param count
* The number of items to return from outermost array (beginning at
* index ).
* @param bytes
* The binary representation of the array.
* @param connection
* The connection the bytes were retrieved from.
* @return The parsed array.
* @throws SQLException
* For failures encountered during parsing.
*/
@SuppressWarnings("unchecked")
public static Object readBinaryArray(int index, int count, byte[] bytes, BaseConnection connection)
throws SQLException {
final ByteBuffer buffer = ByteBuffer.wrap(bytes);
buffer.order(ByteOrder.BIG_ENDIAN);
final int dimensions = buffer.getInt();
@SuppressWarnings("unused")
final boolean hasNulls = buffer.getInt() != 0;
final int elementOid = buffer.getInt();
@SuppressWarnings("rawtypes")
final ArrayDecoder decoder = getDecoder(elementOid, connection);
if (!decoder.supportBinary()) {
throw Driver.notImplemented(PgArray.class, "readBinaryArray(data,oid)");
}
if (dimensions == 0) {
return decoder.createArray(0);
}
final int adjustedSkipIndex = index > 0 ? index - 1 : 0;
// optimize for single dimension array
if (dimensions == 1) {
int length = buffer.getInt();
buffer.position(buffer.position() + 4);
if (count > 0) {
length = Math.min(length, count);
}
final Object array = decoder.createArray(length);
decoder.populateFromBinary(array, adjustedSkipIndex, length, buffer, connection);
return array;
}
final int[] dimensionLengths = new int[dimensions];
for (int i = 0; i < dimensions; i++) {
dimensionLengths[i] = buffer.getInt();
buffer.position(buffer.position() + 4);
}
if (count > 0) {
dimensionLengths[0] = Math.min(count, dimensionLengths[0]);
}
final Object[] array = decoder.createMultiDimensionalArray(dimensionLengths);
// TODO: in certain circumstances (no nulls, fixed size data types)
// if adjustedSkipIndex is > 0, we could advance through the buffer rather than
// parse our way through throwing away the results
storeValues(array, decoder, buffer, adjustedSkipIndex, dimensionLengths, 0, connection);
return array;
}
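// Worked example (for illustration only, reconstructed from the parsing code above): the binary
// layout readBinaryArray consumes for the one-dimensional int4[] value {1,2}:
//
//   00 00 00 01   dimensions = 1
//   00 00 00 00   hasNulls = 0
//   00 00 00 17   element oid = 23 (int4)
//   00 00 00 02   length of dimension 0 = 2
//   00 00 00 01   lower bound = 1 (skipped via buffer.position(...) above)
//   00 00 00 04   element length, then 00 00 00 01 = value 1
//   00 00 00 04   element length, then 00 00 00 02 = value 2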
@SuppressWarnings("unchecked")
private static <A extends /* @NonNull */ Object> void storeValues(A[] array, ArrayDecoder<A> decoder, ByteBuffer bytes,
int skip, int[] dimensionLengths, int dim, BaseConnection connection) throws SQLException {
assert dim <= dimensionLengths.length - 2;
for (int i = 0; i < skip; i++) {
if (dim == dimensionLengths.length - 2) {
decoder.populateFromBinary(array[0], 0, dimensionLengths[dim + 1], bytes, connection);
} else {
storeValues((/* @NonNull */ A /* @NonNull */[]) array[0], decoder, bytes, 0, dimensionLengths, dim + 1, connection);
}
}
for (int i = 0; i < dimensionLengths[dim]; i++) {
if (dim == dimensionLengths.length - 2) {
decoder.populateFromBinary(array[i], 0, dimensionLengths[dim + 1], bytes, connection);
} else {
storeValues((/* @NonNull */ A /* @NonNull */[]) array[i], decoder, bytes, 0, dimensionLengths, dim + 1, connection);
}
}
}
/**
* Parses the string representation of an array into a {@link PgArrayList}.
*
* @param fieldString
* The array value to parse.
* @param delim
* The delimiter character appropriate for the data type.
* @return A {@link PgArrayList} representing the parsed fieldString .
*/
static PgArrayList buildArrayList(String fieldString, char delim) {
final PgArrayList arrayList = new PgArrayList();
if (fieldString == null) {
return arrayList;
}
final char[] chars = fieldString.toCharArray();
StringBuilder buffer = null;
boolean insideString = false;
// needed for checking if NULL value occurred
boolean wasInsideString = false;
// array dimension arrays
final List<PgArrayList> dims = new ArrayList<>();
// currently processed array
PgArrayList curArray = arrayList;
// Starting with 8.0 non-standard (beginning index
// isn't 1) bounds the dimensions are returned in the
// data formatted like so "[0:3]={0,1,2,3,4}".
// Older versions simply do not return the bounds.
//
// Right now we ignore these bounds, but we could
// consider allowing these index values to be used
// even though the JDBC spec says 1 is the first
// index. I'm not sure what a client would like
// to see, so we just retain the old behavior.
int startOffset = 0;
{
if (chars[0] == '[') {
while (chars[startOffset] != '=') {
startOffset++;
}
startOffset++; // skip =
}
}
for (int i = startOffset; i < chars.length; i++) {
// escape character that we need to skip
if (chars[i] == '\\') {
i++;
} else if (!insideString && chars[i] == '{') {
// subarray start
if (dims.isEmpty()) {
dims.add(arrayList);
} else {
PgArrayList a = new PgArrayList();
PgArrayList p = dims.get(dims.size() - 1);
p.add(a);
dims.add(a);
}
curArray = dims.get(dims.size() - 1);
// number of dimensions
{
for (int t = i + 1; t < chars.length; t++) {
char c = chars[t];
if (c == '{') {
curArray.dimensionsCount++;
} else if (!Character.isWhitespace(c)) {
break;
}
}
}
buffer = new StringBuilder();
continue;
} else if (chars[i] == '"') {
// quoted element
insideString = !insideString;
wasInsideString = true;
continue;
} else if (!insideString && Parser.isArrayWhiteSpace(chars[i])) {
// white space
continue;
} else if ((!insideString && (chars[i] == delim || chars[i] == '}')) || i == chars.length - 1) {
// array end or element end
// when character that is a part of array element
if (chars[i] != '"' && chars[i] != '}' && chars[i] != delim && buffer != null) {
buffer.append(chars[i]);
}
String b = buffer == null ? null : buffer.toString();
// add element to current array
if (b != null && (!b.isEmpty() || wasInsideString)) {
curArray.add(!wasInsideString && "NULL".equals(b) ? null : b);
}
wasInsideString = false;
buffer = new StringBuilder();
// when end of an array
if (chars[i] == '}') {
dims.remove(dims.size() - 1);
// when multi-dimension
if (!dims.isEmpty()) {
curArray = dims.get(dims.size() - 1);
}
buffer = null;
}
continue;
}
if (buffer != null) {
buffer.append(chars[i]);
}
}
return arrayList;
}
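// Illustrative sketch (not part of the driver source): what buildArrayList produces for a few
// text-format array literals (',' is the delimiter for most element types):
//
//   buildArrayList("{1,NULL,\"a b\"}", ',')  -> ["1", null, "a b"], dimensionsCount = 1
//   buildArrayList("{{1,2},{3,4}}", ',')     -> [["1","2"], ["3","4"]], dimensionsCount = 2
//   buildArrayList("[0:1]={5,6}", ',')       -> ["5", "6"] (explicit lower bounds are skipped)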
/**
* Reads {@code String} representation of array into object model.
*
* @param index
* 1 based index of where to start on outermost array.
* @param count
* The number of items to return from outermost array (beginning at
* index ).
* @param oid
* The oid of the base type of the array.
* @param list
* The {@link #buildArrayList(String, char) processed} string
* representation of an array.
* @param connection
* The connection the bytes were retrieved from.
* @return The parsed array.
* @throws SQLException
* For failures encountered during parsing.
*/
@SuppressWarnings({"unchecked", "rawtypes"})
public static Object readStringArray(int index, int count, int oid, PgArrayList list, BaseConnection connection)
throws SQLException {
final ArrayDecoder decoder = getDecoder(oid, connection);
final int dims = list.dimensionsCount;
if (dims == 0) {
return decoder.createArray(0);
}
boolean sublist = false;
int adjustedSkipIndex = 0;
if (index > 1) {
sublist = true;
adjustedSkipIndex = index - 1;
}
int adjustedCount = list.size();
if (count > 0 && count != adjustedCount) {
sublist = true;
adjustedCount = Math.min(adjustedCount, count);
}
final List adjustedList = sublist ? list.subList(adjustedSkipIndex, adjustedSkipIndex + adjustedCount) : list;
if (dims == 1) {
int length = adjustedList.size();
if (count > 0) {
length = Math.min(length, count);
}
final Object array = decoder.createArray(length);
decoder.populateFromString(array, adjustedList, connection);
return array;
}
// dimensions length array (to be used with
// java.lang.reflect.Array.newInstance(Class<?>, int[]))
final int[] dimensionLengths = new int[dims];
dimensionLengths[0] = adjustedCount;
{
List tmpList = (List) adjustedList.get(0);
for (int i = 1; i < dims; i++) {
// TODO: tmpList always non-null?
dimensionLengths[i] = castNonNull(tmpList, "first element of adjustedList is null").size();
if (i != dims - 1) {
tmpList = (List) tmpList.get(0);
}
}
}
final Object[] array = decoder.createMultiDimensionalArray(dimensionLengths);
storeStringValues(array, decoder, adjustedList, dimensionLengths, 0, connection);
return array;
}
@SuppressWarnings({"unchecked", "rawtypes"})
private static <A extends /* @NonNull */ Object> void storeStringValues(A[] array, ArrayDecoder<A> decoder, List list, int[] dimensionLengths,
int dim, BaseConnection connection) throws SQLException {
assert dim <= dimensionLengths.length - 2;
for (int i = 0; i < dimensionLengths[dim]; i++) {
Object element = castNonNull(list.get(i), "list.get(i)");
if (dim == dimensionLengths.length - 2) {
decoder.populateFromString(array[i], (List</* @Nullable */ String>) element, connection);
} else {
storeStringValues((/* @NonNull */ A /* @NonNull */[]) array[i], decoder, (List) element, dimensionLengths, dim + 1, connection);
}
}
}
}
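// Illustrative sketch (not part of the driver source): the index/count parameters handled above
// correspond to java.sql.Array#getArray(long index, int count), which returns a slice:
//
//   java.sql.Array a = rs.getArray(1);               // e.g. the int4[] value {10,20,30,40}
//   Integer[] slice = (Integer[]) a.getArray(2, 2);  // -> {20, 30}; index is 1-based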
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/jdbc/ArrayEncoding.java 0100664 0000000 0000000 00000120435 00000250600 026136 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2020, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.jdbc;
import org.postgresql.core.BaseConnection;
import org.postgresql.core.Encoding;
import org.postgresql.core.Oid;
import org.postgresql.util.ByteConverter;
import org.postgresql.util.GT;
import org.postgresql.util.PGbytea;
import org.postgresql.util.PSQLException;
import org.postgresql.util.PSQLState;
// import org.checkerframework.checker.index.qual.Positive;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.lang.reflect.Array;
import java.sql.SQLException;
import java.sql.SQLFeatureNotSupportedException;
import java.util.HashMap;
import java.util.Map;
/**
* Utility for using arrays in requests.
*
*
* <p>Binary format:</p>
* <ul>
* <li>4 bytes with number of dimensions</li>
* <li>4 bytes, boolean indicating nulls present or not</li>
* <li>4 bytes type oid</li>
* <li>8 bytes describing the length of each dimension (repeated for each dimension)
* <ul>
* <li>4 bytes for length</li>
* <li>4 bytes for lower bound on length to check for overflow (it appears this value can always be 0)</li>
* </ul>
* </li>
* <li>data in depth first element order corresponding number and length of dimensions
* <ul>
* <li>4 bytes describing length of element, {@code 0xFFFFFFFF} ({@code -1}) means {@code null}</li>
* <li>binary representation of element (iff not {@code null}).</li>
* </ul>
* </li>
* </ul>
*
*
*
*
* @author Brett Okken
*/
final class ArrayEncoding {
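// Illustrative sketch (not part of the driver source): the text form produced by the encoders
// below for a few Java arrays (',' is the delimiter for most element types):
//
//   long[] {1, 2, 3}        -> {1,2,3}
//   Double[] {1.5, null}    -> {"1.5",NULL}
//   String[] {"a,b", null}  -> {"a,b",NULL}  (elements are escaped and quoted as needed)
//   byte[][] {{0x01}}       -> {"\\x01"}     (bytea hex format)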
@SuppressWarnings("ExtendsObject")
interface ArrayEncoder<A extends /* @NonNull */ Object> {
/**
* The default array type oid supported by this instance.
*
* @return The default array type oid supported by this instance.
*/
int getDefaultArrayTypeOid();
/**
* Creates {@code String} representation of the array .
*
* @param delim
* The character to use to delimit between elements.
* @param array
* The array to represent as a {@code String}.
* @return {@code String} representation of the array .
*/
String toArrayString(char delim, A array);
/**
* Indicates if an array can be encoded in binary form to array oid .
*
* @param oid
* The array oid to see check for binary support.
* @return Indication of whether
* {@link #toBinaryRepresentation(BaseConnection, Object, int)} is
* supported for oid .
*/
boolean supportBinaryRepresentation(int oid);
/**
* Creates binary representation of the array .
*
* @param connection
* The connection the binary representation will be used on. Attributes
* from the connection might impact how values are translated to
* binary.
* @param array
* The array to binary encode. Must not be {@code null}, but may
* contain {@code null} elements.
* @param oid
* The array type oid to use. Calls to
* {@link #supportBinaryRepresentation(int)} must have returned
* {@code true}.
* @return The binary representation of array .
* @throws SQLFeatureNotSupportedException
* If {@link #supportBinaryRepresentation(int)} is false for
* oid .
*/
byte[] toBinaryRepresentation(BaseConnection connection, A array, int oid)
throws SQLException, SQLFeatureNotSupportedException;
/**
* Append {@code String} representation of array to sb .
*
* @param sb
* The {@link StringBuilder} to append to.
* @param delim
* The delimiter between elements.
* @param array
* The array to represent. Will not be {@code null}, but may contain
* {@code null} elements.
*/
void appendArray(StringBuilder sb, char delim, A array);
}
/**
* Base class to implement {@link ArrayEncoding.ArrayEncoder} and provide
* multi-dimensional support.
*
* @param <A>
* Base array type supported.
*/
@SuppressWarnings("ExtendsObject")
private abstract static class AbstractArrayEncoder<A extends /* @NonNull */ Object>
implements ArrayEncoder<A> {
private final int oid;
final int arrayOid;
/**
*
* @param oid
* The default/primary base oid type.
* @param arrayOid
* The default/primary array oid type.
*/
AbstractArrayEncoder(int oid, int arrayOid) {
this.oid = oid;
this.arrayOid = arrayOid;
}
/**
*
* @param arrayOid
* The array oid to get base oid type for.
* @return The base oid type for the given array oid type given to
* {@link #toBinaryRepresentation(BaseConnection, Object, int)}.
*/
int getTypeOID(@SuppressWarnings("unused") int arrayOid) {
return oid;
}
/**
* By default returns the arrayOid this instance was instantiated with.
*/
@Override
public int getDefaultArrayTypeOid() {
return arrayOid;
}
/**
* Counts the number of {@code null} elements in array .
*
* @param array
* The array to count {@code null} elements in.
* @return The number of {@code null} elements in array .
*/
int countNulls(A array) {
int nulls = 0;
final int arrayLength = Array.getLength(array);
for (int i = 0; i < arrayLength; i++) {
if (Array.get(array, i) == null) {
++nulls;
}
}
return nulls;
}
/**
* Creates {@code byte[]} of just the raw data (no metadata).
*
* @param connection
* The connection the binary representation will be used on.
* @param array
* The array to create binary representation of. Will not be
* {@code null}, but may contain {@code null} elements.
* @return {@code byte[]} of just the raw data (no metadata).
* @throws SQLFeatureNotSupportedException
* If {@link #supportBinaryRepresentation(int)} is false for
* oid .
*/
abstract byte[] toSingleDimensionBinaryRepresentation(BaseConnection connection, A array)
throws SQLException, SQLFeatureNotSupportedException;
/**
* {@inheritDoc}
*/
@Override
public String toArrayString(char delim, A array) {
final StringBuilder sb = new StringBuilder(1024);
appendArray(sb, delim, array);
return sb.toString();
}
/**
* By default returns {@code true} if oid matches the arrayOid
* this instance was instantiated with.
*/
@Override
public boolean supportBinaryRepresentation(int oid) {
return oid == arrayOid;
}
}
/**
* Base class to provide support for {@code Number} based arrays.
*
* @param <N>
* The base type of array.
*/
private abstract static class NumberArrayEncoder<N extends Number> extends AbstractArrayEncoder<N[]> {
private final int fieldSize;
/**
*
* @param fieldSize
* The fixed size to represent each value in binary.
* @param oid
* The base type oid.
* @param arrayOid
* The array type oid.
*/
NumberArrayEncoder(int fieldSize, int oid, int arrayOid) {
super(oid, arrayOid);
this.fieldSize = fieldSize;
}
/**
* {@inheritDoc}
*/
@Override
final int countNulls(N[] array) {
int count = 0;
for (int i = 0; i < array.length; i++) {
if (array[i] == null) {
++count;
}
}
return count;
}
/**
* {@inheritDoc}
*/
@Override
public final byte[] toBinaryRepresentation(BaseConnection connection, N[] array, int oid)
throws SQLException, SQLFeatureNotSupportedException {
assert oid == this.arrayOid;
final int nullCount = countNulls(array);
final byte[] bytes = writeBytes(array, nullCount, 20);
// 1 dimension
ByteConverter.int4(bytes, 0, 1);
// no null
ByteConverter.int4(bytes, 4, nullCount == 0 ? 0 : 1);
// oid
ByteConverter.int4(bytes, 8, getTypeOID(oid));
// length
ByteConverter.int4(bytes, 12, array.length);
// postgresql uses 1 base by default
ByteConverter.int4(bytes, 16, 1);
return bytes;
}
/**
* {@inheritDoc}
*/
@Override
final byte[] toSingleDimensionBinaryRepresentation(BaseConnection connection, N[] array)
throws SQLException, SQLFeatureNotSupportedException {
final int nullCount = countNulls(array);
return writeBytes(array, nullCount, 0);
}
private byte[] writeBytes(final N[] array, final int nullCount, final int offset) {
final int length = offset + (4 * array.length) + (fieldSize * (array.length - nullCount));
final byte[] bytes = new byte[length];
int idx = offset;
for (int i = 0; i < array.length; i++) {
if (array[i] == null) {
ByteConverter.int4(bytes, idx, -1);
idx += 4;
} else {
ByteConverter.int4(bytes, idx, fieldSize);
idx += 4;
write(array[i], bytes, idx);
idx += fieldSize;
}
}
return bytes;
}
/**
* Write single value (number ) to bytes beginning at
* offset .
*
* @param number
* The value to write to bytes . This will never be {@code null}.
* @param bytes
* The {@code byte[]} to write to.
* @param offset
* The offset into bytes to write the number value.
*/
protected abstract void write(N number, byte[] bytes, int offset);
/**
* {@inheritDoc}
*/
@Override
public final void appendArray(StringBuilder sb, char delim, N[] array) {
sb.append('{');
for (int i = 0; i < array.length; i++) {
if (i != 0) {
sb.append(delim);
}
if (array[i] == null) {
sb.append('N').append('U').append('L').append('L');
} else {
sb.append('"');
sb.append(array[i].toString());
sb.append('"');
}
}
sb.append('}');
}
}
/**
* Base support for primitive arrays.
*
* @param <A>
* The primitive array to support.
*/
@SuppressWarnings("ExtendsObject")
private abstract static class FixedSizePrimitiveArrayEncoder<A extends /* @NonNull */ Object>
extends AbstractArrayEncoder<A> {
private final int fieldSize;
FixedSizePrimitiveArrayEncoder(int fieldSize, int oid, int arrayOid) {
super(oid, arrayOid);
this.fieldSize = fieldSize;
}
/**
* {@inheritDoc}
*
*
* Always returns {@code 0}.
*
*/
@Override
final int countNulls(A array) {
return 0;
}
/**
* {@inheritDoc}
*/
@Override
public final byte[] toBinaryRepresentation(BaseConnection connection, A array, int oid)
throws SQLException, SQLFeatureNotSupportedException {
assert oid == arrayOid;
final int arrayLength = Array.getLength(array);
final int length = 20 + ((fieldSize + 4) * arrayLength);
final byte[] bytes = new byte[length];
// 1 dimension
ByteConverter.int4(bytes, 0, 1);
// no null
ByteConverter.int4(bytes, 4, 0);
// oid
ByteConverter.int4(bytes, 8, getTypeOID(oid));
// length
ByteConverter.int4(bytes, 12, arrayLength);
// postgresql uses 1 base by default
ByteConverter.int4(bytes, 16, 1);
write(array, bytes, 20);
return bytes;
}
/**
* {@inheritDoc}
*/
@Override
final byte[] toSingleDimensionBinaryRepresentation(BaseConnection connection, A array)
throws SQLException, SQLFeatureNotSupportedException {
final int length = (fieldSize + 4) * Array.getLength(array);
final byte[] bytes = new byte[length];
write(array, bytes, 0);
return bytes;
}
/**
* Write the entire contents of array to bytes starting at
* offset without metadata describing type or length.
*
* @param array
* The array to write.
* @param bytes
* The {@code byte[]} to write to.
* @param offset
* The offset into bytes to start writing.
*/
protected abstract void write(A array, byte[] bytes, int offset);
}
private static final AbstractArrayEncoder<long[]> LONG_ARRAY = new FixedSizePrimitiveArrayEncoder<long[]>(8, Oid.INT8,
Oid.INT8_ARRAY) {
/**
* {@inheritDoc}
*/
@Override
public void appendArray(StringBuilder sb, char delim, long[] array) {
sb.append('{');
for (int i = 0; i < array.length; i++) {
if (i > 0) {
sb.append(delim);
}
sb.append(array[i]);
}
sb.append('}');
}
/**
* {@inheritDoc}
*/
@Override
protected void write(long[] array, byte[] bytes, int offset) {
int idx = offset;
for (int i = 0; i < array.length; i++) {
bytes[idx + 3] = 8;
ByteConverter.int8(bytes, idx + 4, array[i]);
idx += 12;
}
}
};
private static final AbstractArrayEncoder<Long[]> LONG_OBJ_ARRAY = new NumberArrayEncoder<Long>(8, Oid.INT8,
Oid.INT8_ARRAY) {
@Override
protected void write(Long number, byte[] bytes, int offset) {
ByteConverter.int8(bytes, offset, number.longValue());
}
};
private static final AbstractArrayEncoder<int[]> INT_ARRAY = new FixedSizePrimitiveArrayEncoder<int[]>(4, Oid.INT4,
Oid.INT4_ARRAY) {
/**
* {@inheritDoc}
*/
@Override
public void appendArray(StringBuilder sb, char delim, int[] array) {
sb.append('{');
for (int i = 0; i < array.length; i++) {
if (i > 0) {
sb.append(delim);
}
sb.append(array[i]);
}
sb.append('}');
}
/**
* {@inheritDoc}
*/
@Override
protected void write(int[] array, byte[] bytes, int offset) {
int idx = offset;
for (int i = 0; i < array.length; i++) {
bytes[idx + 3] = 4;
ByteConverter.int4(bytes, idx + 4, array[i]);
idx += 8;
}
}
};
private static final AbstractArrayEncoder<Integer[]> INT_OBJ_ARRAY = new NumberArrayEncoder<Integer>(4, Oid.INT4,
Oid.INT4_ARRAY) {
@Override
protected void write(Integer number, byte[] bytes, int offset) {
ByteConverter.int4(bytes, offset, number.intValue());
}
};
private static final AbstractArrayEncoder<short[]> SHORT_ARRAY = new FixedSizePrimitiveArrayEncoder<short[]>(2,
Oid.INT2, Oid.INT2_ARRAY) {
/**
* {@inheritDoc}
*/
@Override
public void appendArray(StringBuilder sb, char delim, short[] array) {
sb.append('{');
for (int i = 0; i < array.length; i++) {
if (i > 0) {
sb.append(delim);
}
sb.append(array[i]);
}
sb.append('}');
}
/**
* {@inheritDoc}
*/
@Override
protected void write(short[] array, byte[] bytes, int offset) {
int idx = offset;
for (int i = 0; i < array.length; i++) {
bytes[idx + 3] = 2;
ByteConverter.int2(bytes, idx + 4, array[i]);
idx += 6;
}
}
};
private static final AbstractArrayEncoder<Short[]> SHORT_OBJ_ARRAY = new NumberArrayEncoder<Short>(2, Oid.INT2,
Oid.INT2_ARRAY) {
/**
* {@inheritDoc}
*/
@Override
protected void write(Short number, byte[] bytes, int offset) {
ByteConverter.int2(bytes, offset, number.shortValue());
}
};
private static final AbstractArrayEncoder<double[]> DOUBLE_ARRAY = new FixedSizePrimitiveArrayEncoder<double[]>(8,
Oid.FLOAT8, Oid.FLOAT8_ARRAY) {
/**
* {@inheritDoc}
*/
@Override
public void appendArray(StringBuilder sb, char delim, double[] array) {
sb.append('{');
for (int i = 0; i < array.length; i++) {
if (i > 0) {
sb.append(delim);
}
// use quotes to account for any issues with scientific notation
sb.append('"');
sb.append(array[i]);
sb.append('"');
}
sb.append('}');
}
/**
* {@inheritDoc}
*/
@Override
protected void write(double[] array, byte[] bytes, int offset) {
int idx = offset;
for (int i = 0; i < array.length; i++) {
bytes[idx + 3] = 8;
ByteConverter.float8(bytes, idx + 4, array[i]);
idx += 12;
}
}
};
private static final AbstractArrayEncoder<Double[]> DOUBLE_OBJ_ARRAY = new NumberArrayEncoder<Double>(8, Oid.FLOAT8,
Oid.FLOAT8_ARRAY) {
/**
* {@inheritDoc}
*/
@Override
protected void write(Double number, byte[] bytes, int offset) {
ByteConverter.float8(bytes, offset, number.doubleValue());
}
};
private static final AbstractArrayEncoder<float[]> FLOAT_ARRAY = new FixedSizePrimitiveArrayEncoder<float[]>(4,
Oid.FLOAT4, Oid.FLOAT4_ARRAY) {
/**
* {@inheritDoc}
*/
@Override
public void appendArray(StringBuilder sb, char delim, float[] array) {
sb.append('{');
for (int i = 0; i < array.length; i++) {
if (i > 0) {
sb.append(delim);
}
// use quotes to account for any issues with scientific notation
sb.append('"');
sb.append(array[i]);
sb.append('"');
}
sb.append('}');
}
/**
* {@inheritDoc}
*/
@Override
protected void write(float[] array, byte[] bytes, int offset) {
int idx = offset;
for (int i = 0; i < array.length; i++) {
bytes[idx + 3] = 4;
ByteConverter.float4(bytes, idx + 4, array[i]);
idx += 8;
}
}
};
private static final AbstractArrayEncoder<Float[]> FLOAT_OBJ_ARRAY = new NumberArrayEncoder<Float>(4, Oid.FLOAT4,
Oid.FLOAT4_ARRAY) {
/**
* {@inheritDoc}
*/
@Override
protected void write(Float number, byte[] bytes, int offset) {
ByteConverter.float4(bytes, offset, number.floatValue());
}
};
private static final AbstractArrayEncoder<boolean[]> BOOLEAN_ARRAY = new FixedSizePrimitiveArrayEncoder<boolean[]>(1,
Oid.BOOL, Oid.BOOL_ARRAY) {
/**
* {@inheritDoc}
*/
@Override
public void appendArray(StringBuilder sb, char delim, boolean[] array) {
sb.append('{');
for (int i = 0; i < array.length; i++) {
if (i > 0) {
sb.append(delim);
}
sb.append(array[i] ? '1' : '0');
}
sb.append('}');
}
/**
* {@inheritDoc}
*/
@Override
protected void write(boolean[] array, byte[] bytes, int offset) {
int idx = offset;
for (int i = 0; i < array.length; i++) {
bytes[idx + 3] = 1;
ByteConverter.bool(bytes, idx + 4, array[i]);
idx += 5;
}
}
};
private static final AbstractArrayEncoder<Boolean[]> BOOLEAN_OBJ_ARRAY = new AbstractArrayEncoder<Boolean[]>(Oid.BOOL,
Oid.BOOL_ARRAY) {
/**
* {@inheritDoc}
*/
@Override
public byte[] toBinaryRepresentation(BaseConnection connection, Boolean[] array, int oid)
throws SQLException, SQLFeatureNotSupportedException {
assert oid == arrayOid;
final int nullCount = countNulls(array);
final byte[] bytes = writeBytes(array, nullCount, 20);
// 1 dimension
ByteConverter.int4(bytes, 0, 1);
// no null
ByteConverter.int4(bytes, 4, nullCount == 0 ? 0 : 1);
// oid
ByteConverter.int4(bytes, 8, getTypeOID(oid));
// length
ByteConverter.int4(bytes, 12, array.length);
// postgresql uses 1 base by default
ByteConverter.int4(bytes, 16, 1);
return bytes;
}
private byte[] writeBytes(final Boolean[] array, final int nullCount, final int offset) {
final int length = offset + (4 * array.length) + (array.length - nullCount);
final byte[] bytes = new byte[length];
int idx = offset;
for (int i = 0; i < array.length; i++) {
if (array[i] == null) {
ByteConverter.int4(bytes, idx, -1);
idx += 4;
} else {
ByteConverter.int4(bytes, idx, 1);
idx += 4;
write(array[i], bytes, idx);
++idx;
}
}
return bytes;
}
private void write(Boolean bool, byte[] bytes, int idx) {
ByteConverter.bool(bytes, idx, bool.booleanValue());
}
/**
* {@inheritDoc}
*/
@Override
byte[] toSingleDimensionBinaryRepresentation(BaseConnection connection, Boolean[] array)
throws SQLException, SQLFeatureNotSupportedException {
final int nullCount = countNulls(array);
return writeBytes(array, nullCount, 0);
}
/**
* {@inheritDoc}
*/
@Override
public void appendArray(StringBuilder sb, char delim, Boolean[] array) {
sb.append('{');
for (int i = 0; i < array.length; i++) {
if (i != 0) {
sb.append(delim);
}
if (array[i] == null) {
sb.append('N').append('U').append('L').append('L');
} else {
sb.append(array[i].booleanValue() ? '1' : '0');
}
}
sb.append('}');
}
};
private static final AbstractArrayEncoder<String[]> STRING_ARRAY = new AbstractArrayEncoder<String[]>(Oid.VARCHAR,
Oid.VARCHAR_ARRAY) {
/**
* {@inheritDoc}
*/
@Override
int countNulls(String[] array) {
int count = 0;
for (int i = 0; i < array.length; i++) {
if (array[i] == null) {
++count;
}
}
return count;
}
/**
* {@inheritDoc}
*/
@Override
public boolean supportBinaryRepresentation(int oid) {
return oid == Oid.VARCHAR_ARRAY || oid == Oid.TEXT_ARRAY;
}
/**
* {@inheritDoc}
*/
@Override
int getTypeOID(int arrayOid) {
if (arrayOid == Oid.VARCHAR_ARRAY) {
return Oid.VARCHAR;
}
if (arrayOid == Oid.TEXT_ARRAY) {
return Oid.TEXT;
}
// this should not be possible based on supportBinaryRepresentation returning
// false for all other types
throw new IllegalStateException("Invalid array oid: " + arrayOid);
}
/**
* {@inheritDoc}
*/
@Override
public void appendArray(StringBuilder sb, char delim, String[] array) {
sb.append('{');
for (int i = 0; i < array.length; i++) {
if (i > 0) {
sb.append(delim);
}
if (array[i] == null) {
sb.append('N').append('U').append('L').append('L');
} else {
PgArray.escapeArrayElement(sb, array[i]);
}
}
sb.append('}');
}
/**
* {@inheritDoc}
*/
@Override
public byte[] toBinaryRepresentation(BaseConnection connection, String[] array, int oid) throws SQLException {
final ByteArrayOutputStream baos = new ByteArrayOutputStream(Math.min(1024, (array.length * 32) + 20));
assert supportBinaryRepresentation(oid);
final byte[] buffer = new byte[4];
try {
// 1 dimension
ByteConverter.int4(buffer, 0, 1);
baos.write(buffer);
// null
ByteConverter.int4(buffer, 0, countNulls(array) > 0 ? 1 : 0);
baos.write(buffer);
// oid
ByteConverter.int4(buffer, 0, getTypeOID(oid));
baos.write(buffer);
// length
ByteConverter.int4(buffer, 0, array.length);
baos.write(buffer);
// postgresql uses 1 base by default
ByteConverter.int4(buffer, 0, 1);
baos.write(buffer);
final Encoding encoding = connection.getEncoding();
for (int i = 0; i < array.length; i++) {
final String string = array[i];
if (string != null) {
final byte[] encoded;
try {
encoded = encoding.encode(string);
} catch (IOException e) {
throw new PSQLException(GT.tr("Unable to translate data into the desired encoding."),
PSQLState.DATA_ERROR, e);
}
ByteConverter.int4(buffer, 0, encoded.length);
baos.write(buffer);
baos.write(encoded);
} else {
ByteConverter.int4(buffer, 0, -1);
baos.write(buffer);
}
}
return baos.toByteArray();
} catch (IOException e) {
// this IO exception is from writing to baos, which will never throw an
// IOException
throw new java.lang.AssertionError(e);
}
}
/**
* {@inheritDoc}
*/
@Override
byte[] toSingleDimensionBinaryRepresentation(BaseConnection connection, String[] array)
throws SQLException, SQLFeatureNotSupportedException {
try {
final ByteArrayOutputStream baos = new ByteArrayOutputStream(Math.min(1024, (array.length * 32) + 20));
final byte[] buffer = new byte[4];
final Encoding encoding = connection.getEncoding();
for (int i = 0; i < array.length; i++) {
final String string = array[i];
if (string != null) {
final byte[] encoded;
try {
encoded = encoding.encode(string);
} catch (IOException e) {
throw new PSQLException(GT.tr("Unable to translate data into the desired encoding."),
PSQLState.DATA_ERROR, e);
}
ByteConverter.int4(buffer, 0, encoded.length);
baos.write(buffer);
baos.write(encoded);
} else {
ByteConverter.int4(buffer, 0, -1);
baos.write(buffer);
}
}
return baos.toByteArray();
} catch (IOException e) {
// this IO exception is from writing to baos, which will never throw an
// IOException
throw new java.lang.AssertionError(e);
}
}
};
private static final AbstractArrayEncoder<byte[][]> BYTEA_ARRAY = new AbstractArrayEncoder<byte[][]>(Oid.BYTEA,
Oid.BYTEA_ARRAY) {
/**
* {@inheritDoc}
*/
@Override
public byte[] toBinaryRepresentation(BaseConnection connection, byte[][] array, int oid)
throws SQLException, SQLFeatureNotSupportedException {
assert oid == arrayOid;
int length = 20;
for (int i = 0; i < array.length; i++) {
length += 4;
if (array[i] != null) {
length += array[i].length;
}
}
final byte[] bytes = new byte[length];
// 1 dimension
ByteConverter.int4(bytes, 0, 1);
// no null
ByteConverter.int4(bytes, 4, 0);
// oid
ByteConverter.int4(bytes, 8, getTypeOID(oid));
// length
ByteConverter.int4(bytes, 12, array.length);
// postgresql uses 1 base by default
ByteConverter.int4(bytes, 16, 1);
write(array, bytes, 20);
return bytes;
}
/**
* {@inheritDoc}
*/
@Override
byte[] toSingleDimensionBinaryRepresentation(BaseConnection connection, byte[][] array)
throws SQLException, SQLFeatureNotSupportedException {
int length = 0;
for (int i = 0; i < array.length; i++) {
length += 4;
if (array[i] != null) {
length += array[i].length;
}
}
final byte[] bytes = new byte[length];
write(array, bytes, 0);
return bytes;
}
/**
* {@inheritDoc}
*/
@Override
int countNulls(byte[][] array) {
int nulls = 0;
for (int i = 0; i < array.length; i++) {
if (array[i] == null) {
++nulls;
}
}
return nulls;
}
private void write(byte[][] array, byte[] bytes, int offset) {
int idx = offset;
for (int i = 0; i < array.length; i++) {
if (array[i] != null) {
ByteConverter.int4(bytes, idx, array[i].length);
idx += 4;
System.arraycopy(array[i], 0, bytes, idx, array[i].length);
idx += array[i].length;
} else {
ByteConverter.int4(bytes, idx, -1);
idx += 4;
}
}
}
/**
* {@inheritDoc}
*/
@Override
public void appendArray(StringBuilder sb, char delim, byte[][] array) {
sb.append('{');
for (int i = 0; i < array.length; i++) {
if (i > 0) {
sb.append(delim);
}
if (array[i] != null) {
sb.append("\"\\\\x");
PGbytea.appendHexString(sb, array[i], 0, array[i].length);
sb.append('"');
} else {
sb.append("NULL");
}
}
sb.append('}');
}
};
private static final AbstractArrayEncoder<Object[]> OBJECT_ARRAY = new AbstractArrayEncoder<Object[]>(0, 0) {
@Override
public int getDefaultArrayTypeOid() {
return 0;
}
/**
* {@inheritDoc}
*/
@Override
public boolean supportBinaryRepresentation(int oid) {
return false;
}
@Override
public byte[] toBinaryRepresentation(BaseConnection connection, Object[] array, int oid)
throws SQLException, SQLFeatureNotSupportedException {
throw new SQLFeatureNotSupportedException();
}
@Override
byte[] toSingleDimensionBinaryRepresentation(BaseConnection connection, Object[] array)
throws SQLException, SQLFeatureNotSupportedException {
throw new SQLFeatureNotSupportedException();
}
@Override
public void appendArray(StringBuilder sb, char delim, Object[] array) {
sb.append('{');
for (int i = 0; i < array.length; i++) {
if (i > 0) {
sb.append(delim);
}
if (array[i] == null) {
sb.append('N').append('U').append('L').append('L');
} else if (array[i].getClass().isArray()) {
if (array[i] instanceof byte[]) {
throw new UnsupportedOperationException("byte[] nested inside Object[]");
}
try {
getArrayEncoder(array[i]).appendArray(sb, delim, array[i]);
} catch (PSQLException e) {
// this should never happen
throw new IllegalStateException(e);
}
} else {
PgArray.escapeArrayElement(sb, array[i].toString());
}
}
sb.append('}');
}
};
@SuppressWarnings("rawtypes")
private static final Map<Class, AbstractArrayEncoder> ARRAY_CLASS_TO_ENCODER = new HashMap<>(
(int) (14 / .75) + 1);
static {
ARRAY_CLASS_TO_ENCODER.put(long.class, LONG_ARRAY);
ARRAY_CLASS_TO_ENCODER.put(Long.class, LONG_OBJ_ARRAY);
ARRAY_CLASS_TO_ENCODER.put(int.class, INT_ARRAY);
ARRAY_CLASS_TO_ENCODER.put(Integer.class, INT_OBJ_ARRAY);
ARRAY_CLASS_TO_ENCODER.put(short.class, SHORT_ARRAY);
ARRAY_CLASS_TO_ENCODER.put(Short.class, SHORT_OBJ_ARRAY);
ARRAY_CLASS_TO_ENCODER.put(double.class, DOUBLE_ARRAY);
ARRAY_CLASS_TO_ENCODER.put(Double.class, DOUBLE_OBJ_ARRAY);
ARRAY_CLASS_TO_ENCODER.put(float.class, FLOAT_ARRAY);
ARRAY_CLASS_TO_ENCODER.put(Float.class, FLOAT_OBJ_ARRAY);
ARRAY_CLASS_TO_ENCODER.put(boolean.class, BOOLEAN_ARRAY);
ARRAY_CLASS_TO_ENCODER.put(Boolean.class, BOOLEAN_OBJ_ARRAY);
ARRAY_CLASS_TO_ENCODER.put(byte[].class, BYTEA_ARRAY);
ARRAY_CLASS_TO_ENCODER.put(String.class, STRING_ARRAY);
}
/**
* Returns support for encoding array .
*
* @param array
* The array to encode. Must not be {@code null}.
* @return An instance capable of encoding array as a {@code String} at
* minimum. Some types may support binary encoding.
* @throws PSQLException
* if array is not a supported type.
* @see ArrayEncoding.ArrayEncoder#supportBinaryRepresentation(int)
*/
@SuppressWarnings({"unchecked", "rawtypes", "ExtendsObject"})
public static <A extends /* @NonNull */ Object> ArrayEncoder<A> getArrayEncoder(A array) throws PSQLException {
final Class<?> arrayClazz = array.getClass();
Class<?> subClazz = arrayClazz.getComponentType();
if (subClazz == null) {
throw new PSQLException(GT.tr("Invalid elements {0}", array), PSQLState.INVALID_PARAMETER_TYPE);
}
AbstractArrayEncoder support = ARRAY_CLASS_TO_ENCODER.get(subClazz);
if (support != null) {
return support;
}
Class<?> subSubClazz = subClazz.getComponentType();
if (subSubClazz == null) {
if (Object.class.isAssignableFrom(subClazz)) {
return (ArrayEncoder<A>) OBJECT_ARRAY;
}
throw new PSQLException(GT.tr("Invalid elements {0}", array), PSQLState.INVALID_PARAMETER_TYPE);
}
subClazz = subSubClazz;
int dimensions = 2;
while (subClazz != null) {
support = ARRAY_CLASS_TO_ENCODER.get(subClazz);
if (support != null) {
if (dimensions == 2) {
return new TwoDimensionPrimitiveArrayEncoder(support);
}
return new RecursiveArrayEncoder(support, dimensions);
}
subSubClazz = subClazz.getComponentType();
if (subSubClazz == null) {
if (Object.class.isAssignableFrom(subClazz)) {
if (dimensions == 2) {
return new TwoDimensionPrimitiveArrayEncoder(OBJECT_ARRAY);
}
return new RecursiveArrayEncoder(OBJECT_ARRAY, dimensions);
}
}
++dimensions;
subClazz = subSubClazz;
}
throw new PSQLException(GT.tr("Invalid elements {0}", array), PSQLState.INVALID_PARAMETER_TYPE);
}
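// Illustrative usage sketch (not part of the driver source): the standard JDBC entry points that
// end up in getArrayEncoder above. Table/column names are hypothetical.
//
//   Array arr = conn.createArrayOf("int4", new Integer[]{1, 2, 3});
//   try (PreparedStatement ps = conn.prepareStatement("UPDATE t SET vals = ? WHERE id = 1")) {
//     ps.setArray(1, arr);
//     ps.executeUpdate();
//   }
//
// A plain Java array passed to PreparedStatement.setObject is also expected to resolve its
// encoder through this lookup.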
/**
* Wraps an {@link AbstractArrayEncoder} implementation and provides optimized
* support for 2 dimensions.
*/
@SuppressWarnings("ExtendsObject")
private static final class TwoDimensionPrimitiveArrayEncoder<A extends /* @NonNull */ Object> implements ArrayEncoder<A[]> {
private final AbstractArrayEncoder<A> support;
/**
* @param support
* The instance providing support for the base array type.
*/
TwoDimensionPrimitiveArrayEncoder(AbstractArrayEncoder<A> support) {
super();
this.support = support;
}
/**
* {@inheritDoc}
*/
@Override
public int getDefaultArrayTypeOid() {
return support.getDefaultArrayTypeOid();
}
/**
* {@inheritDoc}
*/
@Override
public String toArrayString(char delim, A[] array) {
final StringBuilder sb = new StringBuilder(1024);
appendArray(sb, delim, array);
return sb.toString();
}
/**
* {@inheritDoc}
*/
@Override
public void appendArray(StringBuilder sb, char delim, A[] array) {
sb.append('{');
for (int i = 0; i < array.length; i++) {
if (i > 0) {
sb.append(delim);
}
support.appendArray(sb, delim, array[i]);
}
sb.append('}');
}
/**
* {@inheritDoc}
*/
@Override
public boolean supportBinaryRepresentation(int oid) {
return support.supportBinaryRepresentation(oid);
}
/**
* {@inheritDoc} 4 bytes - dimension 4 bytes - oid 4 bytes - ? 8*d bytes -
* dimension length
*/
@Override
public byte[] toBinaryRepresentation(BaseConnection connection, A[] array, int oid)
throws SQLException, SQLFeatureNotSupportedException {
final ByteArrayOutputStream baos = new ByteArrayOutputStream(Math.min(1024, (array.length * 32) + 20));
final byte[] buffer = new byte[4];
boolean hasNulls = false;
for (int i = 0; !hasNulls && i < array.length; i++) {
if (support.countNulls(array[i]) > 0) {
hasNulls = true;
}
}
try {
// 2 dimension
ByteConverter.int4(buffer, 0, 2);
baos.write(buffer);
// nulls
ByteConverter.int4(buffer, 0, hasNulls ? 1 : 0);
baos.write(buffer);
// oid
ByteConverter.int4(buffer, 0, support.getTypeOID(oid));
baos.write(buffer);
// length
ByteConverter.int4(buffer, 0, array.length);
baos.write(buffer);
// postgres defaults to 1 based lower bound
ByteConverter.int4(buffer, 0, 1);
baos.write(buffer);
ByteConverter.int4(buffer, 0, array.length > 0 ? Array.getLength(array[0]) : 0);
baos.write(buffer);
// postgresql uses 1 base by default
ByteConverter.int4(buffer, 0, 1);
baos.write(buffer);
for (int i = 0; i < array.length; i++) {
baos.write(support.toSingleDimensionBinaryRepresentation(connection, array[i]));
}
return baos.toByteArray();
} catch (IOException e) {
// this IO exception is from writing to baos, which will never throw an
// IOException
throw new java.lang.AssertionError(e);
}
}
}
/**
* Wraps an {@link AbstractArrayEncoder} implementation and provides support for
* 2 or more dimensions using recursion.
*/
@SuppressWarnings({"unchecked", "rawtypes"})
private static final class RecursiveArrayEncoder implements ArrayEncoder<Object> {
private final AbstractArrayEncoder support;
private final /* @Positive */ int dimensions;
/**
* @param support
* The instance providing support for the base array type.
*/
RecursiveArrayEncoder(AbstractArrayEncoder support, /* @Positive */ int dimensions) {
super();
this.support = support;
this.dimensions = dimensions;
assert dimensions >= 2;
}
/**
* {@inheritDoc}
*/
@Override
public int getDefaultArrayTypeOid() {
return support.getDefaultArrayTypeOid();
}
/**
* {@inheritDoc}
*/
@Override
public String toArrayString(char delim, Object array) {
final StringBuilder sb = new StringBuilder(2048);
arrayString(sb, array, delim, dimensions);
return sb.toString();
}
/**
* {@inheritDoc}
*/
@Override
public void appendArray(StringBuilder sb, char delim, Object array) {
arrayString(sb, array, delim, dimensions);
}
private void arrayString(StringBuilder sb, Object array, char delim, int depth) {
if (depth > 1) {
sb.append('{');
for (int i = 0, j = Array.getLength(array); i < j; i++) {
if (i > 0) {
sb.append(delim);
}
arrayString(sb, Array.get(array, i), delim, depth - 1);
}
sb.append('}');
} else {
support.appendArray(sb, delim, array);
}
}
/**
* {@inheritDoc}
*/
@Override
public boolean supportBinaryRepresentation(int oid) {
return support.supportBinaryRepresentation(oid);
}
private boolean hasNulls(Object array, int depth) {
if (depth > 1) {
for (int i = 0, j = Array.getLength(array); i < j; i++) {
if (hasNulls(Array.get(array, i), depth - 1)) {
return true;
}
}
return false;
}
return support.countNulls(array) > 0;
}
/**
* {@inheritDoc}
*/
@Override
public byte[] toBinaryRepresentation(BaseConnection connection, Object array, int oid)
throws SQLException, SQLFeatureNotSupportedException {
final boolean hasNulls = hasNulls(array, dimensions);
final ByteArrayOutputStream baos = new ByteArrayOutputStream(1024 * dimensions);
final byte[] buffer = new byte[4];
try {
// dimensions
ByteConverter.int4(buffer, 0, dimensions);
baos.write(buffer);
// nulls
ByteConverter.int4(buffer, 0, hasNulls ? 1 : 0);
baos.write(buffer);
// oid
ByteConverter.int4(buffer, 0, support.getTypeOID(oid));
baos.write(buffer);
// length
ByteConverter.int4(buffer, 0, Array.getLength(array));
baos.write(buffer);
// postgresql uses 1 base by default
ByteConverter.int4(buffer, 0, 1);
baos.write(buffer);
writeArray(connection, buffer, baos, array, dimensions, true);
return baos.toByteArray();
} catch (IOException e) {
// this IO exception is from writing to baos, which will never throw an
// IOException
throw new java.lang.AssertionError(e);
}
}
private void writeArray(BaseConnection connection, byte[] buffer, ByteArrayOutputStream baos,
Object array, int depth, boolean first) throws IOException, SQLException {
final int length = Array.getLength(array);
if (first) {
ByteConverter.int4(buffer, 0, length > 0 ? Array.getLength(Array.get(array, 0)) : 0);
baos.write(buffer);
// postgresql uses 1 base by default
ByteConverter.int4(buffer, 0, 1);
baos.write(buffer);
}
for (int i = 0; i < length; i++) {
final Object subArray = Array.get(array, i);
if (depth > 2) {
writeArray(connection, buffer, baos, subArray, depth - 1, i == 0);
} else {
baos.write(support.toSingleDimensionBinaryRepresentation(connection, subArray));
}
}
}
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/jdbc/AutoSave.java 0100664 0000000 0000000 00000000764 00000250600 025142 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2005, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.jdbc;
import java.util.Locale;
public enum AutoSave {
NEVER,
ALWAYS,
CONSERVATIVE;
private final String value;
AutoSave() {
value = this.name().toLowerCase(Locale.ROOT);
}
public String value() {
return value;
}
public static AutoSave of(String value) {
return valueOf(value.toUpperCase(Locale.ROOT));
}
}
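// Illustrative usage sketch (not part of the driver source): AutoSave values correspond to the
// driver's "autosave" connection property (never/always/conservative). URL and credentials below
// are placeholders.
//
//   Properties props = new Properties();
//   props.setProperty("user", "test");
//   props.setProperty("password", "secret");
//   props.setProperty("autosave", AutoSave.CONSERVATIVE.value());  // "conservative"
//   Connection conn = DriverManager.getConnection("jdbc:postgresql://localhost/test", props);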
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/jdbc/BatchResultHandler.java 0100664 0000000 0000000 00000020650 00000250600 027125 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2016, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.jdbc;
import static org.postgresql.util.internal.Nullness.castNonNull;
import org.postgresql.core.Field;
import org.postgresql.core.ParameterList;
import org.postgresql.core.Query;
import org.postgresql.core.ResultCursor;
import org.postgresql.core.ResultHandlerBase;
import org.postgresql.core.Tuple;
import org.postgresql.core.v3.BatchedQuery;
import org.postgresql.util.GT;
import org.postgresql.util.PSQLException;
import org.postgresql.util.PSQLState;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.sql.BatchUpdateException;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.SQLWarning;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
/**
* Internal class; it is not part of the public API.
*/
public class BatchResultHandler extends ResultHandlerBase {
private final PgStatement pgStatement;
private int resultIndex;
private final Query[] queries;
private final long[] longUpdateCounts;
private final /* @Nullable */ ParameterList /* @Nullable */ [] parameterLists;
private final boolean expectGeneratedKeys;
private /* @Nullable */ PgResultSet generatedKeys;
private int committedRows; // 0 means no rows committed. 1 means row 0 was committed, and so on
private final /* @Nullable */ List<List<Tuple>> allGeneratedRows;
private /* @Nullable */ List<Tuple> latestGeneratedRows;
private /* @Nullable */ PgResultSet latestGeneratedKeysRs;
BatchResultHandler(PgStatement pgStatement, Query[] queries,
/* @Nullable */ ParameterList /* @Nullable */ [] parameterLists,
boolean expectGeneratedKeys) {
this.pgStatement = pgStatement;
this.queries = queries;
this.parameterLists = parameterLists;
this.longUpdateCounts = new long[queries.length];
this.expectGeneratedKeys = expectGeneratedKeys;
this.allGeneratedRows = !expectGeneratedKeys ? null : new ArrayList<List<Tuple>>();
}
@Override
public void handleResultRows(Query fromQuery, Field[] fields, List<Tuple> tuples,
/* @Nullable */ ResultCursor cursor) {
// If SELECT, then handleCommandStatus call would just be missing
resultIndex++;
if (!expectGeneratedKeys) {
// No rows expected -> just ignore rows
return;
}
if (generatedKeys == null) {
try {
// If SELECT, the resulting ResultSet is not valid
// Thus it is up to handleCommandStatus to decide if resultSet is good enough
latestGeneratedKeysRs = (PgResultSet) pgStatement.createResultSet(fromQuery, fields,
new ArrayList<>(), cursor);
} catch (SQLException e) {
handleError(e);
}
}
latestGeneratedRows = tuples;
}
@Override
public void handleCommandStatus(String status, long updateCount, long insertOID) {
List<Tuple> latestGeneratedRows = this.latestGeneratedRows;
if (latestGeneratedRows != null) {
// We have DML. Decrease resultIndex that was just increased in handleResultRows
resultIndex--;
// If exception thrown, no need to collect generated keys
// Note: some generated keys might be secured in generatedKeys
if (updateCount > 0 && (getException() == null || isAutoCommit())) {
List<List<Tuple>> allGeneratedRows = castNonNull(this.allGeneratedRows, "allGeneratedRows");
allGeneratedRows.add(latestGeneratedRows);
if (generatedKeys == null) {
generatedKeys = latestGeneratedKeysRs;
}
}
this.latestGeneratedRows = null;
}
if (resultIndex >= queries.length) {
handleError(new PSQLException(GT.tr("Too many update results were returned."),
PSQLState.TOO_MANY_RESULTS));
return;
}
latestGeneratedKeysRs = null;
longUpdateCounts[resultIndex++] = updateCount;
}
private boolean isAutoCommit() {
try {
return pgStatement.getConnection().getAutoCommit();
} catch (SQLException e) {
assert false : "pgStatement.getConnection().getAutoCommit() should not throw";
return false;
}
}
@Override
public void secureProgress() {
if (isAutoCommit()) {
committedRows = resultIndex;
updateGeneratedKeys();
}
}
private void updateGeneratedKeys() {
List<List<Tuple>> allGeneratedRows = this.allGeneratedRows;
if (allGeneratedRows == null || allGeneratedRows.isEmpty()) {
return;
}
PgResultSet generatedKeys = castNonNull(this.generatedKeys, "generatedKeys");
for (List<Tuple> rows : allGeneratedRows) {
generatedKeys.addRows(rows);
}
allGeneratedRows.clear();
}
@Override
public void handleWarning(SQLWarning warning) {
pgStatement.addWarning(warning);
}
@Override
public void handleError(SQLException newError) {
if (getException() == null) {
Arrays.fill(longUpdateCounts, committedRows, longUpdateCounts.length, Statement.EXECUTE_FAILED);
if (allGeneratedRows != null) {
allGeneratedRows.clear();
}
String queryString = "";
if (pgStatement.getPGConnection().getLogServerErrorDetail()) {
if (resultIndex < queries.length) {
queryString = queries[resultIndex].toString(
parameterLists == null ? null : parameterLists[resultIndex]);
}
}
BatchUpdateException batchException;
batchException = new BatchUpdateException(
GT.tr("Batch entry {0} {1} was aborted: {2} Call getNextException to see other errors in the batch.",
resultIndex, queryString, newError.getMessage()),
newError.getSQLState(), 0, uncompressLongUpdateCount(), newError);
super.handleError(batchException);
}
resultIndex++;
super.handleError(newError);
}
@Override
public void handleCompletion() throws SQLException {
updateGeneratedKeys();
SQLException batchException = getException();
if (batchException != null) {
if (isAutoCommit()) {
// Re-create batch exception since rows after exception might indeed succeed.
BatchUpdateException newException;
newException = new BatchUpdateException(
batchException.getMessage(),
batchException.getSQLState(), 0,
uncompressLongUpdateCount(),
batchException.getCause()
);
SQLException next = batchException.getNextException();
if (next != null) {
newException.setNextException(next);
}
batchException = newException;
}
throw batchException;
}
}
public /* @Nullable */ ResultSet getGeneratedKeys() {
return generatedKeys;
}
private int[] uncompressUpdateCount() {
long[] original = uncompressLongUpdateCount();
int[] copy = new int[original.length];
for (int i = 0; i < original.length; i++) {
copy[i] = original[i] > Integer.MAX_VALUE ? Statement.SUCCESS_NO_INFO : (int) original[i];
}
return copy;
}
public int[] getUpdateCount() {
return uncompressUpdateCount();
}
private long[] uncompressLongUpdateCount() {
if (!(queries[0] instanceof BatchedQuery)) {
return longUpdateCounts;
}
int totalRows = 0;
boolean hasRewrites = false;
for (Query query : queries) {
int batchSize = query.getBatchSize();
totalRows += batchSize;
hasRewrites |= batchSize > 1;
}
if (!hasRewrites) {
return longUpdateCounts;
}
/* In this situation there is a batch that has been rewritten. Substitute
* the running total returned by the database with a status code to
* indicate successful completion for each row the driver client added
* to the batch.
*/
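// Example: if three client rows were rewritten into one multi-values statement whose combined
// update count is 3, each of the three per-row entries below becomes Statement.SUCCESS_NO_INFO.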
long[] newUpdateCounts = new long[totalRows];
int offset = 0;
for (int i = 0; i < queries.length; i++) {
Query query = queries[i];
int batchSize = query.getBatchSize();
long superBatchResult = longUpdateCounts[i];
if (batchSize == 1) {
newUpdateCounts[offset++] = superBatchResult;
continue;
}
if (superBatchResult > 0) {
// If some rows were inserted, we do not really know how they were spread over the individual
// statements
superBatchResult = Statement.SUCCESS_NO_INFO;
}
Arrays.fill(newUpdateCounts, offset, offset + batchSize, superBatchResult);
offset += batchSize;
}
return newUpdateCounts;
}
public long[] getLargeUpdateCount() {
return uncompressLongUpdateCount();
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/jdbc/BooleanTypeUtil.java 0100664 0000000 0000000 00000006372 00000250600 026473 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2017, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.jdbc;
import org.postgresql.util.GT;
import org.postgresql.util.PSQLException;
import org.postgresql.util.PSQLState;
import java.util.logging.Level;
import java.util.logging.Logger;
/**
* Helper class for handling the PostgreSQL boolean type.
*
* Based on values accepted by the PostgreSQL server:
* https://www.postgresql.org/docs/current/static/datatype-boolean.html
*/
class BooleanTypeUtil {
private static final Logger LOGGER = Logger.getLogger(BooleanTypeUtil.class.getName());
private BooleanTypeUtil() {
}
/**
* Cast an Object value to the corresponding boolean value.
*
* @param in Object to cast into boolean
* @return boolean value corresponding to the cast of the object
* @throws PSQLException PSQLState.CANNOT_COERCE
*/
static boolean castToBoolean(final Object in) throws PSQLException {
if (LOGGER.isLoggable(Level.FINE)) {
LOGGER.log(Level.FINE, "Cast to boolean: \"{0}\"", String.valueOf(in));
}
if (in instanceof Boolean) {
return (Boolean) in;
}
if (in instanceof String) {
return fromString((String) in);
}
if (in instanceof Character) {
return fromCharacter((Character) in);
}
if (in instanceof Number) {
return fromNumber((Number) in);
}
throw new PSQLException("Cannot cast to boolean", PSQLState.CANNOT_COERCE);
}
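// Examples: castToBoolean("yes") and castToBoolean(1) return true; castToBoolean("off") and
// castToBoolean(0) return false; castToBoolean("maybe") throws with PSQLState.CANNOT_COERCE.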
static boolean fromString(final String strval) throws PSQLException {
// Leading or trailing whitespace is ignored, and case does not matter.
final String val = strval.trim();
if ("1".equals(val) || "true".equalsIgnoreCase(val)
|| "t".equalsIgnoreCase(val) || "yes".equalsIgnoreCase(val)
|| "y".equalsIgnoreCase(val) || "on".equalsIgnoreCase(val)) {
return true;
}
if ("0".equals(val) || "false".equalsIgnoreCase(val)
|| "f".equalsIgnoreCase(val) || "no".equalsIgnoreCase(val)
|| "n".equalsIgnoreCase(val) || "off".equalsIgnoreCase(val)) {
return false;
}
throw cannotCoerceException(strval);
}
private static boolean fromCharacter(final Character charval) throws PSQLException {
if ('1' == charval || 't' == charval || 'T' == charval
|| 'y' == charval || 'Y' == charval) {
return true;
}
if ('0' == charval || 'f' == charval || 'F' == charval
|| 'n' == charval || 'N' == charval) {
return false;
}
throw cannotCoerceException(charval);
}
private static boolean fromNumber(final Number numval) throws PSQLException {
// Handles BigDecimal, Byte, Short, Integer, Long, Float, Double
// based on the widening primitive conversions.
final double value = numval.doubleValue();
if (value == 1.0d) {
return true;
}
if (value == 0.0d) {
return false;
}
throw cannotCoerceException(numval);
}
private static PSQLException cannotCoerceException(final Object value) {
if (LOGGER.isLoggable(Level.FINE)) {
LOGGER.log(Level.FINE, "Cannot cast to boolean: \"{0}\"", String.valueOf(value));
}
return new PSQLException(GT.tr("Cannot cast to boolean: \"{0}\"", String.valueOf(value)),
PSQLState.CANNOT_COERCE);
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/jdbc/CallableBatchResultHandler.java 0100664 0000000 0000000 00000001517 00000250600 030546 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2016, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.jdbc;
import org.postgresql.core.Field;
import org.postgresql.core.ParameterList;
import org.postgresql.core.Query;
import org.postgresql.core.ResultCursor;
import org.postgresql.core.Tuple;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.util.List;
class CallableBatchResultHandler extends BatchResultHandler {
CallableBatchResultHandler(PgStatement statement, Query[] queries,
/* @Nullable */ ParameterList[] parameterLists) {
super(statement, queries, parameterLists, false);
}
@Override
public void handleResultRows(Query fromQuery, Field[] fields, List<Tuple> tuples,
/* @Nullable */ ResultCursor cursor) {
/* ignore */
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/jdbc/EscapeSyntaxCallMode.java 0100664 0000000 0000000 00000001770 00000250600 027421 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2019, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.jdbc;
/**
* Specifies whether a SELECT/CALL statement is used for the underlying SQL for JDBC escape call syntax: 'select' means to
* always use SELECT, 'callIfNoReturn' means to use CALL if there is no return parameter (otherwise use SELECT), and 'call' means
* to always use CALL.
*
* @see org.postgresql.PGProperty#ESCAPE_SYNTAX_CALL_MODE
*/
public enum EscapeSyntaxCallMode {
SELECT("select"),
CALL_IF_NO_RETURN("callIfNoReturn"),
CALL("call");
private final String value;
EscapeSyntaxCallMode(String value) {
this.value = value;
}
public static EscapeSyntaxCallMode of(String mode) {
for (EscapeSyntaxCallMode escapeSyntaxCallMode : values()) {
if (escapeSyntaxCallMode.value.equals(mode)) {
return escapeSyntaxCallMode;
}
}
return SELECT;
}
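// Example: EscapeSyntaxCallMode.of("callIfNoReturn") returns CALL_IF_NO_RETURN; any
// unrecognized mode string falls back to SELECT.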
public String value() {
return value;
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/jdbc/EscapedFunctions.java 0100664 0000000 0000000 00000061412 00000250600 026645 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2004, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.jdbc;
import org.postgresql.util.GT;
import org.postgresql.util.PSQLException;
import org.postgresql.util.PSQLState;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.lang.reflect.Method;
import java.sql.SQLException;
import java.util.HashMap;
import java.util.List;
import java.util.Locale;
import java.util.Map;
/**
* This class stores the supported escaped functions.
*
* @author Xavier Poinsard
* @deprecated see {@link EscapedFunctions2}
*/
@Deprecated
public class EscapedFunctions {
// numeric functions names
public static final String ABS = "abs";
public static final String ACOS = "acos";
public static final String ASIN = "asin";
public static final String ATAN = "atan";
public static final String ATAN2 = "atan2";
public static final String CEILING = "ceiling";
public static final String COS = "cos";
public static final String COT = "cot";
public static final String DEGREES = "degrees";
public static final String EXP = "exp";
public static final String FLOOR = "floor";
public static final String LOG = "log";
public static final String LOG10 = "log10";
public static final String MOD = "mod";
public static final String PI = "pi";
public static final String POWER = "power";
public static final String RADIANS = "radians";
public static final String ROUND = "round";
public static final String SIGN = "sign";
public static final String SIN = "sin";
public static final String SQRT = "sqrt";
public static final String TAN = "tan";
public static final String TRUNCATE = "truncate";
// string function names
public static final String ASCII = "ascii";
public static final String CHAR = "char";
public static final String CONCAT = "concat";
public static final String INSERT = "insert"; // change arguments order
public static final String LCASE = "lcase";
public static final String LEFT = "left";
public static final String LENGTH = "length";
public static final String LOCATE = "locate"; // the 3 args version duplicate args
public static final String LTRIM = "ltrim";
public static final String REPEAT = "repeat";
public static final String REPLACE = "replace";
public static final String RIGHT = "right"; // duplicate args
public static final String RTRIM = "rtrim";
public static final String SPACE = "space";
public static final String SUBSTRING = "substring";
public static final String UCASE = "ucase";
// soundex is implemented on the server side by
// the contrib/fuzzystrmatch module. We provide a translation
// for this in the driver, but since we don't want to bother with run
// time detection of this module's installation we don't report this
// method as supported in DatabaseMetaData.
// difference is currently unsupported entirely.
// date time function names
public static final String CURDATE = "curdate";
public static final String CURTIME = "curtime";
public static final String DAYNAME = "dayname";
public static final String DAYOFMONTH = "dayofmonth";
public static final String DAYOFWEEK = "dayofweek";
public static final String DAYOFYEAR = "dayofyear";
public static final String HOUR = "hour";
public static final String MINUTE = "minute";
public static final String MONTH = "month";
public static final String MONTHNAME = "monthname";
public static final String NOW = "now";
public static final String QUARTER = "quarter";
public static final String SECOND = "second";
public static final String WEEK = "week";
public static final String YEAR = "year";
// for timestampadd and timestampdiff the fractional part of second is not supported
// by the backend
// timestampdiff is very partially supported
public static final String TIMESTAMPADD = "timestampadd";
public static final String TIMESTAMPDIFF = "timestampdiff";
// constants for timestampadd and timestampdiff
public static final String SQL_TSI_ROOT = "SQL_TSI_";
public static final String SQL_TSI_DAY = "DAY";
public static final String SQL_TSI_FRAC_SECOND = "FRAC_SECOND";
public static final String SQL_TSI_HOUR = "HOUR";
public static final String SQL_TSI_MINUTE = "MINUTE";
public static final String SQL_TSI_MONTH = "MONTH";
public static final String SQL_TSI_QUARTER = "QUARTER";
public static final String SQL_TSI_SECOND = "SECOND";
public static final String SQL_TSI_WEEK = "WEEK";
public static final String SQL_TSI_YEAR = "YEAR";
// system functions
public static final String DATABASE = "database";
public static final String IFNULL = "ifnull";
public static final String USER = "user";
/**
* storage for functions implementations.
*/
private static Map<String, Method> functionMap = createFunctionMap();
private static Map<String, Method> createFunctionMap() {
Method[] arrayMeths = EscapedFunctions.class.getDeclaredMethods();
Map<String, Method> functionMap = new HashMap<>(arrayMeths.length * 2);
for (Method meth : arrayMeths) {
if (meth.getName().startsWith("sql")) {
functionMap.put(meth.getName().toLowerCase(Locale.US), meth);
}
}
return functionMap;
}
/**
* get Method object implementing the given function.
*
* @param functionName name of the searched function
* @return a Method object or null if not found
*/
public static /* @Nullable */ Method getFunction(String functionName) {
return functionMap.get("sql" + functionName.toLowerCase(Locale.US));
}
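// Example: getFunction("UCASE") looks up the key "sqlucase" and returns the Method for
// sqlucase(List<?>), or null when no matching sql-prefixed method exists.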
// ** numeric functions translations **
/**
* ceiling to ceil translation.
*
* @param parsedArgs arguments
* @return sql call
* @throws SQLException if something wrong happens
*/
public static String sqlceiling(List<?> parsedArgs) throws SQLException {
return singleArgumentFunctionCall("ceil(", "ceiling", parsedArgs);
}
/**
* log to ln translation.
*
* @param parsedArgs arguments
* @return sql call
* @throws SQLException if something wrong happens
*/
public static String sqllog(List<?> parsedArgs) throws SQLException {
return singleArgumentFunctionCall("ln(", "log", parsedArgs);
}
/**
* log10 to log translation.
*
* @param parsedArgs arguments
* @return sql call
* @throws SQLException if something wrong happens
*/
public static String sqllog10(List<?> parsedArgs) throws SQLException {
return singleArgumentFunctionCall("log(", "log10", parsedArgs);
}
/**
* power to pow translation.
*
* @param parsedArgs arguments
* @return sql call
* @throws SQLException if something wrong happens
*/
public static String sqlpower(List<?> parsedArgs) throws SQLException {
return twoArgumentsFunctionCall("pow(", "power", parsedArgs);
}
/**
* truncate to trunc translation.
*
* @param parsedArgs arguments
* @return sql call
* @throws SQLException if something wrong happens
*/
public static String sqltruncate(List<?> parsedArgs) throws SQLException {
return twoArgumentsFunctionCall("trunc(", "truncate", parsedArgs);
}
// ** string functions translations **
/**
* char to chr translation.
*
* @param parsedArgs arguments
* @return sql call
* @throws SQLException if something wrong happens
*/
public static String sqlchar(List<?> parsedArgs) throws SQLException {
return singleArgumentFunctionCall("chr(", "char", parsedArgs);
}
/**
* concat translation.
*
* @param parsedArgs arguments
* @return sql call
*/
public static String sqlconcat(List<?> parsedArgs) {
StringBuilder buf = new StringBuilder();
buf.append('(');
for (int iArg = 0; iArg < parsedArgs.size(); iArg++) {
buf.append(parsedArgs.get(iArg));
if (iArg != (parsedArgs.size() - 1)) {
buf.append(" || ");
}
}
return buf.append(')').toString();
}
/**
* insert to overlay translation.
*
* @param parsedArgs arguments
* @return sql call
* @throws SQLException if something wrong happens
*/
public static String sqlinsert(List<?> parsedArgs) throws SQLException {
if (parsedArgs.size() != 4) {
throw new PSQLException(GT.tr("{0} function takes four and only four argument.", "insert"),
PSQLState.SYNTAX_ERROR);
}
StringBuilder buf = new StringBuilder();
buf.append("overlay(");
buf.append(parsedArgs.get(0)).append(" placing ").append(parsedArgs.get(3));
buf.append(" from ").append(parsedArgs.get(1)).append(" for ").append(parsedArgs.get(2));
return buf.append(')').toString();
}
/**
* lcase to lower translation.
*
* @param parsedArgs arguments
* @return sql call
* @throws SQLException if something wrong happens
*/
public static String sqllcase(List<?> parsedArgs) throws SQLException {
return singleArgumentFunctionCall("lower(", "lcase", parsedArgs);
}
/**
* left to substring translation.
*
* @param parsedArgs arguments
* @return sql call
* @throws SQLException if something wrong happens
*/
public static String sqlleft(List<?> parsedArgs) throws SQLException {
if (parsedArgs.size() != 2) {
throw new PSQLException(GT.tr("{0} function takes two and only two arguments.", "left"),
PSQLState.SYNTAX_ERROR);
}
StringBuilder buf = new StringBuilder();
buf.append("substring(");
buf.append(parsedArgs.get(0)).append(" for ").append(parsedArgs.get(1));
return buf.append(')').toString();
}
/**
* length translation.
*
* @param parsedArgs arguments
* @return sql call
* @throws SQLException if something wrong happens
*/
public static String sqllength(List<?> parsedArgs) throws SQLException {
if (parsedArgs.size() != 1) {
throw new PSQLException(GT.tr("{0} function takes one and only one argument.", "length"),
PSQLState.SYNTAX_ERROR);
}
StringBuilder buf = new StringBuilder();
buf.append("length(trim(trailing from ");
buf.append(parsedArgs.get(0));
return buf.append("))").toString();
}
/**
* locate translation.
*
* @param parsedArgs arguments
* @return sql call
* @throws SQLException if something wrong happens
*/
public static String sqllocate(List<?> parsedArgs) throws SQLException {
if (parsedArgs.size() == 2) {
return "position(" + parsedArgs.get(0) + " in " + parsedArgs.get(1) + ")";
} else if (parsedArgs.size() == 3) {
String tmp = "position(" + parsedArgs.get(0) + " in substring(" + parsedArgs.get(1) + " from "
+ parsedArgs.get(2) + "))";
return "(" + parsedArgs.get(2) + "*sign(" + tmp + ")+" + tmp + ")";
} else {
throw new PSQLException(GT.tr("{0} function takes two or three arguments.", "locate"),
PSQLState.SYNTAX_ERROR);
}
}
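// In the three-argument form, sign(position(...)) is 0 when the search string is not found,
// so the whole expression collapses to 0 instead of returning the start offset.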
/**
* ltrim translation.
*
* @param parsedArgs arguments
* @return sql call
* @throws SQLException if something wrong happens
*/
public static String sqlltrim(List<?> parsedArgs) throws SQLException {
return singleArgumentFunctionCall("trim(leading from ", "ltrim", parsedArgs);
}
/**
* right to substring translation.
*
* @param parsedArgs arguments
* @return sql call
* @throws SQLException if something wrong happens
*/
public static String sqlright(List<?> parsedArgs) throws SQLException {
if (parsedArgs.size() != 2) {
throw new PSQLException(GT.tr("{0} function takes two and only two arguments.", "right"),
PSQLState.SYNTAX_ERROR);
}
StringBuilder buf = new StringBuilder();
buf.append("substring(");
buf.append(parsedArgs.get(0))
.append(" from (length(")
.append(parsedArgs.get(0))
.append(")+1-")
.append(parsedArgs.get(1));
return buf.append("))").toString();
}
/**
* rtrim translation.
*
* @param parsedArgs arguments
* @return sql call
* @throws SQLException if something wrong happens
*/
public static String sqlrtrim(List<?> parsedArgs) throws SQLException {
return singleArgumentFunctionCall("trim(trailing from ", "rtrim", parsedArgs);
}
/**
* space translation.
*
* @param parsedArgs arguments
* @return sql call
* @throws SQLException if something wrong happens
*/
public static String sqlspace(List<?> parsedArgs) throws SQLException {
return singleArgumentFunctionCall("repeat(' ',", "space", parsedArgs);
}
/**
* substring to substr translation.
*
* @param parsedArgs arguments
* @return sql call
* @throws SQLException if something wrong happens
*/
public static String sqlsubstring(List<?> parsedArgs) throws SQLException {
if (parsedArgs.size() == 2) {
return "substr(" + parsedArgs.get(0) + "," + parsedArgs.get(1) + ")";
} else if (parsedArgs.size() == 3) {
return "substr(" + parsedArgs.get(0) + "," + parsedArgs.get(1) + "," + parsedArgs.get(2)
+ ")";
} else {
throw new PSQLException(GT.tr("{0} function takes two or three arguments.", "substring"),
PSQLState.SYNTAX_ERROR);
}
}
/**
* ucase to upper translation.
*
* @param parsedArgs arguments
* @return sql call
* @throws SQLException if something wrong happens
*/
public static String sqlucase(List<?> parsedArgs) throws SQLException {
return singleArgumentFunctionCall("upper(", "ucase", parsedArgs);
}
/**
* curdate to current_date translation.
*
* @param parsedArgs arguments
* @return sql call
* @throws SQLException if something wrong happens
*/
public static String sqlcurdate(List<?> parsedArgs) throws SQLException {
if (!parsedArgs.isEmpty()) {
throw new PSQLException(GT.tr("{0} function doesn''t take any argument.", "curdate"),
PSQLState.SYNTAX_ERROR);
}
return "current_date";
}
/**
* curtime to current_time translation.
*
* @param parsedArgs arguments
* @return sql call
* @throws SQLException if something wrong happens
*/
public static String sqlcurtime(List<?> parsedArgs) throws SQLException {
if (!parsedArgs.isEmpty()) {
throw new PSQLException(GT.tr("{0} function doesn''t take any argument.", "curtime"),
PSQLState.SYNTAX_ERROR);
}
return "current_time";
}
/**
* dayname translation.
*
* @param parsedArgs arguments
* @return sql call
* @throws SQLException if something wrong happens
*/
public static String sqldayname(List<?> parsedArgs) throws SQLException {
if (parsedArgs.size() != 1) {
throw new PSQLException(GT.tr("{0} function takes one and only one argument.", "dayname"),
PSQLState.SYNTAX_ERROR);
}
return "to_char(" + parsedArgs.get(0) + ",'Day')";
}
/**
* dayofmonth translation.
*
* @param parsedArgs arguments
* @return sql call
* @throws SQLException if something wrong happens
*/
public static String sqldayofmonth(List<?> parsedArgs) throws SQLException {
return singleArgumentFunctionCall("extract(day from ", "dayofmonth", parsedArgs);
}
/**
* dayofweek translation; adds 1 to the PostgreSQL dow value since JDBC expects values from 1 to 7.
*
* @param parsedArgs arguments
* @return sql call
* @throws SQLException if something wrong happens
*/
public static String sqldayofweek(List<?> parsedArgs) throws SQLException {
if (parsedArgs.size() != 1) {
throw new PSQLException(GT.tr("{0} function takes one and only one argument.", "dayofweek"),
PSQLState.SYNTAX_ERROR);
}
return "extract(dow from " + parsedArgs.get(0) + ")+1";
}
/**
* dayofyear translation.
*
* @param parsedArgs arguments
* @return sql call
* @throws SQLException if something wrong happens
*/
public static String sqldayofyear(List<?> parsedArgs) throws SQLException {
return singleArgumentFunctionCall("extract(doy from ", "dayofyear", parsedArgs);
}
/**
* hour translation.
*
* @param parsedArgs arguments
* @return sql call
* @throws SQLException if something wrong happens
*/
public static String sqlhour(List<?> parsedArgs) throws SQLException {
return singleArgumentFunctionCall("extract(hour from ", "hour", parsedArgs);
}
/**
* minute translation.
*
* @param parsedArgs arguments
* @return sql call
* @throws SQLException if something wrong happens
*/
public static String sqlminute(List<?> parsedArgs) throws SQLException {
return singleArgumentFunctionCall("extract(minute from ", "minute", parsedArgs);
}
/**
* month translation.
*
* @param parsedArgs arguments
* @return sql call
* @throws SQLException if something wrong happens
*/
public static String sqlmonth(List<?> parsedArgs) throws SQLException {
return singleArgumentFunctionCall("extract(month from ", "month", parsedArgs);
}
/**
* monthname translation.
*
* @param parsedArgs arguments
* @return sql call
* @throws SQLException if something wrong happens
*/
public static String sqlmonthname(List<?> parsedArgs) throws SQLException {
if (parsedArgs.size() != 1) {
throw new PSQLException(GT.tr("{0} function takes one and only one argument.", "monthname"),
PSQLState.SYNTAX_ERROR);
}
return "to_char(" + parsedArgs.get(0) + ",'Month')";
}
/**
* quarter translation.
*
* @param parsedArgs arguments
* @return sql call
* @throws SQLException if something wrong happens
*/
public static String sqlquarter(List<?> parsedArgs) throws SQLException {
return singleArgumentFunctionCall("extract(quarter from ", "quarter", parsedArgs);
}
/**
* second translation.
*
* @param parsedArgs arguments
* @return sql call
* @throws SQLException if something wrong happens
*/
public static String sqlsecond(List<?> parsedArgs) throws SQLException {
return singleArgumentFunctionCall("extract(second from ", "second", parsedArgs);
}
/**
* week translation.
*
* @param parsedArgs arguments
* @return sql call
* @throws SQLException if something wrong happens
*/
public static String sqlweek(List<?> parsedArgs) throws SQLException {
return singleArgumentFunctionCall("extract(week from ", "week", parsedArgs);
}
/**
* year translation.
*
* @param parsedArgs arguments
* @return sql call
* @throws SQLException if something wrong happens
*/
public static String sqlyear(List<?> parsedArgs) throws SQLException {
return singleArgumentFunctionCall("extract(year from ", "year", parsedArgs);
}
/**
* time stamp add.
*
* @param parsedArgs arguments
* @return sql call
* @throws SQLException if something wrong happens
*/
@SuppressWarnings("TypeParameterExplicitlyExtendsObject")
public static String sqltimestampadd(List<? extends Object> parsedArgs) throws SQLException {
if (parsedArgs.size() != 3) {
throw new PSQLException(
GT.tr("{0} function takes three and only three arguments.", "timestampadd"),
PSQLState.SYNTAX_ERROR);
}
String interval = EscapedFunctions.constantToInterval(parsedArgs.get(0).toString(),
parsedArgs.get(1).toString());
StringBuilder buf = new StringBuilder();
buf.append("(").append(interval).append("+");
buf.append(parsedArgs.get(2)).append(")");
return buf.toString();
}
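// Example: arguments (SQL_TSI_DAY, 3, some_ts) produce
// (CAST(3 || ' day' as interval)+some_ts).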
private static String constantToInterval(String type, String value) throws SQLException {
if (!type.startsWith(SQL_TSI_ROOT)) {
throw new PSQLException(GT.tr("Interval {0} not yet implemented", type),
PSQLState.SYNTAX_ERROR);
}
String shortType = type.substring(SQL_TSI_ROOT.length());
if (SQL_TSI_DAY.equalsIgnoreCase(shortType)) {
return "CAST(" + value + " || ' day' as interval)";
} else if (SQL_TSI_SECOND.equalsIgnoreCase(shortType)) {
return "CAST(" + value + " || ' second' as interval)";
} else if (SQL_TSI_HOUR.equalsIgnoreCase(shortType)) {
return "CAST(" + value + " || ' hour' as interval)";
} else if (SQL_TSI_MINUTE.equalsIgnoreCase(shortType)) {
return "CAST(" + value + " || ' minute' as interval)";
} else if (SQL_TSI_MONTH.equalsIgnoreCase(shortType)) {
return "CAST(" + value + " || ' month' as interval)";
} else if (SQL_TSI_QUARTER.equalsIgnoreCase(shortType)) {
return "CAST((" + value + "::int * 3) || ' month' as interval)";
} else if (SQL_TSI_WEEK.equalsIgnoreCase(shortType)) {
return "CAST(" + value + " || ' week' as interval)";
} else if (SQL_TSI_YEAR.equalsIgnoreCase(shortType)) {
return "CAST(" + value + " || ' year' as interval)";
} else if (SQL_TSI_FRAC_SECOND.equalsIgnoreCase(shortType)) {
throw new PSQLException(GT.tr("Interval {0} not yet implemented", "SQL_TSI_FRAC_SECOND"),
PSQLState.SYNTAX_ERROR);
} else {
throw new PSQLException(GT.tr("Interval {0} not yet implemented", type),
PSQLState.SYNTAX_ERROR);
}
}
/**
* time stamp diff.
*
* @param parsedArgs arguments
* @return sql call
* @throws SQLException if something wrong happens
*/
@SuppressWarnings("TypeParameterExplicitlyExtendsObject")
public static String sqltimestampdiff(List<? extends Object> parsedArgs) throws SQLException {
if (parsedArgs.size() != 3) {
throw new PSQLException(
GT.tr("{0} function takes three and only three arguments.", "timestampdiff"),
PSQLState.SYNTAX_ERROR);
}
String datePart = EscapedFunctions.constantToDatePart(parsedArgs.get(0).toString());
StringBuilder buf = new StringBuilder();
buf.append("extract( ")
.append(datePart)
.append(" from (")
.append(parsedArgs.get(2))
.append("-")
.append(parsedArgs.get(1))
.append("))");
return buf.toString();
}
private static String constantToDatePart(String type) throws SQLException {
if (!type.startsWith(SQL_TSI_ROOT)) {
throw new PSQLException(GT.tr("Interval {0} not yet implemented", type),
PSQLState.SYNTAX_ERROR);
}
String shortType = type.substring(SQL_TSI_ROOT.length());
if (SQL_TSI_DAY.equalsIgnoreCase(shortType)) {
return "day";
} else if (SQL_TSI_SECOND.equalsIgnoreCase(shortType)) {
return "second";
} else if (SQL_TSI_HOUR.equalsIgnoreCase(shortType)) {
return "hour";
} else if (SQL_TSI_MINUTE.equalsIgnoreCase(shortType)) {
return "minute";
} else if (SQL_TSI_FRAC_SECOND.equalsIgnoreCase(shortType)) {
throw new PSQLException(GT.tr("Interval {0} not yet implemented", "SQL_TSI_FRAC_SECOND"),
PSQLState.SYNTAX_ERROR);
} else {
throw new PSQLException(GT.tr("Interval {0} not yet implemented", type),
PSQLState.SYNTAX_ERROR);
}
// See http://archives.postgresql.org/pgsql-jdbc/2006-03/msg00096.php
/*
* else if (SQL_TSI_MONTH.equalsIgnoreCase(shortType)) return "month"; else if
* (SQL_TSI_QUARTER.equalsIgnoreCase(shortType)) return "quarter"; else if
* (SQL_TSI_WEEK.equalsIgnoreCase(shortType)) return "week"; else if
* (SQL_TSI_YEAR.equalsIgnoreCase(shortType)) return "year";
*/
}
/**
* database translation.
*
* @param parsedArgs arguments
* @return sql call
* @throws SQLException if something wrong happens
*/
public static String sqldatabase(List<?> parsedArgs) throws SQLException {
if (!parsedArgs.isEmpty()) {
throw new PSQLException(GT.tr("{0} function doesn''t take any argument.", "database"),
PSQLState.SYNTAX_ERROR);
}
return "current_database()";
}
/**
* ifnull translation.
*
* @param parsedArgs arguments
* @return sql call
* @throws SQLException if something wrong happens
*/
public static String sqlifnull(List<?> parsedArgs) throws SQLException {
return twoArgumentsFunctionCall("coalesce(", "ifnull", parsedArgs);
}
/**
* user translation.
*
* @param parsedArgs arguments
* @return sql call
* @throws SQLException if something wrong happens
*/
public static String sqluser(List<?> parsedArgs) throws SQLException {
if (!parsedArgs.isEmpty()) {
throw new PSQLException(GT.tr("{0} function doesn''t take any argument.", "user"),
PSQLState.SYNTAX_ERROR);
}
return "user";
}
private static String singleArgumentFunctionCall(String call, String functionName,
List<?> parsedArgs) throws PSQLException {
if (parsedArgs.size() != 1) {
throw new PSQLException(GT.tr("{0} function takes one and only one argument.", functionName),
PSQLState.SYNTAX_ERROR);
}
StringBuilder buf = new StringBuilder();
buf.append(call);
buf.append(parsedArgs.get(0));
return buf.append(')').toString();
}
private static String twoArgumentsFunctionCall(String call, String functionName,
List<?> parsedArgs) throws PSQLException {
if (parsedArgs.size() != 2) {
throw new PSQLException(GT.tr("{0} function takes two and only two arguments.", functionName),
PSQLState.SYNTAX_ERROR);
}
StringBuilder buf = new StringBuilder();
buf.append(call);
buf.append(parsedArgs.get(0)).append(',').append(parsedArgs.get(1));
return buf.append(')').toString();
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/jdbc/EscapedFunctions2.java 0100664 0000000 0000000 00000061324 00000250600 026731 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2018, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.jdbc;
import org.postgresql.util.GT;
import org.postgresql.util.PSQLException;
import org.postgresql.util.PSQLState;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.lang.reflect.Method;
import java.sql.SQLException;
import java.util.List;
import java.util.Locale;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
/**
* This class stores the supported escaped functions.
* Note: this is a pgjdbc-internal class, so it is not supposed to be used outside of the driver.
*/
public final class EscapedFunctions2 {
// constants for timestampadd and timestampdiff
private static final String SQL_TSI_ROOT = "SQL_TSI_";
private static final String SQL_TSI_DAY = "SQL_TSI_DAY";
@SuppressWarnings("unused")
private static final String SQL_TSI_FRAC_SECOND = "SQL_TSI_FRAC_SECOND";
private static final String SQL_TSI_HOUR = "SQL_TSI_HOUR";
private static final String SQL_TSI_MINUTE = "SQL_TSI_MINUTE";
private static final String SQL_TSI_MONTH = "SQL_TSI_MONTH";
private static final String SQL_TSI_QUARTER = "SQL_TSI_QUARTER";
private static final String SQL_TSI_SECOND = "SQL_TSI_SECOND";
private static final String SQL_TSI_WEEK = "SQL_TSI_WEEK";
private static final String SQL_TSI_YEAR = "SQL_TSI_YEAR";
/**
* storage for functions implementations
*/
private static final ConcurrentMap<String, Method> FUNCTION_MAP = createFunctionMap("sql");
private static ConcurrentMap<String, Method> createFunctionMap(String prefix) {
Method[] methods = EscapedFunctions2.class.getMethods();
ConcurrentMap<String, Method> functionMap = new ConcurrentHashMap<>(methods.length * 2);
for (Method method : methods) {
if (method.getName().startsWith(prefix)) {
functionMap.put(method.getName().substring(prefix.length()).toLowerCase(Locale.US), method);
}
}
return functionMap;
}
/**
* get Method object implementing the given function
*
* @param functionName name of the searched function
* @return a Method object or null if not found
*/
public static /* @Nullable */ Method getFunction(String functionName) {
Method method = FUNCTION_MAP.get(functionName);
if (method != null) {
return method;
}
//FIXME: this probably should not use the US locale
String nameLower = functionName.toLowerCase(Locale.US);
if (nameLower.equals(functionName)) {
// Input name was in lower case, the function is not there
return null;
}
method = FUNCTION_MAP.get(nameLower);
if (method != null && FUNCTION_MAP.size() < 1000) {
// Avoid OutOfMemoryError in case input function names are randomized
// The number of methods is finite, however the number of upper-lower case combinations
// is quite a few (e.g. substr, Substr, sUbstr, SUbstr, etc).
FUNCTION_MAP.putIfAbsent(functionName, method);
}
return method;
}
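// Example: the first call with "UCASE" misses the exact-case lookup, finds "ucase" after
// lower-casing, and caches the "UCASE" spelling so later lookups hit the map directly.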
// ** numeric functions translations **
/**
* ceiling to ceil translation
*
* @param buf The buffer to append into
* @param parsedArgs arguments
* @throws SQLException if something wrong happens
*/
public static void sqlceiling(StringBuilder buf, List<? extends CharSequence> parsedArgs) throws SQLException {
singleArgumentFunctionCall(buf, "ceil(", "ceiling", parsedArgs);
}
/**
* log to ln translation
*
* @param buf The buffer to append into
* @param parsedArgs arguments
* @throws SQLException if something wrong happens
*/
public static void sqllog(StringBuilder buf, List<? extends CharSequence> parsedArgs) throws SQLException {
singleArgumentFunctionCall(buf, "ln(", "log", parsedArgs);
}
/**
* log10 to log translation
*
* @param buf The buffer to append into
* @param parsedArgs arguments
* @throws SQLException if something wrong happens
*/
public static void sqllog10(StringBuilder buf, List<? extends CharSequence> parsedArgs) throws SQLException {
singleArgumentFunctionCall(buf, "log(", "log10", parsedArgs);
}
/**
* power to pow translation
*
* @param buf The buffer to append into
* @param parsedArgs arguments
* @throws SQLException if something wrong happens
*/
public static void sqlpower(StringBuilder buf, List<? extends CharSequence> parsedArgs) throws SQLException {
twoArgumentsFunctionCall(buf, "pow(", "power", parsedArgs);
}
/**
* truncate to trunc translation
*
* @param buf The buffer to append into
* @param parsedArgs arguments
* @throws SQLException if something wrong happens
*/
public static void sqltruncate(StringBuilder buf, List<? extends CharSequence> parsedArgs) throws SQLException {
twoArgumentsFunctionCall(buf, "trunc(", "truncate", parsedArgs);
}
// ** string functions translations **
/**
* char to chr translation
*
* @param buf The buffer to append into
* @param parsedArgs arguments
* @throws SQLException if something wrong happens
*/
public static void sqlchar(StringBuilder buf, List<? extends CharSequence> parsedArgs) throws SQLException {
singleArgumentFunctionCall(buf, "chr(", "char", parsedArgs);
}
/**
* concat translation
*
* @param buf The buffer to append into
* @param parsedArgs arguments
*/
public static void sqlconcat(StringBuilder buf, List<? extends CharSequence> parsedArgs) {
appendCall(buf, "(", "||", ")", parsedArgs);
}
/**
* insert to overlay translation
*
* @param buf The buffer to append into
* @param parsedArgs arguments
* @throws SQLException if something wrong happens
*/
public static void sqlinsert(StringBuilder buf, List<? extends CharSequence> parsedArgs) throws SQLException {
if (parsedArgs.size() != 4) {
throw new PSQLException(GT.tr("{0} function takes four and only four argument.", "insert"),
PSQLState.SYNTAX_ERROR);
}
buf.append("overlay(");
buf.append(parsedArgs.get(0)).append(" placing ").append(parsedArgs.get(3));
buf.append(" from ").append(parsedArgs.get(1)).append(" for ").append(parsedArgs.get(2));
buf.append(')');
}
/**
* lcase to lower translation
*
* @param buf The buffer to append into
* @param parsedArgs arguments
* @throws SQLException if something wrong happens
*/
public static void sqllcase(StringBuilder buf, List<? extends CharSequence> parsedArgs) throws SQLException {
singleArgumentFunctionCall(buf, "lower(", "lcase", parsedArgs);
}
/**
* left to substring translation
*
* @param buf The buffer to append into
* @param parsedArgs arguments
* @throws SQLException if something wrong happens
*/
public static void sqlleft(StringBuilder buf, List<? extends CharSequence> parsedArgs) throws SQLException {
if (parsedArgs.size() != 2) {
throw new PSQLException(GT.tr("{0} function takes two and only two arguments.", "left"),
PSQLState.SYNTAX_ERROR);
}
appendCall(buf, "substring(", " for ", ")", parsedArgs);
}
/**
* length translation
*
* @param buf The buffer to append into
* @param parsedArgs arguments
* @throws SQLException if something wrong happens
*/
public static void sqllength(StringBuilder buf, List<? extends CharSequence> parsedArgs) throws SQLException {
if (parsedArgs.size() != 1) {
throw new PSQLException(GT.tr("{0} function takes one and only one argument.", "length"),
PSQLState.SYNTAX_ERROR);
}
appendCall(buf, "length(trim(trailing from ", "", "))", parsedArgs);
}
/**
* locate translation
*
* @param buf The buffer to append into
* @param parsedArgs arguments
* @throws SQLException if something wrong happens
*/
public static void sqllocate(StringBuilder buf, List<? extends CharSequence> parsedArgs) throws SQLException {
if (parsedArgs.size() == 2) {
appendCall(buf, "position(", " in ", ")", parsedArgs);
} else if (parsedArgs.size() == 3) {
String tmp = "position(" + parsedArgs.get(0) + " in substring(" + parsedArgs.get(1) + " from "
+ parsedArgs.get(2) + "))";
buf.append("(")
.append(parsedArgs.get(2))
.append("*sign(")
.append(tmp)
.append(")+")
.append(tmp)
.append(")");
} else {
throw new PSQLException(GT.tr("{0} function takes two or three arguments.", "locate"),
PSQLState.SYNTAX_ERROR);
}
}
/**
* ltrim translation
*
* @param buf The buffer to append into
* @param parsedArgs arguments
* @throws SQLException if something wrong happens
*/
public static void sqlltrim(StringBuilder buf, List<? extends CharSequence> parsedArgs) throws SQLException {
singleArgumentFunctionCall(buf, "trim(leading from ", "ltrim", parsedArgs);
}
/**
* right to substring translation
*
* @param buf The buffer to append into
* @param parsedArgs arguments
* @throws SQLException if something wrong happens
*/
public static void sqlright(StringBuilder buf, List<? extends CharSequence> parsedArgs) throws SQLException {
if (parsedArgs.size() != 2) {
throw new PSQLException(GT.tr("{0} function takes two and only two arguments.", "right"),
PSQLState.SYNTAX_ERROR);
}
buf.append("substring(");
buf.append(parsedArgs.get(0))
.append(" from (length(")
.append(parsedArgs.get(0))
.append(")+1-")
.append(parsedArgs.get(1));
buf.append("))");
}
/**
* rtrim translation
*
* @param buf The buffer to append into
* @param parsedArgs arguments
* @throws SQLException if something wrong happens
*/
public static void sqlrtrim(StringBuilder buf, List<? extends CharSequence> parsedArgs) throws SQLException {
singleArgumentFunctionCall(buf, "trim(trailing from ", "rtrim", parsedArgs);
}
/**
* space translation
*
* @param buf The buffer to append into
* @param parsedArgs arguments
* @throws SQLException if something wrong happens
*/
public static void sqlspace(StringBuilder buf, List<? extends CharSequence> parsedArgs) throws SQLException {
singleArgumentFunctionCall(buf, "repeat(' ',", "space", parsedArgs);
}
/**
* substring to substr translation
*
* @param buf The buffer to append into
* @param parsedArgs arguments
* @throws SQLException if something wrong happens
*/
public static void sqlsubstring(StringBuilder buf, List<? extends CharSequence> parsedArgs) throws SQLException {
int argSize = parsedArgs.size();
if (argSize != 2 && argSize != 3) {
throw new PSQLException(GT.tr("{0} function takes two or three arguments.", "substring"),
PSQLState.SYNTAX_ERROR);
}
appendCall(buf, "substr(", ",", ")", parsedArgs);
}
/**
* ucase to upper translation
*
* @param buf The buffer to append into
* @param parsedArgs arguments
* @throws SQLException if something wrong happens
*/
public static void sqlucase(StringBuilder buf, List<? extends CharSequence> parsedArgs) throws SQLException {
singleArgumentFunctionCall(buf, "upper(", "ucase", parsedArgs);
}
/**
* curdate to current_date translation
*
* @param buf The buffer to append into
* @param parsedArgs arguments
* @throws SQLException if something wrong happens
*/
public static void sqlcurdate(StringBuilder buf, List<? extends CharSequence> parsedArgs) throws SQLException {
zeroArgumentFunctionCall(buf, "current_date", "curdate", parsedArgs);
}
/**
* curtime to current_time translation
*
* @param buf The buffer to append into
* @param parsedArgs arguments
* @throws SQLException if something wrong happens
*/
public static void sqlcurtime(StringBuilder buf, List<? extends CharSequence> parsedArgs) throws SQLException {
zeroArgumentFunctionCall(buf, "current_time", "curtime", parsedArgs);
}
/**
* dayname translation
*
* @param buf The buffer to append into
* @param parsedArgs arguments
* @throws SQLException if something wrong happens
*/
public static void sqldayname(StringBuilder buf, List<? extends CharSequence> parsedArgs) throws SQLException {
if (parsedArgs.size() != 1) {
throw new PSQLException(GT.tr("{0} function takes one and only one argument.", "dayname"),
PSQLState.SYNTAX_ERROR);
}
appendCall(buf, "to_char(", ",", ",'Day')", parsedArgs);
}
/**
* dayofmonth translation
*
* @param buf The buffer to append into
* @param parsedArgs arguments
* @throws SQLException if something wrong happens
*/
public static void sqldayofmonth(StringBuilder buf, List<? extends CharSequence> parsedArgs) throws SQLException {
singleArgumentFunctionCall(buf, "extract(day from ", "dayofmonth", parsedArgs);
}
/**
* dayofweek translation; adds 1 to the PostgreSQL dow value since JDBC expects values from 1 to 7
*
* @param buf The buffer to append into
* @param parsedArgs arguments
* @throws SQLException if something wrong happens
*/
public static void sqldayofweek(StringBuilder buf, List<? extends CharSequence> parsedArgs) throws SQLException {
if (parsedArgs.size() != 1) {
throw new PSQLException(GT.tr("{0} function takes one and only one argument.", "dayofweek"),
PSQLState.SYNTAX_ERROR);
}
appendCall(buf, "extract(dow from ", ",", ")+1", parsedArgs);
}
/**
* dayofyear translation
*
* @param buf The buffer to append into
* @param parsedArgs arguments
* @throws SQLException if something wrong happens
*/
public static void sqldayofyear(StringBuilder buf, List<? extends CharSequence> parsedArgs) throws SQLException {
singleArgumentFunctionCall(buf, "extract(doy from ", "dayofyear", parsedArgs);
}
/**
* hour translation
*
* @param buf The buffer to append into
* @param parsedArgs arguments
* @throws SQLException if something wrong happens
*/
public static void sqlhour(StringBuilder buf, List<? extends CharSequence> parsedArgs) throws SQLException {
singleArgumentFunctionCall(buf, "extract(hour from ", "hour", parsedArgs);
}
/**
* minute translation
*
* @param buf The buffer to append into
* @param parsedArgs arguments
* @throws SQLException if something wrong happens
*/
public static void sqlminute(StringBuilder buf, List<? extends CharSequence> parsedArgs) throws SQLException {
singleArgumentFunctionCall(buf, "extract(minute from ", "minute", parsedArgs);
}
/**
* month translation
*
* @param buf The buffer to append into
* @param parsedArgs arguments
* @throws SQLException if something wrong happens
*/
public static void sqlmonth(StringBuilder buf, List<? extends CharSequence> parsedArgs) throws SQLException {
singleArgumentFunctionCall(buf, "extract(month from ", "month", parsedArgs);
}
/**
* monthname translation
*
* @param buf The buffer to append into
* @param parsedArgs arguments
* @throws SQLException if something wrong happens
*/
public static void sqlmonthname(StringBuilder buf, List<? extends CharSequence> parsedArgs) throws SQLException {
if (parsedArgs.size() != 1) {
throw new PSQLException(GT.tr("{0} function takes one and only one argument.", "monthname"),
PSQLState.SYNTAX_ERROR);
}
appendCall(buf, "to_char(", ",", ",'Month')", parsedArgs);
}
/**
* quarter translation
*
* @param buf The buffer to append into
* @param parsedArgs arguments
* @throws SQLException if something wrong happens
*/
public static void sqlquarter(StringBuilder buf, List<? extends CharSequence> parsedArgs) throws SQLException {
singleArgumentFunctionCall(buf, "extract(quarter from ", "quarter", parsedArgs);
}
/**
* second translation
*
* @param buf The buffer to append into
* @param parsedArgs arguments
* @throws SQLException if something wrong happens
*/
public static void sqlsecond(StringBuilder buf, List<? extends CharSequence> parsedArgs) throws SQLException {
singleArgumentFunctionCall(buf, "extract(second from ", "second", parsedArgs);
}
/**
* week translation
*
* @param buf The buffer to append into
* @param parsedArgs arguments
* @throws SQLException if something wrong happens
*/
public static void sqlweek(StringBuilder buf, List<? extends CharSequence> parsedArgs) throws SQLException {
singleArgumentFunctionCall(buf, "extract(week from ", "week", parsedArgs);
}
/**
* year translation
*
* @param buf The buffer to append into
* @param parsedArgs arguments
* @throws SQLException if something wrong happens
*/
public static void sqlyear(StringBuilder buf, List<? extends CharSequence> parsedArgs) throws SQLException {
singleArgumentFunctionCall(buf, "extract(year from ", "year", parsedArgs);
}
/**
* time stamp add
*
* @param buf The buffer to append into
* @param parsedArgs arguments
* @throws SQLException if something wrong happens
*/
public static void sqltimestampadd(StringBuilder buf, List<? extends CharSequence> parsedArgs) throws SQLException {
if (parsedArgs.size() != 3) {
throw new PSQLException(
GT.tr("{0} function takes three and only three arguments.", "timestampadd"),
PSQLState.SYNTAX_ERROR);
}
buf.append('(');
appendInterval(buf, parsedArgs.get(0).toString(), parsedArgs.get(1).toString());
buf.append('+').append(parsedArgs.get(2)).append(')');
}
private static void appendInterval(StringBuilder buf, String type, String value) throws SQLException {
if (!isTsi(type)) {
throw new PSQLException(GT.tr("Interval {0} not yet implemented", type),
PSQLState.SYNTAX_ERROR);
}
if (appendSingleIntervalCast(buf, SQL_TSI_DAY, type, value, "day")
|| appendSingleIntervalCast(buf, SQL_TSI_SECOND, type, value, "second")
|| appendSingleIntervalCast(buf, SQL_TSI_HOUR, type, value, "hour")
|| appendSingleIntervalCast(buf, SQL_TSI_MINUTE, type, value, "minute")
|| appendSingleIntervalCast(buf, SQL_TSI_MONTH, type, value, "month")
|| appendSingleIntervalCast(buf, SQL_TSI_WEEK, type, value, "week")
|| appendSingleIntervalCast(buf, SQL_TSI_YEAR, type, value, "year")
) {
return;
}
if (areSameTsi(SQL_TSI_QUARTER, type)) {
buf.append("CAST((").append(value).append("::int * 3) || ' month' as interval)");
return;
}
throw new PSQLException(GT.tr("Interval {0} not yet implemented", type),
PSQLState.NOT_IMPLEMENTED);
}
private static boolean appendSingleIntervalCast(StringBuilder buf, String cmp, String type, String value, String pgType) {
if (!areSameTsi(type, cmp)) {
return false;
}
buf.ensureCapacity(buf.length() + 5 + 4 + 14 + value.length() + pgType.length());
buf.append("CAST(").append(value).append("||' ").append(pgType).append("' as interval)");
return true;
}
/**
* Compares two TSI interval names; only the part after the SQL_TSI_ prefix is compared, case-insensitively.
* @param a first interval to compare
* @param b second interval to compare
* @return true when both intervals are equal (case insensitive)
*/
private static boolean areSameTsi(String a, String b) {
return a.length() == b.length() && b.length() > SQL_TSI_ROOT.length()
&& a.regionMatches(true, SQL_TSI_ROOT.length(), b, SQL_TSI_ROOT.length(), b.length() - SQL_TSI_ROOT.length());
}
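// Example: areSameTsi("SQL_TSI_DAY", "sql_tsi_day") is true; only the characters after the
// SQL_TSI_ prefix are compared, and the two names must have the same length.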
/**
* Checks if given input starts with {@link #SQL_TSI_ROOT}
* @param interval input string
* @return true if interval.startsWithIgnoreCase(SQL_TSI_ROOT)
*/
private static boolean isTsi(String interval) {
return interval.regionMatches(true, 0, SQL_TSI_ROOT, 0, SQL_TSI_ROOT.length());
}
/**
* time stamp diff
*
* @param buf The buffer to append into
* @param parsedArgs arguments
* @throws SQLException if something wrong happens
*/
public static void sqltimestampdiff(StringBuilder buf, List<? extends CharSequence> parsedArgs) throws SQLException {
if (parsedArgs.size() != 3) {
throw new PSQLException(
GT.tr("{0} function takes three and only three arguments.", "timestampdiff"),
PSQLState.SYNTAX_ERROR);
}
buf.append("extract( ")
.append(constantToDatePart(parsedArgs.get(0).toString()))
.append(" from (")
.append(parsedArgs.get(2))
.append("-")
.append(parsedArgs.get(1))
.append("))");
}
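// Example: arguments (SQL_TSI_HOUR, t1, t2) produce extract( hour from (t2-t1)).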
private static String constantToDatePart(String type) throws SQLException {
if (!isTsi(type)) {
throw new PSQLException(GT.tr("Interval {0} not yet implemented", type),
PSQLState.SYNTAX_ERROR);
}
if (areSameTsi(SQL_TSI_DAY, type)) {
return "day";
} else if (areSameTsi(SQL_TSI_SECOND, type)) {
return "second";
} else if (areSameTsi(SQL_TSI_HOUR, type)) {
return "hour";
} else if (areSameTsi(SQL_TSI_MINUTE, type)) {
return "minute";
} else {
throw new PSQLException(GT.tr("Interval {0} not yet implemented", type),
PSQLState.SYNTAX_ERROR);
}
// See http://archives.postgresql.org/pgsql-jdbc/2006-03/msg00096.php
/*
* else if (SQL_TSI_MONTH.equalsIgnoreCase(shortType)) return "month"; else if
* (SQL_TSI_QUARTER.equalsIgnoreCase(shortType)) return "quarter"; else if
* (SQL_TSI_WEEK.equalsIgnoreCase(shortType)) return "week"; else if
* (SQL_TSI_YEAR.equalsIgnoreCase(shortType)) return "year";
*/
}
/**
* database translation
*
* @param buf The buffer to append into
* @param parsedArgs arguments
* @throws SQLException if something wrong happens
*/
public static void sqldatabase(StringBuilder buf, List<? extends CharSequence> parsedArgs) throws SQLException {
zeroArgumentFunctionCall(buf, "current_database()", "database", parsedArgs);
}
/**
* ifnull translation
*
* @param buf The buffer to append into
* @param parsedArgs arguments
* @throws SQLException if something wrong happens
*/
public static void sqlifnull(StringBuilder buf, List<? extends CharSequence> parsedArgs) throws SQLException {
twoArgumentsFunctionCall(buf, "coalesce(", "ifnull", parsedArgs);
}
/**
* user translation
*
* @param buf The buffer to append into
* @param parsedArgs arguments
* @throws SQLException if something wrong happens
*/
public static void sqluser(StringBuilder buf, List<? extends CharSequence> parsedArgs) throws SQLException {
zeroArgumentFunctionCall(buf, "user", "user", parsedArgs);
}
private static void zeroArgumentFunctionCall(StringBuilder buf, String call, String functionName,
List<? extends CharSequence> parsedArgs) throws PSQLException {
if (!parsedArgs.isEmpty()) {
throw new PSQLException(GT.tr("{0} function doesn''t take any argument.", functionName),
PSQLState.SYNTAX_ERROR);
}
buf.append(call);
}
private static void singleArgumentFunctionCall(StringBuilder buf, String call, String functionName,
List<? extends CharSequence> parsedArgs) throws PSQLException {
if (parsedArgs.size() != 1) {
throw new PSQLException(GT.tr("{0} function takes one and only one argument.", functionName),
PSQLState.SYNTAX_ERROR);
}
CharSequence arg0 = parsedArgs.get(0);
buf.ensureCapacity(buf.length() + call.length() + arg0.length() + 1);
buf.append(call).append(arg0).append(')');
}
private static void twoArgumentsFunctionCall(StringBuilder buf, String call, String functionName,
List<? extends CharSequence> parsedArgs) throws PSQLException {
if (parsedArgs.size() != 2) {
throw new PSQLException(GT.tr("{0} function takes two and only two arguments.", functionName),
PSQLState.SYNTAX_ERROR);
}
appendCall(buf, call, ",", ")", parsedArgs);
}
/**
* Appends {@code begin arg0 separator arg1 separator end} sequence to the input {@link StringBuilder}
* @param sb destination StringBuilder
* @param begin begin string
* @param separator separator string
* @param end end string
* @param args arguments
*/
public static void appendCall(StringBuilder sb, String begin, String separator,
String end, List<? extends CharSequence> args) {
int size = begin.length();
// Typically the just-in-time compiler would eliminate the Iterator if foreach were used;
// however, the code below uses indexed iteration to keep the code independent from
// various JIT implementations (i.e. to avoid Iterator allocations even for not-so-smart JITs)
// see https://bugs.openjdk.java.net/browse/JDK-8166840
// see http://2016.jpoint.ru/talks/cheremin/ (video and slides)
int numberOfArguments = args.size();
for (int i = 0; i < numberOfArguments; i++) {
size += args.get(i).length();
}
size += separator.length() * (numberOfArguments - 1);
sb.ensureCapacity(sb.length() + size + 1);
sb.append(begin);
for (int i = 0; i < numberOfArguments; i++) {
if (i > 0) {
sb.append(separator);
}
sb.append(args.get(i));
}
sb.append(end);
}
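/*
 * Illustrative sketch of how the helpers above compose (argument values are assumptions):
 * twoArgumentsFunctionCall(buf, "coalesce(", "ifnull", args) delegates to
 * appendCall(buf, "coalesce(", ",", ")", args), so for args ["a", "b"] the buffer receives
 * exactly "coalesce(a,b)". The ensureCapacity pre-sizing (begin, arguments and separators)
 * avoids intermediate StringBuilder growth for the common two-argument case.
 */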
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/jdbc/FieldMetadata.java 0100664 0000000 0000000 00000004557 00000250600 026103 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2016, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.jdbc;
import org.postgresql.util.CanEstimateSize;
// import org.checkerframework.checker.nullness.qual.Nullable;
/**
* This is an internal class to hold field metadata info like table name, column name, etc.
* This class is not meant to be used outside of pgjdbc.
*/
public class FieldMetadata implements CanEstimateSize {
public static class Key {
final int tableOid;
final int positionInTable;
Key(int tableOid, int positionInTable) {
this.positionInTable = positionInTable;
this.tableOid = tableOid;
}
@Override
public boolean equals(/* @Nullable */ Object o) {
if (this == o) {
return true;
}
if (o == null || getClass() != o.getClass()) {
return false;
}
Key key = (Key) o;
if (tableOid != key.tableOid) {
return false;
}
return positionInTable == key.positionInTable;
}
@Override
public int hashCode() {
int result = tableOid;
result = 31 * result + positionInTable;
return result;
}
@Override
public String toString() {
return "Key{"
+ "tableOid=" + tableOid
+ ", positionInTable=" + positionInTable
+ '}';
}
}
final String columnName;
final String tableName;
final String schemaName;
final int nullable;
final boolean autoIncrement;
public FieldMetadata(String columnName) {
this(columnName, "", "", PgResultSetMetaData.columnNullableUnknown, false);
}
FieldMetadata(String columnName, String tableName, String schemaName, int nullable,
boolean autoIncrement) {
this.columnName = columnName;
this.tableName = tableName;
this.schemaName = schemaName;
this.nullable = nullable;
this.autoIncrement = autoIncrement;
}
@Override
public long getSize() {
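// Rough estimate: 2 bytes per UTF-16 char for each of the three strings,
// plus 4 bytes for the int nullable flag and 1 byte for the autoIncrement boolean.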
return columnName.length() * 2
+ tableName.length() * 2
+ schemaName.length() * 2
+ 4L
+ 1L;
}
@Override
public String toString() {
return "FieldMetadata{"
+ "columnName='" + columnName + '\''
+ ", tableName='" + tableName + '\''
+ ", schemaName='" + schemaName + '\''
+ ", nullable=" + nullable
+ ", autoIncrement=" + autoIncrement
+ '}';
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/jdbc/GSSEncMode.java 0100664 0000000 0000000 00000002654 00000250600 025302 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2020, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.jdbc;
import org.postgresql.PGProperty;
import org.postgresql.util.GT;
import org.postgresql.util.PSQLException;
import org.postgresql.util.PSQLState;
import java.util.Properties;
public enum GSSEncMode {
/**
* Do not use encrypted connections.
*/
DISABLE("disable"),
/**
* Start with non-encrypted connection, then try encrypted one.
*/
ALLOW("allow"),
/**
* Start with encrypted connection, fall back to non-encrypted (default).
*/
PREFER("prefer"),
/**
* Ensure connection is encrypted.
*/
REQUIRE("require");
private static final GSSEncMode[] VALUES = values();
public final String value;
GSSEncMode(String value) {
this.value = value;
}
public boolean requireEncryption() {
return this.compareTo(REQUIRE) >= 0;
}
public static GSSEncMode of(Properties info) throws PSQLException {
String gssEncMode = PGProperty.GSS_ENC_MODE.getOrDefault(info);
// If gssEncMode is not set, fall back to allow
if (gssEncMode == null) {
return ALLOW;
}
for (GSSEncMode mode : VALUES) {
if (mode.value.equalsIgnoreCase(gssEncMode)) {
return mode;
}
}
throw new PSQLException(GT.tr("Invalid gssEncMode value: {0}", gssEncMode),
PSQLState.CONNECTION_UNABLE_TO_CONNECT);
}
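/*
 * Minimal usage sketch (the property value is an assumption for illustration):
 *
 *   Properties info = new Properties();
 *   PGProperty.GSS_ENC_MODE.set(info, "require");
 *   GSSEncMode mode = GSSEncMode.of(info);           // REQUIRE
 *   boolean mustEncrypt = mode.requireEncryption();  // true; relies on the declaration order
 */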
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/jdbc/PSQLSavepoint.java 0100664 0000000 0000000 00000004236 00000250600 026061 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2004, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.jdbc;
import org.postgresql.core.Utils;
import org.postgresql.util.GT;
import org.postgresql.util.PSQLException;
import org.postgresql.util.PSQLState;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.sql.SQLException;
import java.sql.Savepoint;
public class PSQLSavepoint implements Savepoint {
private boolean isValid;
private final boolean isNamed;
private int id;
private /* @Nullable */ String name;
public PSQLSavepoint(int id) {
this.isValid = true;
this.isNamed = false;
this.id = id;
}
public PSQLSavepoint(String name) {
this.isValid = true;
this.isNamed = true;
this.name = name;
}
@Override
public int getSavepointId() throws SQLException {
if (!isValid) {
throw new PSQLException(GT.tr("Cannot reference a savepoint after it has been released."),
PSQLState.INVALID_SAVEPOINT_SPECIFICATION);
}
if (isNamed) {
throw new PSQLException(GT.tr("Cannot retrieve the id of a named savepoint."),
PSQLState.WRONG_OBJECT_TYPE);
}
return id;
}
@Override
public String getSavepointName() throws SQLException {
if (!isValid) {
throw new PSQLException(GT.tr("Cannot reference a savepoint after it has been released."),
PSQLState.INVALID_SAVEPOINT_SPECIFICATION);
}
if (!isNamed || name == null) {
throw new PSQLException(GT.tr("Cannot retrieve the name of an unnamed savepoint."),
PSQLState.WRONG_OBJECT_TYPE);
}
return name;
}
public void invalidate() {
isValid = false;
}
public String getPGName() throws SQLException {
if (!isValid) {
throw new PSQLException(GT.tr("Cannot reference a savepoint after it has been released."),
PSQLState.INVALID_SAVEPOINT_SPECIFICATION);
}
if (isNamed && name != null) {
// We need to quote and escape the name in case it
// contains spaces/quotes/etc.
//
return Utils.escapeIdentifier(null, name).toString();
}
return "JDBC_SAVEPOINT_" + id;
}
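/*
 * Sketch of the two naming paths above (identifiers are illustrative):
 *
 *   new PSQLSavepoint(3).getPGName()          // -> "JDBC_SAVEPOINT_3"
 *   new PSQLSavepoint("my point").getPGName() // -> quoted/escaped via Utils.escapeIdentifier
 */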
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/jdbc/PSQLWarningWrapper.java 0100664 0000000 0000000 00000001640 00000250600 027053 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2017, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.jdbc;
import java.sql.SQLWarning;
/**
* Wrapper class for SQLWarnings that provides an optimisation to add
* new warnings to the tail of the SQLWarning singly linked list, avoiding Θ(n) insertion time
* of calling #setNextWarning on the head. By encapsulating this into a single object it allows
* users (i.e. PgStatement) to atomically set and clear the warning chain.
*/
class PSQLWarningWrapper {
private final SQLWarning firstWarning;
private SQLWarning lastWarning;
PSQLWarningWrapper(SQLWarning warning) {
firstWarning = warning;
lastWarning = warning;
}
void addWarning(SQLWarning sqlWarning) {
lastWarning.setNextWarning(sqlWarning);
lastWarning = sqlWarning;
}
SQLWarning getFirstWarning() {
return firstWarning;
}
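/*
 * Sketch of the tail-append behaviour (warning texts are illustrative): each addWarning is O(1),
 * whereas SQLWarning#setNextWarning on the head walks the whole chain before appending.
 *
 *   PSQLWarningWrapper chain = new PSQLWarningWrapper(new SQLWarning("first"));
 *   chain.addWarning(new SQLWarning("second"));
 *   chain.addWarning(new SQLWarning("third"));
 *   SQLWarning head = chain.getFirstWarning();  // "first" -> "second" -> "third"
 */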
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/jdbc/PgArray.java 0100664 0000000 0000000 00000037435 00000250600 024765 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2004, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.jdbc;
import static org.postgresql.util.internal.Nullness.castNonNull;
import org.postgresql.Driver;
import org.postgresql.core.BaseConnection;
import org.postgresql.core.BaseStatement;
import org.postgresql.core.Field;
import org.postgresql.core.Oid;
import org.postgresql.core.Tuple;
import org.postgresql.jdbc.ArrayDecoding.PgArrayList;
import org.postgresql.jdbc2.ArrayAssistantRegistry;
import org.postgresql.util.ByteConverter;
import org.postgresql.util.GT;
import org.postgresql.util.PSQLException;
import org.postgresql.util.PSQLState;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.sql.Array;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
/**
* Array is used to collect one column of query result data.
*
* Read a field of type Array into either a natively-typed Java array object or a ResultSet.
* Accessor methods provide the ability to capture array slices.
*
* Other than the constructor all methods are direct implementations of those specified for
* java.sql.Array. Please refer to the javadoc for java.sql.Array for detailed descriptions of the
* functionality and parameters of the methods of this class.
*
* @see ResultSet#getArray
*/
public class PgArray implements Array {
static {
ArrayAssistantRegistry.register(Oid.UUID, new UUIDArrayAssistant());
ArrayAssistantRegistry.register(Oid.UUID_ARRAY, new UUIDArrayAssistant());
}
/**
* A database connection.
*/
protected /* @Nullable */ BaseConnection connection;
/**
* The OID of this field.
*/
private final int oid;
/**
* Field value as String.
*/
protected /* @Nullable */ String fieldString;
/**
* Value of field as {@link PgArrayList}. Will be initialized only once within
* {@link #buildArrayList(String)}.
*/
protected ArrayDecoding./* @Nullable */ PgArrayList arrayList;
protected byte /* @Nullable */ [] fieldBytes;
private final ResourceLock lock = new ResourceLock();
private PgArray(BaseConnection connection, int oid) throws SQLException {
this.connection = connection;
this.oid = oid;
}
/**
* Create a new Array.
*
* @param connection a database connection
* @param oid the oid of the array datatype
* @param fieldString the array data in string form
* @throws SQLException if something wrong happens
*/
public PgArray(BaseConnection connection, int oid, /* @Nullable */ String fieldString)
throws SQLException {
this(connection, oid);
this.fieldString = fieldString;
}
/**
* Create a new Array.
*
* @param connection a database connection
* @param oid the oid of the array datatype
* @param fieldBytes the array data in byte form
* @throws SQLException if something wrong happens
*/
public PgArray(BaseConnection connection, int oid, byte /* @Nullable */ [] fieldBytes)
throws SQLException {
this(connection, oid);
this.fieldBytes = fieldBytes;
}
private BaseConnection getConnection() {
return castNonNull(connection);
}
@Override
@SuppressWarnings("return")
public Object getArray() throws SQLException {
return getArrayImpl(1, 0, null);
}
@Override
@SuppressWarnings("return")
public Object getArray(long index, int count) throws SQLException {
return getArrayImpl(index, count, null);
}
@SuppressWarnings("return")
public Object getArrayImpl(Map<String, Class<?>> map) throws SQLException {
return getArrayImpl(1, 0, map);
}
@Override
@SuppressWarnings("return")
public Object getArray(Map<String, Class<?>> map) throws SQLException {
return getArrayImpl(map);
}
@Override
@SuppressWarnings("return")
public Object getArray(long index, int count, /* @Nullable */ Map<String, Class<?>> map)
throws SQLException {
return getArrayImpl(index, count, map);
}
public /* @Nullable */ Object getArrayImpl(long index, int count, /* @Nullable */ Map<String, Class<?>> map)
throws SQLException {
// for now maps aren't supported.
if (map != null && !map.isEmpty()) {
throw Driver.notImplemented(this.getClass(), "getArrayImpl(long,int,Map)");
}
// array index is out of range
if (index < 1) {
throw new PSQLException(GT.tr("The array index is out of range: {0}", index),
PSQLState.DATA_ERROR);
}
if (fieldBytes != null) {
return readBinaryArray(fieldBytes, (int) index, count);
}
if (fieldString == null) {
return null;
}
final PgArrayList arrayList = buildArrayList(fieldString);
if (count == 0) {
count = arrayList.size();
}
// array index out of range
if ((index - 1) + count > arrayList.size()) {
throw new PSQLException(
GT.tr("The array index is out of range: {0}, number of elements: {1}.",
index + count, (long) arrayList.size()),
PSQLState.DATA_ERROR);
}
return buildArray(arrayList, (int) index, count);
}
private Object readBinaryArray(byte[] fieldBytes, int index, int count) throws SQLException {
return ArrayDecoding.readBinaryArray(index, count, fieldBytes, getConnection());
}
private ResultSet readBinaryResultSet(byte[] fieldBytes, int index, int count)
throws SQLException {
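// The binary array payload parsed below begins with a fixed header:
// int4 dimension count, int4 has-nulls flag, int4 element oid,
// then an (int4 size, int4 lower bound) pair per dimension,
// followed by one (int4 length, payload bytes) entry per element, with length == -1 meaning NULL.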
int dimensions = ByteConverter.int4(fieldBytes, 0);
// int flags = ByteConverter.int4(fieldBytes, 4); // bit 0: 0=no-nulls, 1=has-nulls
int elementOid = ByteConverter.int4(fieldBytes, 8);
int pos = 12;
int[] dims = new int[dimensions];
for (int d = 0; d < dimensions; d++) {
dims[d] = ByteConverter.int4(fieldBytes, pos);
pos += 4;
/* int lbound = ByteConverter.int4(fieldBytes, pos); */
pos += 4;
}
if (count > 0 && dimensions > 0) {
dims[0] = Math.min(count, dims[0]);
}
List<Tuple> rows = new ArrayList<>();
Field[] fields = new Field[2];
storeValues(fieldBytes, rows, fields, elementOid, dims, pos, 0, index);
BaseStatement stat = (BaseStatement) getConnection()
.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
return stat.createDriverResultSet(fields, rows);
}
private int storeValues(byte[] fieldBytes, List<Tuple> rows, Field[] fields, int elementOid,
final int[] dims,
int pos, final int thisDimension, int index) throws SQLException {
// handle an empty array
if (dims.length == 0) {
fields[0] = new Field("INDEX", Oid.INT4);
fields[0].setFormat(Field.BINARY_FORMAT);
fields[1] = new Field("VALUE", elementOid);
fields[1].setFormat(Field.BINARY_FORMAT);
for (int i = 1; i < index; i++) {
int len = ByteConverter.int4(fieldBytes, pos);
pos += 4;
if (len != -1) {
pos += len;
}
}
} else if (thisDimension == dims.length - 1) {
fields[0] = new Field("INDEX", Oid.INT4);
fields[0].setFormat(Field.BINARY_FORMAT);
fields[1] = new Field("VALUE", elementOid);
fields[1].setFormat(Field.BINARY_FORMAT);
for (int i = 1; i < index; i++) {
int len = ByteConverter.int4(fieldBytes, pos);
pos += 4;
if (len != -1) {
pos += len;
}
}
for (int i = 0; i < dims[thisDimension]; i++) {
byte[][] rowData = new byte[2][];
rowData[0] = new byte[4];
ByteConverter.int4(rowData[0], 0, i + index);
rows.add(new Tuple(rowData));
int len = ByteConverter.int4(fieldBytes, pos);
pos += 4;
if (len == -1) {
continue;
}
rowData[1] = new byte[len];
System.arraycopy(fieldBytes, pos, rowData[1], 0, rowData[1].length);
pos += len;
}
} else {
fields[0] = new Field("INDEX", Oid.INT4);
fields[0].setFormat(Field.BINARY_FORMAT);
fields[1] = new Field("VALUE", oid);
fields[1].setFormat(Field.BINARY_FORMAT);
int nextDimension = thisDimension + 1;
int dimensionsLeft = dims.length - nextDimension;
for (int i = 1; i < index; i++) {
pos = calcRemainingDataLength(fieldBytes, dims, pos, elementOid, nextDimension);
}
for (int i = 0; i < dims[thisDimension]; i++) {
byte[][] rowData = new byte[2][];
rowData[0] = new byte[4];
ByteConverter.int4(rowData[0], 0, i + index);
rows.add(new Tuple(rowData));
int dataEndPos = calcRemainingDataLength(fieldBytes, dims, pos, elementOid, nextDimension);
int dataLength = dataEndPos - pos;
rowData[1] = new byte[12 + 8 * dimensionsLeft + dataLength];
ByteConverter.int4(rowData[1], 0, dimensionsLeft);
System.arraycopy(fieldBytes, 4, rowData[1], 4, 8);
System.arraycopy(fieldBytes, 12 + nextDimension * 8, rowData[1], 12, dimensionsLeft * 8);
System.arraycopy(fieldBytes, pos, rowData[1], 12 + dimensionsLeft * 8, dataLength);
pos = dataEndPos;
}
}
return pos;
}
private static int calcRemainingDataLength(byte[] fieldBytes,
int[] dims, int pos, int elementOid, int thisDimension) {
if (thisDimension == dims.length - 1) {
for (int i = 0; i < dims[thisDimension]; i++) {
int len = ByteConverter.int4(fieldBytes, pos);
pos += 4;
if (len == -1) {
continue;
}
pos += len;
}
} else {
pos = calcRemainingDataLength(fieldBytes, dims, pos, elementOid, thisDimension + 1);
}
return pos;
}
/**
* Build {@link ArrayList} from the field's string input. As a result of this method
* {@link #arrayList} is built. The method can be called many times in order to make sure that the
* array list is ready to use; however, {@link #arrayList} will be set only once during the first call.
*/
private PgArrayList buildArrayList(String fieldString) throws SQLException {
try (ResourceLock ignore = lock.obtain()) {
if (arrayList == null) {
arrayList = ArrayDecoding.buildArrayList(fieldString, getConnection().getTypeInfo().getArrayDelimiter(oid));
}
return arrayList;
}
}
/**
* Convert {@link ArrayList} to array.
*
* @param input list to be converted into array
*/
private Object buildArray(ArrayDecoding.PgArrayList input, int index, int count) throws SQLException {
final BaseConnection connection = getConnection();
return ArrayDecoding.readStringArray(index, count, connection.getTypeInfo().getPGArrayElement(oid), input, connection);
}
@Override
public int getBaseType() throws SQLException {
return getConnection().getTypeInfo().getSQLType(getBaseTypeName());
}
@Override
public String getBaseTypeName() throws SQLException {
int elementOID = getConnection().getTypeInfo().getPGArrayElement(oid);
return castNonNull(getConnection().getTypeInfo().getPGType(elementOID));
}
@Override
public ResultSet getResultSet() throws SQLException {
return getResultSetImpl(1, 0, null);
}
@Override
public ResultSet getResultSet(long index, int count) throws SQLException {
return getResultSetImpl(index, count, null);
}
@Override
public ResultSet getResultSet(/* @Nullable */ Map<String, Class<?>> map) throws SQLException {
return getResultSetImpl(map);
}
@Override
public ResultSet getResultSet(long index, int count, /* @Nullable */ Map<String, Class<?>> map)
throws SQLException {
return getResultSetImpl(index, count, map);
}
public ResultSet getResultSetImpl(/* @Nullable */ Map<String, Class<?>> map) throws SQLException {
return getResultSetImpl(1, 0, map);
}
public ResultSet getResultSetImpl(long index, int count, /* @Nullable */ Map<String, Class<?>> map)
throws SQLException {
// for now maps aren't supported.
if (map != null && !map.isEmpty()) {
throw Driver.notImplemented(this.getClass(), "getResultSetImpl(long,int,Map)");
}
// array index is out of range
if (index < 1) {
throw new PSQLException(GT.tr("The array index is out of range: {0}", index),
PSQLState.DATA_ERROR);
}
if (fieldBytes != null) {
return readBinaryResultSet(fieldBytes, (int) index, count);
}
final PgArrayList arrayList = buildArrayList(castNonNull(fieldString));
if (count == 0) {
count = arrayList.size();
}
// array index out of range
if ((--index) + count > arrayList.size()) {
throw new PSQLException(
GT.tr("The array index is out of range: {0}, number of elements: {1}.",
index + count, (long) arrayList.size()),
PSQLState.DATA_ERROR);
}
List<Tuple> rows = new ArrayList<>();
Field[] fields = new Field[2];
// one dimensional array
if (arrayList.dimensionsCount <= 1) {
// array element type
final int baseOid = getConnection().getTypeInfo().getPGArrayElement(oid);
fields[0] = new Field("INDEX", Oid.INT4);
fields[1] = new Field("VALUE", baseOid);
for (int i = 0; i < count; i++) {
int offset = (int) index + i;
byte[] /* @Nullable */ [] t = new byte[2][0];
String v = (String) arrayList.get(offset);
t[0] = getConnection().encodeString(Integer.toString(offset + 1));
t[1] = v == null ? null : getConnection().encodeString(v);
rows.add(new Tuple(t));
}
} else {
// when multi-dimensional
fields[0] = new Field("INDEX", Oid.INT4);
fields[1] = new Field("VALUE", oid);
for (int i = 0; i < count; i++) {
int offset = (int) index + i;
byte[] /* @Nullable */ [] t = new byte[2][0];
Object v = arrayList.get(offset);
t[0] = getConnection().encodeString(Integer.toString(offset + 1));
t[1] = v == null ? null : getConnection().encodeString(toString((ArrayDecoding.PgArrayList) v));
rows.add(new Tuple(t));
}
}
BaseStatement stat = (BaseStatement) getConnection()
.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
return stat.createDriverResultSet(fields, rows);
}
@Override
@SuppressWarnings("nullness")
public /* @Nullable */ String toString() {
if (fieldString == null && fieldBytes != null) {
try {
Object array = readBinaryArray(fieldBytes, 1, 0);
final ArrayEncoding.ArrayEncoder arraySupport = ArrayEncoding.getArrayEncoder(array);
assert arraySupport != null;
fieldString = arraySupport.toArrayString(connection.getTypeInfo().getArrayDelimiter(oid), array);
} catch (SQLException e) {
fieldString = "NULL"; // punt
}
}
return fieldString;
}
/**
* Convert array list to PG String representation (e.g. {0,1,2}).
*/
private String toString(ArrayDecoding.PgArrayList list) throws SQLException {
if (list == null) {
return "NULL";
}
StringBuilder b = new StringBuilder().append('{');
char delim = getConnection().getTypeInfo().getArrayDelimiter(oid);
for (int i = 0; i < list.size(); i++) {
Object v = list.get(i);
if (i > 0) {
b.append(delim);
}
if (v == null) {
b.append("NULL");
} else if (v instanceof ArrayDecoding.PgArrayList) {
b.append(toString((ArrayDecoding.PgArrayList) v));
} else {
escapeArrayElement(b, (String) v);
}
}
b.append('}');
return b.toString();
}
public static void escapeArrayElement(StringBuilder b, String s) {
b.append('"');
for (int j = 0; j < s.length(); j++) {
char c = s.charAt(j);
if (c == '"' || c == '\\') {
b.append('\\');
}
b.append(c);
}
b.append('"');
}
public boolean isBinary() {
return fieldBytes != null;
}
public byte /* @Nullable */ [] toBytes() {
return fieldBytes;
}
@Override
public void free() throws SQLException {
connection = null;
fieldString = null;
fieldBytes = null;
arrayList = null;
}
}
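/*
 * Minimal usage sketch (query text is an assumption):
 *
 *   try (Statement st = conn.createStatement();
 *        ResultSet rs = st.executeQuery("SELECT ARRAY[1,2,3]")) {
 *     rs.next();
 *     Array a = rs.getArray(1);                // backed by PgArray
 *     Object elements = a.getArray();          // e.g. an Integer[] for an int4[] column
 *     ResultSet slice = a.getResultSet(2, 2);  // INDEX/VALUE rows for elements 2..3
 *     a.free();                                // drops the connection/field references held above
 *   }
 */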
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/jdbc/PgBlob.java 0100664 0000000 0000000 00000002556 00000250600 024561 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2004, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.jdbc;
import org.postgresql.core.BaseConnection;
import org.postgresql.largeobject.LargeObject;
import java.io.InputStream;
import java.sql.Blob;
import java.sql.SQLException;
public class PgBlob extends AbstractBlobClob implements Blob {
public PgBlob(BaseConnection conn, long oid) throws SQLException {
super(conn, oid);
}
@Override
public InputStream getBinaryStream(long pos, long length)
throws SQLException {
try (ResourceLock ignore = lock.obtain()) {
checkFreed();
LargeObject subLO = getLo(false).copy();
addSubLO(subLO);
if (pos > Integer.MAX_VALUE) {
subLO.seek64(pos - 1, LargeObject.SEEK_SET);
} else {
subLO.seek((int) pos - 1, LargeObject.SEEK_SET);
}
return subLO.getInputStream(length);
}
}
@Override
public int setBytes(long pos, byte[] bytes) throws SQLException {
return setBytes(pos, bytes, 0, bytes.length);
}
@Override
public int setBytes(long pos, byte[] bytes, int offset, int len)
throws SQLException {
try (ResourceLock ignore = lock.obtain()) {
assertPosition(pos);
getLo(true).seek((int) (pos - 1));
getLo(true).write(bytes, offset, len);
return len;
}
}
}
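/*
 * Minimal usage sketch (table/column names are assumptions; large objects must be
 * accessed inside a transaction):
 *
 *   conn.setAutoCommit(false);
 *   try (Statement st = conn.createStatement();
 *        ResultSet rs = st.executeQuery("SELECT img FROM images WHERE id = 1")) {
 *     rs.next();
 *     Blob blob = rs.getBlob(1);               // backed by PgBlob / LargeObject
 *     byte[] head = blob.getBytes(1, 1024);    // positions are 1-based
 *     blob.setBytes(1, new byte[] {1, 2, 3});  // overwrites starting at position 1
 *     blob.free();
 *   }
 */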
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/jdbc/PgCallableStatement.java 0100664 0000000 0000000 00000106521 00000250600 027264 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2004, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.jdbc;
import static org.postgresql.util.internal.Nullness.castNonNull;
import org.postgresql.Driver;
import org.postgresql.core.ParameterList;
import org.postgresql.core.Query;
import org.postgresql.util.GT;
import org.postgresql.util.PSQLException;
import org.postgresql.util.PSQLState;
// import org.checkerframework.checker.index.qual.Positive;
// import org.checkerframework.checker.nullness.qual.Nullable;
import java.io.InputStream;
import java.io.Reader;
import java.math.BigDecimal;
import java.net.URL;
import java.sql.Array;
import java.sql.Blob;
import java.sql.CallableStatement;
import java.sql.Clob;
import java.sql.Date;
import java.sql.NClob;
import java.sql.Ref;
import java.sql.ResultSet;
import java.sql.RowId;
import java.sql.SQLException;
import java.sql.SQLType;
import java.sql.SQLXML;
import java.sql.Time;
import java.sql.Timestamp;
import java.sql.Types;
import java.util.Calendar;
import java.util.Map;
class PgCallableStatement extends PgPreparedStatement implements CallableStatement {
// Used by the callablestatement style methods
private final boolean isFunction;
// functionReturnType contains the user supplied value to check
// testReturn contains a modified version to make it easier to
// check the getXXX methods..
private int /* @Nullable */ [] functionReturnType;
private int /* @Nullable */ [] testReturn;
// returnTypeSet is true when a proper call to registerOutParameter has been made
private boolean returnTypeSet;
protected /* @Nullable */ Object /* @Nullable */ [] callResult;
private int lastIndex;
PgCallableStatement(PgConnection connection, String sql, int rsType, int rsConcurrency,
int rsHoldability) throws SQLException {
super(connection, connection.borrowCallableQuery(sql), rsType, rsConcurrency, rsHoldability);
this.isFunction = preparedQuery.isFunction;
if (this.isFunction) {
int inParamCount = this.preparedParameters.getInParameterCount() + 1;
this.testReturn = new int[inParamCount];
this.functionReturnType = new int[inParamCount];
}
}
@Override
public int executeUpdate() throws SQLException {
if (isFunction) {
executeWithFlags(0);
return 0;
}
return super.executeUpdate();
}
@Override
public /* @Nullable */ Object getObject(/* @Positive */ int i, /* @Nullable */ Map<String, Class<?>> map)
throws SQLException {
return getObjectImpl(i, map);
}
@Override
public /* @Nullable */ Object getObject(String s, /* @Nullable */ Map<String, Class<?>> map) throws SQLException {
return getObjectImpl(s, map);
}
@Override
public boolean executeWithFlags(int flags) throws SQLException {
try (ResourceLock ignore = lock.obtain()) {
boolean hasResultSet = super.executeWithFlags(flags);
int[] functionReturnType = this.functionReturnType;
if (!isFunction || !returnTypeSet || functionReturnType == null) {
return hasResultSet;
}
// If we are executing and there are out parameters
// callable statement function set the return data
if (!hasResultSet) {
throw new PSQLException(GT.tr("A CallableStatement was executed with nothing returned."),
PSQLState.NO_DATA);
}
ResultSet rs = castNonNull(getResultSet());
if (!rs.next()) {
throw new PSQLException(GT.tr("A CallableStatement was executed with nothing returned."),
PSQLState.NO_DATA);
}
// figure out how many columns
int cols = rs.getMetaData().getColumnCount();
int outParameterCount = preparedParameters.getOutParameterCount();
if (cols != outParameterCount) {
throw new PSQLException(
GT.tr("A CallableStatement was executed with an invalid number of parameters"),
PSQLState.SYNTAX_ERROR);
}
// reset last result fetched (for wasNull)
lastIndex = 0;
// allocate enough space for all possible parameters without regard to in/out
/* @Nullable */ Object[] callResult = new Object[preparedParameters.getParameterCount() + 1];
this.callResult = callResult;
// move them into the result set
for (int i = 0, j = 0; i < cols; i++, j++) {
// find the next out parameter, the assumption is that the functionReturnType
// array will be initialized with 0 and only out parameters will have values
// other than 0. 0 is the value for java.sql.Types.NULL, which should not
// conflict
while (j < functionReturnType.length && functionReturnType[j] == 0) {
j++;
}
callResult[j] = rs.getObject(i + 1);
int columnType = rs.getMetaData().getColumnType(i + 1);
if (columnType != functionReturnType[j]) {
// this is here for the sole purpose of passing the cts
if (columnType == Types.DOUBLE && functionReturnType[j] == Types.REAL) {
// return it as a float
Object result = callResult[j];
if (result != null) {
callResult[j] = ((Double) result).floatValue();
}
} else if (columnType == Types.REF_CURSOR && functionReturnType[j] == Types.OTHER) {
// For backwards compatibility reasons we support that ref cursors can be
// registered with both Types.OTHER and Types.REF_CURSOR so we allow
// this specific mismatch
} else {
throw new PSQLException(GT.tr(
"A CallableStatement function was executed and the out parameter {0} was of type {1} however type {2} was registered.",
i + 1, "java.sql.Types=" + columnType, "java.sql.Types=" + functionReturnType[j]),
PSQLState.DATA_TYPE_MISMATCH);
}
}
}
rs.close();
result = null;
}
return false;
}
/**
* {@inheritDoc}
*
* Before executing a stored procedure call you must explicitly call registerOutParameter to
* register the java.sql.Type of each out parameter.
*
* Note: When reading the value of an out parameter, you must use the getXXX method whose Java
* type XXX corresponds to the parameter's registered SQL type.
*
* ONLY 1 RETURN PARAMETER if {?= call ..} syntax is used
*
* @param parameterIndex the first parameter is 1, the second is 2,...
* @param sqlType SQL type code defined by java.sql.Types; for parameters of type Numeric or
* Decimal use the version of registerOutParameter that accepts a scale value
* @throws SQLException if a database-access error occurs.
*/
@Override
public void registerOutParameter(/* @Positive */ int parameterIndex, int sqlType)
throws SQLException {
checkClosed();
switch (sqlType) {
case Types.TINYINT:
// we don't have a TINYINT type use SMALLINT
sqlType = Types.SMALLINT;
break;
case Types.LONGVARCHAR:
sqlType = Types.VARCHAR;
break;
case Types.DECIMAL:
sqlType = Types.NUMERIC;
break;
case Types.FLOAT:
// float is the same as double
sqlType = Types.DOUBLE;
break;
case Types.VARBINARY:
case Types.LONGVARBINARY:
sqlType = Types.BINARY;
break;
case Types.BOOLEAN:
sqlType = Types.BIT;
break;
default:
break;
}
int[] functionReturnType = this.functionReturnType;
int[] testReturn = this.testReturn;
if (!isFunction || functionReturnType == null || testReturn == null) {
throw new PSQLException(
GT.tr(
"This statement does not declare an OUT parameter. Use '{' ?= call ... '}' to declare one."),
PSQLState.STATEMENT_NOT_ALLOWED_IN_FUNCTION_CALL);
}
preparedParameters.registerOutParameter(parameterIndex, sqlType);
// functionReturnType contains the user supplied value to check
// testReturn contains a modified version to make it easier to
// check the getXXX methods..
functionReturnType[parameterIndex - 1] = sqlType;
testReturn[parameterIndex - 1] = sqlType;
if (functionReturnType[parameterIndex - 1] == Types.CHAR
|| functionReturnType[parameterIndex - 1] == Types.LONGVARCHAR) {
testReturn[parameterIndex - 1] = Types.VARCHAR;
} else if (functionReturnType[parameterIndex - 1] == Types.FLOAT) {
testReturn[parameterIndex - 1] = Types.REAL; // changes to streamline later error checking
}
returnTypeSet = true;
}
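/*
 * Minimal usage sketch of the {?= call ...} flow described above (the called function is an
 * assumption):
 *
 *   try (CallableStatement cs = conn.prepareCall("{?= call current_setting(?)}")) {
 *     cs.registerOutParameter(1, Types.VARCHAR);  // must happen before execute()
 *     cs.setString(2, "server_version");
 *     cs.execute();
 *     String version = cs.getString(1);           // getXXX must match the registered type
 *   }
 */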
@Override
public boolean wasNull() throws SQLException {
if (lastIndex == 0 || callResult == null) {
throw new PSQLException(GT.tr("wasNull cannot be called before fetching a result."),
PSQLState.OBJECT_NOT_IN_STATE);
}
// check to see if the last access threw an exception
return callResult[lastIndex - 1] == null;
}
@Override
public /* @Nullable */ String getString(/* @Positive */ int parameterIndex) throws SQLException {
Object result = checkIndex(parameterIndex, Types.VARCHAR, "String");
return (String) result;
}
@Override
public boolean getBoolean(/* @Positive */ int parameterIndex) throws SQLException {
Object result = checkIndex(parameterIndex, Types.BIT, "Boolean");
if (result == null) {
return false;
}
return BooleanTypeUtil.castToBoolean(result);
}
@Override
public byte getByte(/* @Positive */ int parameterIndex) throws SQLException {
// fake tiny int with smallint
Object result = checkIndex(parameterIndex, Types.SMALLINT, "Byte");
if (result == null) {
return 0;
}
return ((Integer) result).byteValue();
}
@Override
public short getShort(/* @Positive */ int parameterIndex) throws SQLException {
Object result = checkIndex(parameterIndex, Types.SMALLINT, "Short");
if (result == null) {
return 0;
}
return ((Integer) result).shortValue();
}
@Override
public int getInt(/* @Positive */ int parameterIndex) throws SQLException {
Object result = checkIndex(parameterIndex, Types.INTEGER, "Int");
if (result == null) {
return 0;
}
return (Integer) result;
}
@Override
public long getLong(/* @Positive */ int parameterIndex) throws SQLException {
Object result = checkIndex(parameterIndex, Types.BIGINT, "Long");
if (result == null) {
return 0;
}
return (Long) result;
}
@Override
public float getFloat(/* @Positive */ int parameterIndex) throws SQLException {
Object result = checkIndex(parameterIndex, Types.REAL, "Float");
if (result == null) {
return 0;
}
return (Float) result;
}
@Override
public double getDouble(/* @Positive */ int parameterIndex) throws SQLException {
Object result = checkIndex(parameterIndex, Types.DOUBLE, "Double");
if (result == null) {
return 0;
}
return (Double) result;
}
@Override
@SuppressWarnings("deprecation")
public /* @Nullable */ BigDecimal getBigDecimal(/* @Positive */ int parameterIndex, int scale) throws SQLException {
Object result = checkIndex(parameterIndex, Types.NUMERIC, "BigDecimal");
return (/* @Nullable */ BigDecimal) result;
}
@Override
public byte /* @Nullable */ [] getBytes(/* @Positive */ int parameterIndex) throws SQLException {
Object result = checkIndex(parameterIndex, Types.VARBINARY, Types.BINARY, "Bytes");
return (byte /* @Nullable */ []) result;
}
@Override
public /* @Nullable */ Date getDate(/* @Positive */ int parameterIndex) throws SQLException {
Object result = checkIndex(parameterIndex, Types.DATE, "Date");
return (/* @Nullable */ Date) result;
}
@Override
public /* @Nullable */ Time getTime(/* @Positive */ int parameterIndex) throws SQLException {
Object result = checkIndex(parameterIndex, Types.TIME, "Time");
return (/* @Nullable */ Time) result;
}
@Override
public /* @Nullable */ Timestamp getTimestamp(/* @Positive */ int parameterIndex) throws SQLException {
Object result = checkIndex(parameterIndex, Types.TIMESTAMP, "Timestamp");
return (/* @Nullable */ Timestamp) result;
}
@Override
public /* @Nullable */ Object getObject(/* @Positive */ int parameterIndex) throws SQLException {
return getCallResult(parameterIndex);
}
/**
* Helper function for the getXXX calls to check isFunction and index == 1. Compares BOTH type fields
* against the return type.
*
* @param parameterIndex parameter index (1-based)
* @param type1 type 1
* @param type2 type 2
* @param getName getter name
* @throws SQLException if something goes wrong
*/
protected /* @Nullable */ Object checkIndex(/* @Positive */ int parameterIndex, int type1, int type2, String getName)
throws SQLException {
Object result = getCallResult(parameterIndex);
int testReturn = this.testReturn != null ? this.testReturn[parameterIndex - 1] : -1;
if (type1 != testReturn && type2 != testReturn) {
throw new PSQLException(
GT.tr("Parameter of type {0} was registered, but call to get{1} (sqltype={2}) was made.",
"java.sql.Types=" + testReturn, getName,
"java.sql.Types=" + type1),
PSQLState.MOST_SPECIFIC_TYPE_DOES_NOT_MATCH);
}
return result;
}
/**
* Helper function for the getXXX calls to check isFunction and index == 1.
*
* @param parameterIndex parameter index (1-based)
* @param type type
* @param getName getter name
* @throws SQLException if given index is not valid
*/
protected /* @Nullable */ Object checkIndex(/* @Positive */ int parameterIndex,
int type, String getName) throws SQLException {
Object result = getCallResult(parameterIndex);
int testReturn = this.testReturn != null ? this.testReturn[parameterIndex - 1] : -1;
if (type != testReturn) {
throw new PSQLException(
GT.tr("Parameter of type {0} was registered, but call to get{1} (sqltype={2}) was made.",
"java.sql.Types=" + testReturn, getName,
"java.sql.Types=" + type),
PSQLState.MOST_SPECIFIC_TYPE_DOES_NOT_MATCH);
}
return result;
}
private /* @Nullable */ Object getCallResult(/* @Positive */ int parameterIndex) throws SQLException {
checkClosed();
if (!isFunction) {
throw new PSQLException(
GT.tr(
"A CallableStatement was declared, but no call to registerOutParameter(1, ) was made."),
PSQLState.STATEMENT_NOT_ALLOWED_IN_FUNCTION_CALL);
}
if (!returnTypeSet) {
throw new PSQLException(GT.tr("No function outputs were registered."),
PSQLState.OBJECT_NOT_IN_STATE);
}
/* @Nullable */ Object /* @Nullable */ [] callResult = this.callResult;
if (callResult == null) {
throw new PSQLException(
GT.tr("Results cannot be retrieved from a CallableStatement before it is executed."),
PSQLState.NO_DATA);
}
lastIndex = parameterIndex;
return callResult[parameterIndex - 1];
}
@Override
protected BatchResultHandler createBatchHandler(Query[] queries,
/* @Nullable */ ParameterList[] parameterLists) {
return new CallableBatchResultHandler(this, queries, parameterLists);
}
@Override
public /* @Nullable */ Array getArray(int i) throws SQLException {
Object result = checkIndex(i, Types.ARRAY, "Array");
return (Array) result;
}
@Override
public /* @Nullable */ BigDecimal getBigDecimal(/* @Positive */ int parameterIndex) throws SQLException {
Object result = checkIndex(parameterIndex, Types.NUMERIC, "BigDecimal");
return (BigDecimal) result;
}
@Override
public /* @Nullable */ Blob getBlob(int i) throws SQLException {
throw Driver.notImplemented(this.getClass(), "getBlob(int)");
}
@Override
public /* @Nullable */ Clob getClob(int i) throws SQLException {
throw Driver.notImplemented(this.getClass(), "getClob(int)");
}
public /* @Nullable */ Object getObjectImpl(int i, /* @Nullable */ Map<String, Class<?>> map) throws SQLException {
if (map == null || map.isEmpty()) {
return getObject(i);
}
throw Driver.notImplemented(this.getClass(), "getObjectImpl(int,Map)");
}
@Override
public /* @Nullable */ Ref getRef(int i) throws SQLException {
throw Driver.notImplemented(this.getClass(), "getRef(int)");
}
@Override
public /* @Nullable */ Date getDate(int i, /* @Nullable */ Calendar cal) throws SQLException {
Object result = checkIndex(i, Types.DATE, "Date");
if (result == null) {
return null;
}
return getTimestampUtils().toDate(cal, result.toString());
}
@Override
public /* @Nullable */ Time getTime(int i, /* @Nullable */ Calendar cal) throws SQLException {
Object result = checkIndex(i, Types.TIME, "Time");
if (result == null) {
return null;
}
return getTimestampUtils().toTime(cal, result.toString());
}
@Override
public /* @Nullable */ Timestamp getTimestamp(int i, /* @Nullable */ Calendar cal) throws SQLException {
Object result = checkIndex(i, Types.TIMESTAMP, "Timestamp");
if (result == null) {
return null;
}
return getTimestampUtils().toTimestamp(cal, result.toString());
}
@Override
public void registerOutParameter(/* @Positive */ int parameterIndex, int sqlType, String typeName)
throws SQLException {
throw Driver.notImplemented(this.getClass(), "registerOutParameter(int,int,String)");
}
@Override
public void setObject(String parameterName, /* @Nullable */ Object x, SQLType targetSqlType,
int scaleOrLength) throws SQLException {
throw Driver.notImplemented(this.getClass(), "setObject");
}
@Override
public void setObject(String parameterName, /* @Nullable */ Object x, SQLType targetSqlType)
throws SQLException {
throw Driver.notImplemented(this.getClass(), "setObject");
}
@Override
public void registerOutParameter(/* @Positive */ int parameterIndex, SQLType sqlType)
throws SQLException {
throw Driver.notImplemented(this.getClass(), "registerOutParameter");
}
@Override
public void registerOutParameter(/* @Positive */ int parameterIndex, SQLType sqlType, int scale)
throws SQLException {
throw Driver.notImplemented(this.getClass(), "registerOutParameter");
}
@Override
public void registerOutParameter(/* @Positive */ int parameterIndex, SQLType sqlType, String typeName)
throws SQLException {
throw Driver.notImplemented(this.getClass(), "registerOutParameter");
}
@Override
public void registerOutParameter(String parameterName, SQLType sqlType)
throws SQLException {
throw Driver.notImplemented(this.getClass(), "registerOutParameter");
}
@Override
public void registerOutParameter(String parameterName, SQLType sqlType, int scale)
throws SQLException {
throw Driver.notImplemented(this.getClass(), "registerOutParameter");
}
@Override
public void registerOutParameter(String parameterName, SQLType sqlType, String typeName)
throws SQLException {
throw Driver.notImplemented(this.getClass(), "registerOutParameter");
}
@Override
public /* @Nullable */ RowId getRowId(/* @Positive */ int parameterIndex) throws SQLException {
throw Driver.notImplemented(this.getClass(), "getRowId(int)");
}
@Override
public /* @Nullable */ RowId getRowId(String parameterName) throws SQLException {
throw Driver.notImplemented(this.getClass(), "getRowId(String)");
}
@Override
public void setRowId(String parameterName, /* @Nullable */ RowId x) throws SQLException {
throw Driver.notImplemented(this.getClass(), "setRowId(String, RowId)");
}
@Override
public void setNString(String parameterName, /* @Nullable */ String value) throws SQLException {
throw Driver.notImplemented(this.getClass(), "setNString(String, String)");
}
@Override
public void setNCharacterStream(String parameterName, /* @Nullable */ Reader value, long length)
throws SQLException {
throw Driver.notImplemented(this.getClass(), "setNCharacterStream(String, Reader, long)");
}
@Override
public void setNCharacterStream(String parameterName, /* @Nullable */ Reader value) throws SQLException {
throw Driver.notImplemented(this.getClass(), "setNCharacterStream(String, Reader)");
}
@Override
public void setCharacterStream(String parameterName, /* @Nullable */ Reader value, long length)
throws SQLException {
throw Driver.notImplemented(this.getClass(), "setCharacterStream(String, Reader, long)");
}
@Override
public void setCharacterStream(String parameterName, /* @Nullable */ Reader value) throws SQLException {
throw Driver.notImplemented(this.getClass(), "setCharacterStream(String, Reader)");
}
@Override
public void setBinaryStream(String parameterName, /* @Nullable */ InputStream value, long length)
throws SQLException {
throw Driver.notImplemented(this.getClass(), "setBinaryStream(String, InputStream, long)");
}
@Override
public void setBinaryStream(String parameterName, /* @Nullable */ InputStream value) throws SQLException {
throw Driver.notImplemented(this.getClass(), "setBinaryStream(String, InputStream)");
}
@Override
public void setAsciiStream(String parameterName, /* @Nullable */ InputStream value, long length)
throws SQLException {
throw Driver.notImplemented(this.getClass(), "setAsciiStream(String, InputStream, long)");
}
@Override
public void setAsciiStream(String parameterName, /* @Nullable */ InputStream value) throws SQLException {
throw Driver.notImplemented(this.getClass(), "setAsciiStream(String, InputStream)");
}
@Override
public void setNClob(String parameterName, /* @Nullable */ NClob value) throws SQLException {
throw Driver.notImplemented(this.getClass(), "setNClob(String, NClob)");
}
@Override
public void setClob(String parameterName, /* @Nullable */ Reader reader, long length) throws SQLException {
throw Driver.notImplemented(this.getClass(), "setClob(String, Reader, long)");
}
@Override
public void setClob(String parameterName, /* @Nullable */ Reader reader) throws SQLException {
throw Driver.notImplemented(this.getClass(), "setClob(String, Reader)");
}
@Override
public void setBlob(String parameterName, /* @Nullable */ InputStream inputStream, long length)
throws SQLException {
throw Driver.notImplemented(this.getClass(), "setBlob(String, InputStream, long)");
}
@Override
public void setBlob(String parameterName, /* @Nullable */ InputStream inputStream) throws SQLException {
throw Driver.notImplemented(this.getClass(), "setBlob(String, InputStream)");
}
@Override
public void setBlob(String parameterName, /* @Nullable */ Blob x) throws SQLException {
throw Driver.notImplemented(this.getClass(), "setBlob(String, Blob)");
}
@Override
public void setClob(String parameterName, /* @Nullable */ Clob x) throws SQLException {
throw Driver.notImplemented(this.getClass(), "setClob(String, Clob)");
}
@Override
public void setNClob(String parameterName, /* @Nullable */ Reader reader, long length) throws SQLException {
throw Driver.notImplemented(this.getClass(), "setNClob(String, Reader, long)");
}
@Override
public void setNClob(String parameterName, /* @Nullable */ Reader reader) throws SQLException {
throw Driver.notImplemented(this.getClass(), "setNClob(String, Reader)");
}
@Override
public /* @Nullable */ NClob getNClob(/* @Positive */ int parameterIndex) throws SQLException {
throw Driver.notImplemented(this.getClass(), "getNClob(int)");
}
@Override
public /* @Nullable */ NClob getNClob(String parameterName) throws SQLException {
throw Driver.notImplemented(this.getClass(), "getNClob(String)");
}
@Override
public void setSQLXML(String parameterName, /* @Nullable */ SQLXML xmlObject) throws SQLException {
throw Driver.notImplemented(this.getClass(), "setSQLXML(String, SQLXML)");
}
@Override
public /* @Nullable */ SQLXML getSQLXML(/* @Positive */ int parameterIndex) throws SQLException {
Object result = checkIndex(parameterIndex, Types.SQLXML, "SQLXML");
return (SQLXML) result;
}
@Override
public /* @Nullable */ SQLXML getSQLXML(String parameterIndex) throws SQLException {
throw Driver.notImplemented(this.getClass(), "getSQLXML(String)");
}
@Override
public String getNString(/* @Positive */ int parameterIndex) throws SQLException {
throw Driver.notImplemented(this.getClass(), "getNString(int)");
}
@Override
public /* @Nullable */ String getNString(String parameterName) throws SQLException {
throw Driver.notImplemented(this.getClass(), "getNString(String)");
}
@Override
public /* @Nullable */ Reader getNCharacterStream(/* @Positive */ int parameterIndex) throws SQLException {
throw Driver.notImplemented(this.getClass(), "getNCharacterStream(int)");
}
@Override
public /* @Nullable */ Reader getNCharacterStream(String parameterName) throws SQLException {
throw Driver.notImplemented(this.getClass(), "getNCharacterStream(String)");
}
@Override
public /* @Nullable */ Reader getCharacterStream(/* @Positive */ int parameterIndex) throws SQLException {
throw Driver.notImplemented(this.getClass(), "getCharacterStream(int)");
}
@Override
public /* @Nullable */ Reader getCharacterStream(String parameterName) throws SQLException {
throw Driver.notImplemented(this.getClass(), "getCharacterStream(String)");
}
@Override
public <T> /* @Nullable */ T getObject(/* @Positive */ int parameterIndex, Class<T> type)
throws SQLException {
if (type == ResultSet.class) {
return type.cast(getObject(parameterIndex));
}
throw new PSQLException(GT.tr("Unsupported type conversion to {1}.", type),
PSQLState.INVALID_PARAMETER_VALUE);
}
@Override
public <T> /* @Nullable */ T getObject(String parameterName, Class<T> type) throws SQLException {
throw Driver.notImplemented(this.getClass(), "getObject(String, Class)");
}
@Override
public void registerOutParameter(String parameterName, int sqlType) throws SQLException {
throw Driver.notImplemented(this.getClass(), "registerOutParameter(String,int)");
}
@Override
public void registerOutParameter(String parameterName, int sqlType, int scale)
throws SQLException {
throw Driver.notImplemented(this.getClass(), "registerOutParameter(String,int,int)");
}
@Override
public void registerOutParameter(String parameterName, int sqlType, String typeName)
throws SQLException {
throw Driver.notImplemented(this.getClass(), "registerOutParameter(String,int,String)");
}
@Override
public /* @Nullable */ URL getURL(/* @Positive */ int parameterIndex) throws SQLException {
throw Driver.notImplemented(this.getClass(), "getURL(String)");
}
@Override
public void setURL(String parameterName, /* @Nullable */ URL val) throws SQLException {
throw Driver.notImplemented(this.getClass(), "setURL(String,URL)");
}
@Override
public void setNull(String parameterName, int sqlType) throws SQLException {
throw Driver.notImplemented(this.getClass(), "setNull(String,int)");
}
@Override
public void setBoolean(String parameterName, boolean x) throws SQLException {
throw Driver.notImplemented(this.getClass(), "setBoolean(String,boolean)");
}
@Override
public void setByte(String parameterName, byte x) throws SQLException {
throw Driver.notImplemented(this.getClass(), "setByte(String,byte)");
}
@Override
public void setShort(String parameterName, short x) throws SQLException {
throw Driver.notImplemented(this.getClass(), "setShort(String,short)");
}
@Override
public void setInt(String parameterName, int x) throws SQLException {
throw Driver.notImplemented(this.getClass(), "setInt(String,int)");
}
@Override
public void setLong(String parameterName, long x) throws SQLException {
throw Driver.notImplemented(this.getClass(), "setLong(String,long)");
}
@Override
public void setFloat(String parameterName, float x) throws SQLException {
throw Driver.notImplemented(this.getClass(), "setFloat(String,float)");
}
@Override
public void setDouble(String parameterName, double x) throws SQLException {
throw Driver.notImplemented(this.getClass(), "setDouble(String,double)");
}
@Override
public void setBigDecimal(String parameterName, /* @Nullable */ BigDecimal x) throws SQLException {
throw Driver.notImplemented(this.getClass(), "setBigDecimal(String,BigDecimal)");
}
@Override
public void setString(String parameterName, /* @Nullable */ String x) throws SQLException {
throw Driver.notImplemented(this.getClass(), "setString(String,String)");
}
@Override
public void setBytes(String parameterName, byte /* @Nullable */ [] x) throws SQLException {
throw Driver.notImplemented(this.getClass(), "setBytes(String,byte)");
}
@Override
public void setDate(String parameterName, /* @Nullable */ Date x) throws SQLException {
throw Driver.notImplemented(this.getClass(), "setDate(String,Date)");
}
@Override
public void setTime(String parameterName, /* @Nullable */ Time x) throws SQLException {
throw Driver.notImplemented(this.getClass(), "setTime(String,Time)");
}
@Override
public void setTimestamp(String parameterName, /* @Nullable */ Timestamp x) throws SQLException {
throw Driver.notImplemented(this.getClass(), "setTimestamp(String,Timestamp)");
}
@Override
public void setAsciiStream(String parameterName, /* @Nullable */ InputStream x, int length) throws SQLException {
throw Driver.notImplemented(this.getClass(), "setAsciiStream(String,InputStream,int)");
}
@Override
public void setBinaryStream(String parameterName, /* @Nullable */ InputStream x, int length) throws SQLException {
throw Driver.notImplemented(this.getClass(), "setBinaryStream(String,InputStream,int)");
}
@Override
public void setObject(String parameterName, /* @Nullable */ Object x, int targetSqlType, int scale)
throws SQLException {
throw Driver.notImplemented(this.getClass(), "setObject(String,Object,int,int)");
}
@Override
public void setObject(String parameterName, /* @Nullable */ Object x, int targetSqlType) throws SQLException {
throw Driver.notImplemented(this.getClass(), "setObject(String,Object,int)");
}
@Override
public void setObject(String parameterName, /* @Nullable */ Object x) throws SQLException {
throw Driver.notImplemented(this.getClass(), "setObject(String,Object)");
}
@Override
public void setCharacterStream(String parameterName, /* @Nullable */ Reader reader, int length)
throws SQLException {
throw Driver.notImplemented(this.getClass(), "setCharacterStream(String,Reader,int)");
}
@Override
public void setDate(String parameterName, /* @Nullable */ Date x, /* @Nullable */ Calendar cal) throws SQLException {
throw Driver.notImplemented(this.getClass(), "setDate(String,Date,Calendar)");
}
@Override
public void setTime(String parameterName, /* @Nullable */ Time x, /* @Nullable */ Calendar cal) throws SQLException {
throw Driver.notImplemented(this.getClass(), "setTime(String,Time,Calendar)");
}
@Override
public void setTimestamp(String parameterName, /* @Nullable */ Timestamp x, /* @Nullable */ Calendar cal) throws SQLException {
throw Driver.notImplemented(this.getClass(), "setTimestamp(String,Timestamp,Calendar)");
}
@Override
public void setNull(String parameterName, int sqlType, String typeName) throws SQLException {
throw Driver.notImplemented(this.getClass(), "setNull(String,int,String)");
}
@Override
public /* @Nullable */ String getString(String parameterName) throws SQLException {
throw Driver.notImplemented(this.getClass(), "getString(String)");
}
@Override
public boolean getBoolean(String parameterName) throws SQLException {
throw Driver.notImplemented(this.getClass(), "getBoolean(String)");
}
@Override
public byte getByte(String parameterName) throws SQLException {
throw Driver.notImplemented(this.getClass(), "getByte(String)");
}
@Override
public short getShort(String parameterName) throws SQLException {
throw Driver.notImplemented(this.getClass(), "getShort(String)");
}
@Override
public int getInt(String parameterName) throws SQLException {
throw Driver.notImplemented(this.getClass(), "getInt(String)");
}
@Override
public long getLong(String parameterName) throws SQLException {
throw Driver.notImplemented(this.getClass(), "getLong(String)");
}
@Override
public float getFloat(String parameterName) throws SQLException {
throw Driver.notImplemented(this.getClass(), "getFloat(String)");
}
@Override
public double getDouble(String parameterName) throws SQLException {
throw Driver.notImplemented(this.getClass(), "getDouble(String)");
}
@Override
public byte /* @Nullable */ [] getBytes(String parameterName) throws SQLException {
throw Driver.notImplemented(this.getClass(), "getBytes(String)");
}
@Override
public /* @Nullable */ Date getDate(String parameterName) throws SQLException {
throw Driver.notImplemented(this.getClass(), "getDate(String)");
}
@Override
public Time getTime(String parameterName) throws SQLException {
throw Driver.notImplemented(this.getClass(), "getTime(String)");
}
@Override
public /* @Nullable */ Timestamp getTimestamp(String parameterName) throws SQLException {
throw Driver.notImplemented(this.getClass(), "getTimestamp(String)");
}
@Override
public /* @Nullable */ Object getObject(String parameterName) throws SQLException {
throw Driver.notImplemented(this.getClass(), "getObject(String)");
}
@Override
public /* @Nullable */ BigDecimal getBigDecimal(String parameterName) throws SQLException {
throw Driver.notImplemented(this.getClass(), "getBigDecimal(String)");
}
public /* @Nullable */ Object getObjectImpl(String parameterName, /* @Nullable */ Map<String, Class<?>> map) throws SQLException {
throw Driver.notImplemented(this.getClass(), "getObject(String,Map)");
}
@Override
public /* @Nullable */ Ref getRef(String parameterName) throws SQLException {
throw Driver.notImplemented(this.getClass(), "getRef(String)");
}
@Override
public /* @Nullable */ Blob getBlob(String parameterName) throws SQLException {
throw Driver.notImplemented(this.getClass(), "getBlob(String)");
}
@Override
public /* @Nullable */ Clob getClob(String parameterName) throws SQLException {
throw Driver.notImplemented(this.getClass(), "getClob(String)");
}
@Override
public /* @Nullable */ Array getArray(String parameterName) throws SQLException {
throw Driver.notImplemented(this.getClass(), "getArray(String)");
}
@Override
public /* @Nullable */ Date getDate(String parameterName, /* @Nullable */ Calendar cal) throws SQLException {
throw Driver.notImplemented(this.getClass(), "getDate(String,Calendar)");
}
@Override
public /* @Nullable */ Time getTime(String parameterName, /* @Nullable */ Calendar cal) throws SQLException {
throw Driver.notImplemented(this.getClass(), "getTime(String,Calendar)");
}
@Override
public /* @Nullable */ Timestamp getTimestamp(String parameterName, /* @Nullable */ Calendar cal) throws SQLException {
throw Driver.notImplemented(this.getClass(), "getTimestamp(String,Calendar)");
}
@Override
public /* @Nullable */ URL getURL(String parameterName) throws SQLException {
throw Driver.notImplemented(this.getClass(), "getURL(String)");
}
@Override
public void registerOutParameter(/* @Positive */ int parameterIndex, int sqlType, int scale) throws SQLException {
// ignore scale for now
registerOutParameter(parameterIndex, sqlType);
}
}
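// Editorial sketch (not part of the original source): every named-parameter getter above throws
// Driver.notImplemented(), so OUT parameters must be registered and read by ordinal index.
// A minimal usage sketch, assuming a server-side function add_one(int) returning int; the
// function name and the Connection instance "conn" are illustrative only:
//
//   try (CallableStatement cs = conn.prepareCall("{ ? = call add_one(?) }")) {
//     cs.registerOutParameter(1, java.sql.Types.INTEGER);
//     cs.setInt(2, 41);
//     cs.execute();
//     int result = cs.getInt(1); // index-based getter; getInt("name") would throw notImplemented
//   }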
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/jdbc/PgClob.java 0100664 0000000 0000000 00000006277 00000250600 024566 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2004, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.jdbc;
import org.postgresql.Driver;
import org.postgresql.core.BaseConnection;
import org.postgresql.largeobject.LargeObject;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.io.Reader;
import java.io.Writer;
import java.nio.charset.Charset;
import java.sql.Clob;
import java.sql.SQLException;
public class PgClob extends AbstractBlobClob implements Clob {
public PgClob(BaseConnection conn, long oid) throws SQLException {
super(conn, oid);
}
@Override
public Reader getCharacterStream(long pos, long length) throws SQLException {
try (ResourceLock ignore = lock.obtain()) {
checkFreed();
throw Driver.notImplemented(this.getClass(), "getCharacterStream(long, long)");
}
}
@Override
public int setString(long pos, String str) throws SQLException {
try (ResourceLock ignore = lock.obtain()) {
checkFreed();
throw Driver.notImplemented(this.getClass(), "setString(long,String)");
}
}
@Override
public int setString(long pos, String str, int offset, int len) throws SQLException {
try (ResourceLock ignore = lock.obtain()) {
checkFreed();
throw Driver.notImplemented(this.getClass(), "setString(long,String,int,int)");
}
}
@Override
public OutputStream setAsciiStream(long pos) throws SQLException {
try (ResourceLock ignore = lock.obtain()) {
checkFreed();
throw Driver.notImplemented(this.getClass(), "setAsciiStream(long)");
}
}
@Override
public Writer setCharacterStream(long pos) throws SQLException {
try (ResourceLock ignore = lock.obtain()) {
checkFreed();
throw Driver.notImplemented(this.getClass(), "setCharacterStream(long)");
}
}
@Override
public InputStream getAsciiStream() throws SQLException {
return getBinaryStream();
}
@Override
public Reader getCharacterStream() throws SQLException {
try (ResourceLock ignore = lock.obtain()) {
Charset connectionCharset = Charset.forName(conn.getEncoding().name());
return new InputStreamReader(getBinaryStream(), connectionCharset);
}
}
@Override
public String getSubString(long i, int j) throws SQLException {
try (ResourceLock ignore = lock.obtain()) {
assertPosition(i, j);
LargeObject lo = getLo(false);
lo.seek((int) i - 1);
Charset connectionCharset = Charset.forName(conn.getEncoding().name());
return new String(lo.read(j), connectionCharset);
}
}
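// Editorial sketch (not part of the original source): getSubString() and getCharacterStream()
// decode the backing large object with the connection's character encoding, and Clob positions
// are 1-based per the JDBC spec (note the "i - 1" seek above). A minimal read sketch; the query,
// column and Statement instance are illustrative only:
//
//   try (ResultSet rs = stmt.executeQuery("SELECT doc FROM docs WHERE id = 1")) {
//     if (rs.next()) {
//       Clob clob = rs.getClob(1);
//       String head = clob.getSubString(1, 100); // first 100 characters, 1-based position
//       Reader all = clob.getCharacterStream();  // streams the full value, decoded as above
//     }
//   }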
/**
* For now, this is not implemented.
*/
@Override
public long position(String pattern, long start) throws SQLException {
try (ResourceLock ignore = lock.obtain()) {
checkFreed();
throw Driver.notImplemented(this.getClass(), "position(String,long)");
}
}
/**
* This should simply pass the character data of the pattern Clob; it is not implemented yet.
*/
@Override
public long position(Clob pattern, long start) throws SQLException {
try (ResourceLock ignore = lock.obtain()) {
checkFreed();
throw Driver.notImplemented(this.getClass(), "position(Clob,long)");
}
}
}
postgresql-42.7.6-jdbc-src/src/main/java/org/postgresql/jdbc/PgConnection.java 0100664 0000000 0000000 00000172772 00000250600 026012 0 ustar 00 0000000 0000000 /*
* Copyright (c) 2004, PostgreSQL Global Development Group
* See the LICENSE file in the project root for more information.
*/
package org.postgresql.jdbc;
import static org.postgresql.util.internal.Nullness.castNonNull;
import org.postgresql.Driver;
import org.postgresql.PGNotification;
import org.postgresql.PGProperty;
import org.postgresql.copy.CopyManager;
import org.postgresql.core.BaseConnection;
import org.postgresql.core.BaseStatement;
import org.postgresql.core.CachedQuery;
import org.postgresql.core.ConnectionFactory;
import org.postgresql.core.Encoding;
import org.postgresql.core.Oid;
import org.postgresql.core.ProtocolVersion;
import org.postgresql.core.Query;
import org.postgresql.core.QueryExecutor;
import org.postgresql.core.ReplicationProtocol;
import org.postgresql.core.ResultHandlerBase;
import org.postgresql.core.ServerVersion;
import org.postgresql.core.SqlCommand;
import org.postgresql.core.TransactionState;
import org.postgresql.core.TypeInfo;
import org.postgresql.core.Utils;
import org.postgresql.core.Version;
import org.postgresql.fastpath.Fastpath;
import org.postgresql.geometric.PGbox;
import org.postgresql.geometric.PGcircle;
import org.postgresql.geometric.PGline;
import org.postgresql.geometric.PGlseg;
import org.postgresql.geometric.PGpath;
import org.postgresql.geometric.PGpoint;
import org.postgresql.geometric.PGpolygon;
import org.postgresql.largeobject.LargeObjectManager;
import org.postgresql.replication.PGReplicationConnection;
import org.postgresql.replication.PGReplicationConnectionImpl;
import org.postgresql.util.DriverInfo;
import org.postgresql.util.GT;
import org.postgresql.util.HostSpec;
import org.postgresql.util.LazyCleaner;
import org.postgresql.util.LruCache;
import org.postgresql.util.PGBinaryObject;
import org.postgresql.util.PGInterval;
import org.postgresql.util.PGmoney;
import org.postgresql.util.PGobject;
import org.postgresql.util.PSQLException;
import org.postgresql.util.PSQLState;
import org.postgresql.xml.DefaultPGXmlFactoryFactory;
import org.postgresql.xml.LegacyInsecurePGXmlFactoryFactory;
import org.postgresql.xml.PGXmlFactoryFactory;
// import org.checkerframework.checker.nullness.qual.Nullable;
// import org.checkerframework.checker.nullness.qual.PolyNull;
// import org.checkerframework.dataflow.qual.Pure;
import java.io.IOException;
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;
import java.security.Permission;
import java.sql.Array;
import java.sql.Blob;
import java.sql.CallableStatement;
import java.sql.ClientInfoStatus;
import java.sql.Clob;
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.NClob;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLClientInfoException;
import java.sql.SQLException;
import java.sql.SQLPermission;
import java.sql.SQLWarning;
import java.sql.SQLXML;
import java.sql.Savepoint;
import java.sql.Statement;
import java.sql.Struct;
import java.sql.Types;
import java.util.Arrays;
import java.util.Collections;
import java.util.Enumeration;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Locale;
import java.util.Map;
import java.util.NoSuchElementException;
import java.util.Properties;
import java.util.Set;
import java.util.StringTokenizer;
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.Executor;
import java.util.concurrent.locks.Condition;
import java.util.logging.Level;
import java.util.logging.Logger;
public class PgConnection implements BaseConnection {
private static final Logger LOGGER = Logger.getLogger(PgConnection.class.getName());
private static final Set<Integer> SUPPORTED_BINARY_OIDS = getSupportedBinaryOids();
private static final SQLPermission SQL_PERMISSION_ABORT = new SQLPermission("callAbort");
private static final SQLPermission SQL_PERMISSION_NETWORK_TIMEOUT = new SQLPermission("setNetworkTimeout");
private static final /* @Nullable */ MethodHandle SYSTEM_GET_SECURITY_MANAGER;
private static final /* @Nullable */ MethodHandle SECURITY_MANAGER_CHECK_PERMISSION;
static {
MethodHandle systemGetSecurityManagerHandle = null;
MethodHandle securityManagerCheckPermission = null;
try {
Class<?> securityManagerClass = Class.forName("java.lang.SecurityManager");
systemGetSecurityManagerHandle =
MethodHandles.lookup().findStatic(System.class, "getSecurityManager",
MethodType.methodType(securityManagerClass));
securityManagerCheckPermission =
MethodHandles.lookup().findVirtual(securityManagerClass, "checkPermission",
MethodType.methodType(void.class, Permission.class));
} catch (NoSuchMethodException | IllegalAccessException | ClassNotFoundException ignore) {
// Ignore if the security manager is not available
}
SYSTEM_GET_SECURITY_MANAGER = systemGetSecurityManagerHandle;
SECURITY_MANAGER_CHECK_PERMISSION = securityManagerCheckPermission;
}
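// Editorial sketch (not part of the original source): the handles above are resolved reflectively
// so this class still loads on JDKs where java.lang.SecurityManager is deprecated for removal or
// unavailable; if the lookup fails, both handles stay null and permission checks are skipped.
// A typical invocation pattern for such handles (illustrative only, not necessarily the exact
// helper used elsewhere in this class):
//
//   if (SYSTEM_GET_SECURITY_MANAGER != null && SECURITY_MANAGER_CHECK_PERMISSION != null) {
//     try {
//       Object securityManager = SYSTEM_GET_SECURITY_MANAGER.invoke();
//       if (securityManager != null) {
//         SECURITY_MANAGER_CHECK_PERMISSION.invoke(securityManager, SQL_PERMISSION_ABORT);
//       }
//     } catch (Throwable t) {
//       // MethodHandle.invoke declares Throwable; wrap or rethrow as appropriate
//     }
//   }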
private enum ReadOnlyBehavior {
ignore,
transaction,
always
}
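// Editorial note (not part of the original source): these values appear to mirror the driver's
// readOnlyMode connection property, which controls what Connection.setReadOnly(true) does:
// "ignore" (no effect), "transaction" (start read-only transactions while the connection is
// read only), or "always" (keep the session read only). Illustrative URL; host and database
// are placeholders:
//
//   jdbc:postgresql://localhost:5432/mydb?readOnlyMode=always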
private final ResourceLock lock = new ResourceLock();
private final Condition lockCondition = lock.newCondition();
//
// Data initialized on construction:
//
private final Properties clientInfo;
/* URL we were created via */
private final String creatingURL;
private final ReadOnlyBehavior readOnlyBehavior;
private /* @Nullable */ Throwable openStackTrace;
/**
* This field keeps the finalize action alive, so its .finalize() method is called only
* when the connection itself becomes unreachable.
* Moving .finalize() to a separate object allows the JVM to release all the other objects
* referenced by PgConnection earlier.
*/
private final PgConnectionCleaningAction finalizeAction;
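// Editorial note (not part of the original source): moving the cleanup logic into a separate
// object follows the usual cleaner pattern: the cleaning action must not hold a reference to the
// tracked object, otherwise the connection could never become unreachable. A generic illustration
// with java.lang.ref.Cleaner (this class uses its own LazyCleaner utility instead; "owner" and
// "resource" are hypothetical):
//
//   Cleaner cleaner = Cleaner.create();
//   Cleaner.Cleanable cleanable = cleaner.register(owner, resource::release); // no reference to owner
//   // on explicit close():
//   cleanable.clean();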
private final Object leakHandle = new Object();
/* Actual network handler */
private final QueryExecutor queryExecutor;
/* Query that runs COMMIT */
private final Query commitQuery;
/* Query that runs ROLLBACK */
private final Query rollbackQuery;
private final CachedQuery setSessionReadOnly;
private final CachedQuery setSessionNotReadOnly;
private final TypeInfo typeCache;
private boolean disableColumnSanitiser;
// Default statement prepare threshold.
protected int prepareThreshold;
/**
* Default fetch size for statement.
*
* @see PGProperty#DEFAULT_ROW_FETCH_SIZE
*/
protected int defaultFetchSize;
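// Editorial note (not part of the original source): prepareThreshold and defaultFetchSize are
// normally driven by the prepareThreshold and defaultRowFetchSize connection properties.
// Illustrative setup; URL, credentials and values are examples only:
//
//   Properties props = new Properties();
//   props.setProperty("user", "app");
//   props.setProperty("password", "secret");
//   props.setProperty("prepareThreshold", "5");       // use server-side prepare after 5 executions
//   props.setProperty("defaultRowFetchSize", "1000"); // fetch in 1000-row batches (inside a transaction)
//   Connection conn = DriverManager.getConnection("jdbc:postgresql://localhost:5432/mydb", props);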
// Default forcebinary option.
protected boolean forcebinary;
/**
* Oids for which binary transfer should be disabled.
*/
private final Set<? extends Integer> binaryDisabledOids;
private int rsHoldability = ResultSet.CLOSE_CURSORS_AT_COMMIT;
private int savepointId;
// Connection's autocommit state.
private boolean autoCommit = true;
// Connection's readonly state.
private boolean readOnly;
// Whether DatabaseMetaData filters out database objects for which the current user has no privileges granted
private final boolean hideUnprivilegedObjects;
// Whether to include error details in logging and exceptions
private final boolean logServerErrorDetail;
// Bind String to UNSPECIFIED or VARCHAR?
private final boolean bindStringAsVarchar;
// Current warnings; there might be more on queryExecutor too.
private /* @Nullable */ SQLWarning firstWarning;
/**
* The replication protocol in the current PostgreSQL version (10devel) supports only a limited
* number of commands.
*/
private final boolean replicationConnection;
private final LruCache<FieldMetadata.Key, FieldMetadata> fieldMetadataCache;
private final /* @Nullable */ String xmlFactoryFactoryClass;
private /* @Nullable */ PGXmlFactoryFactory xmlFactoryFactory;
private final LazyCleaner.Cleanable