Detailed Version Log: Deephaven v1.20230131
Patch | Details |
---|---|
197 | Merge updates from 1.20221001.236 |
196 | DH-14639: Automatically fix jars which lack an embedded pom, for sbom completeness |
195 | Merge updates from 1.20221001.233 |
194 | Merge updates from 1.20221001.232 |
193 | Merge updates from 1.20221001.231 |
192 | Merge updates from 1.20221001.230 |
191 | DH-15433: Fix republishing job for sbom extension |
190 | DH-15422: Prevent admin_init from being executed twice |
189 | Merge updates from 1.20221001.227 |
188 | Merge updates from 1.20221001.224 |
187 | DH-15333: Update java generated from forms to match IJ generated format |
186 | DH-15178: correct TDCP's handling of removed data - remove locations on subscribe, during rescan DH-15026: correct TDCP's handling of removed data - remove locations on error |
185 | DH-15316: Fix silverheels VM deployment |
184 | Merge updates from 1.20221001.221 |
183 | DH-15251: Remove unused logic from DeephavenInstallScript.groovy |
182 | Merge updates from 1.20221001.213 |
181 | Merge updates from 1.20221001.212 |
180 | Merge updates from 1.20221001.209 |
179 | Merge updates from 1.20221001.208 |
178 | Merge updates from 1.20221001.206 |
177 | Merge updates from 1.20221001.205 |
176 | DH-14639: Generate SBOM with each build |
175 | DH-15149: Fix failing CompressedFileUtils Unit Tests |
174 | Backport DH-14821: Make Dnd use web’s npm executable |
173 | Merge updates from 1.20221001.204 |
172 | DH-15124: Make prcheck jenkins job use jdk11 |
171 | DH-13577: Add Release Notes for Web UI subplots support |
170 | DH-15072: Stop building in jdk13 |
169 | Merge updates from 1.20221001.197 |
168 | Merge updates from 1.20221001.195 |
167 | Merge updates from 1.20221001.194 |
166 | DH-11466: add release note about script improvements |
165 | Merge updates from 1.20221001.191 |
164 | Merge updates from 1.20221001.188 |
163 | Merge updates from 1.20221001.184 |
162 | Merge updates from 1.20221001.182 |
161 | DH-14794: update IntelliJ code style |
160 | Fix Javadoc build break from merge. |
159 | Merge updates from 1.20221001.177 |
158 | Merge updates from 1.20221001.175 |
157 | Merge updates from 1.20221001.174 |
156 | Merge updates from 1.20221001.173 |
155 | Merge updates from 1.20221001.172 |
154 | DH-14768: Regenerate datagen for LogEntry interface changes |
153 | DH-14198: LogEntry interface changes |
152 | Merge updates from 1.20221001.168 |
151 | DH-14760: Make DnD not-break when using unmerged, jenkins-built fishlib versions |
150 | DH-14734: spotless fixups |
149 | Merge updates from 1.20221001.165 |
148 | DH-14713: Auth client should retry failed attempts to refresh cookie when there is still time |
147 | Merge updates from 1.20221001.164 |
146 | DH-12700: Update release notes to reflect DH-14195 creating user on upgrade. |
145 | DH-13128: Release notes for removing old plotting. |
144 | Edit Release Notes and Changelogs |
143 | Merge updates from 1.20221001.163 |
142 | DH-14694: Test Automation Dnd worker |
141 | Merge updates from 1.20221001.161 |
140 | Update Web UI to v0.31.5 |
139 | Merge updates from 1.20221001.160 |
138 | DH-14691: Allow gRPC auth service to be configured with a SSL authority override other than the default |
137 | DH-14624: Issues in configuration for envoy after dispatcher client changes |
136 | Merge updates from 1.20221001.159 |
135 | DH-14681: Make worker dispatcher-response timeout configurable |
134 | DH-14372: Add ability for tailer to specify a time to look back for binary logs on startup |
133 | DH-13385: Update upgrade instructions after DH-14189. |
132 | Merge updates from 1.20221001.156 |
131 | DH-14612: Cleanup gone clients from gRPC authentication server state |
130 | DH-14662: Fixed an issue remapping shadowed codecs when loading XML Schemas in DnD |
129 | DH-14627: Web eslint tests are not running in GitHub Actions checks |
128 | DH-14647: Fixed a bug where filtering a reinterpreted object source as dictionary produces incorrect results |
127 | DH-14147: Add integration tests for dynamic data routing |
126 | DH-14465: Eliminate extraneous test dependencies |
125 | DH-14577: Update DnD build and code to include JS plugins |
124 | Merge updates from 1.20221001.151 |
123 | DH-14264: Make DHFileDigester include plugins/*/global and plugins/*/client |
122 | DH-14619: Use none instead of null for KubernetesControl Field in Old Log Formats |
121 | Merge updates from 1.20221001.149 |
120 | Fix for merged tests. |
119 | Merge test compilation fix. |
118 | Merge updates from 1.20221001.148 |
117 | DH-14003: Better support for custom worker templates on k8s clusters |
116 | DH-14603: NPE in authentication server error response for AlreadyAuthenticatedException |
115 | Update Web UI to v0.31.4 |
114 | DH-14587: Add property for controlling output directory of ILF-generated test sources |
113 | Update Web UI to v0.31.3 |
112 | Merge updates from 1.20221001.141 |
111 | Merge updates from 1.20221001.140 |
110 | DH-14565: Issues with auth-server failover |
109 | DH-14518: Add missing auth-server files to config_packager.sh DH-14531: Fix backup issues in config_packager.sh DH-14557: Remove duplicate truststore files from config_packager.sh |
108 | DH-14522: Fix generate_loggers script to use correct java generation directory for compilation |
107 | DH-14554: Add Javadoc about Listener Dependencies |
106 | DH-14469: fix --import regression introduced with controller_tool --status option |
105 | DH-14364: Fix error reporting in Web UI query/dashboard import |
104 | DH-14550: Fix arguments in config_import script |
103 | DH-14550: Support SAML in Kubernetes deployments |
102 | Merge updates from 1.20221001.138 |
101 | Merge updates from 1.20221001.136 |
100 | DH-13694: Fix authentication from IntradayLoggerBuilder unit test forward merge |
099 | Merge updates from 1.20221001.135 |
098 | DH-14541: Authentication by delegate token fails in multi-auth server deployments |
097 | DH-14516: Multiple auth servers don't work |
096 | Merge updates from 1.20221001.133 |
095 | DH-14509: Allow an authentication client to retry challengeResponse |
094 | DH-14514: Fix NPE in TDCP abnormal shutdown cases |
093 | DH-14470: Fixes ugly error message on login failure |
092 | DH-14506: Fix javadoc in AbstractBulkValuesWriter that breaks java 8 build |
091 | Merge updates from 1.20221001.132 |
090 | DH-14479: Silverheels release notes updates |
089 | DH-14493: Increase worker startup timeout |
088 | DH-14254: NullWithGroups PermissionFilterProvider returns full access properly |
087 | Merge updates from 1.20221001.128 |
086 | DH-14416: Fix formula cache issues associated with simultaneous DnD workers |
085 | DH-14425: Fix DnD schema shadow class loading issues |
084 | DH-14408: Enable Swing remote-telemetry by default |
083 | DH-13317: Updates to DateTime selection widgets (swing) |
082 | DH-14457: Exclude Barrage and DHC java client from DnD EnterpriseShadow |
081 | DH-14449: Document docker image builds for various platform architectures |
080 | DH-13759: minor fix for test objections on new controller_tool status option |
079 | DH-14279: Add ability to map additional persistent volumes to k8s-based workers |
078 | DH-14442: Make toplevel spotlessApply task invoke spotlessApply on Dnd also |
077 | Spotless application for DhcInDhe. |
076 | DH-14437: Prevent NPE when nulling table from query scope |
075 | Update Web UI to v0.31.1 |
074 | DH-14422: Improve DnD worker keepalive implementation |
073 | Fix failing test. |
072 | Spotless application for compilation fix. |
071 | Compilation fix. |
070 | Spotless application from 067. |
069 | Spotless application to Telemetry. |
068 | Merge updates from 1.20221001.124 |
067 | DH-14196: DnD Worker Keepalive |
066 | DH-14285: Improve return value, error handling, and heap size control |
065 | DH-14376: Upgrade to JUnit5 and add k8s worker test |
064 | DH-14387: Fix race between DnD worker completion and dispatcher cleanup |
063 | Merge updates from 1.20221001.121 |
062 | DH-14370: Bump dhcVersion to 0.21.1 |
061 | DH-14373: Add k8s testing util to kube-shadow jar |
060 | DH-14285: Remove Java 11 usage |
059 | DH-14285: Create a basic DnD worker integration test tool |
058 | DH-13988: Fix variable changes being reported after running command |
057 | DH-13722: Fix some import dashboard functionality |
056 | DH-14307: Add missing constants for Seek Row functionality |
055 | DH-14317: Bug Fix to resolve static endpoints in new format using the correct tag |
054 | DH-14305: Fix controller client being unable to reauthenticate. |
053 | DH-14262: Mac desktop icons are missing in silverheels |
052 | DH-12664: Fix Dictionary column sources incorrectly matching nulls for parquet |
051 | DH-12664: Fix Dictionary column sources incorrectly matching nulls |
050 | DH-14195: Remove obsolete upgrade instructions and scripts. |
049 | DH-12702: Handle unserializable exception when fetching nonexistent barrage tables |
048 | DH-14311: WorkerKind enabled property should have name in middle |
047 | DH-14303 Fix padding on query monitor buttons |
046 | DH-14294: Make DnD Shadow Version Consistent with Parent Project |
045 | DH-14040 Removing an unwanted import from playground class |
044 | Merge updates from 1.20221001.116 |
043 | DH-14040: Add dhconfig support for service registry |
042 | DH-13787: Subtract the pending row count when displaying row count |
041 | Update Web UI to v0.30.1 |
040 | DH-14284: Bug fixed in dhctl intraday options include and exclude |
039 | Merge updates from 1.20221001.115 |
038 | DH-14280: Build and Publish .tar.gz of Dockerfiles and Helm Chart |
037 | DH-14195: Create Missing etcd users on upgrade from Jackson to Silverheels |
036 | DH-14180: Improve error reporting around DnD Java incompatibility |
035 | DH-14268: Fix AdminViewerList in ShareModal and QM Permissions tab |
034 | DH-14181: Relocate fastdoubleparser in EnterpriseShadowed jar |
033 | Merge updates from 1.20221001.112 |
032 | DH-13391: Update DnD to DHC 0.21.0. |
031 | DH-14218: Failed Worker Starts Do Not Send Error to Dispatcher Client |
030 | DH-13252: Fix exposed internal partition column when ACLs precede application |
029 | DH-14228: securityContext runAsGroup prohibits Secondary Groups on EKS |
028 | DH-14005: Simplify kubernetes install by eliminating bootstrap package |
027 | Moved test files to new location after merge. |
026 | Merge updates from 1.20221001.110 |
025 | DH-14248: Fixed RegionedColumnSourceObjectWithDictionary improperly handling 0 sized symbol offset files |
024 | DH-14050: Fix infinite loop on login |
023 | DH-14234: Fix drafts in Web query monitor |
022 | DH-14133: Fix failing unit tests |
021 | DH-14133: Pass the worker kind from the web UI to the dispatcher |
020 | DH-14190: DND auth support in Web UI |
019 | DH-14216: Image Pull Secrets Missing from Hooks and Management Shell |
018 | DH-14152: Fix tree table NPE from null groupedColumns |
017 | DH-14206: DbAclProvider should not extend DbAclWriter |
016 | DH-14194: Kubernetes deployment with envoy fails with log_info command not found |
015 | Merge updates from 1.20221001.104 |
014 | DH-14189: Migrate passphrase files and create tdcp key |
013 | DH-14185: Fix auth server logging exceptions |
012 | DH-9482: Safe mode |
011 | DH-13759: Controller Tool "Status" Option |
010 | DH-14184: Improve Logging Under Some Error Conditions |
009 | DH-12163: UI Support column header groups |
008 | DH-12182: Column grouping layout hints |
007 | Spotless application. |
006 | DH-14047: Simplify DnD logging |
005 | DH-14182: Fix DnD Build |
004 | DH-14111: Clean up seek row JS API |
003 | DH-14088: Fix broken PQ Settings modal in PQ Editor |
002 | DH-14112: Fix custom-user upgrade |
001 | Initial release creation |
Detailed Release Candidate Version Log: Deephaven v1.20220623beta
Patch | Details |
---|---|
225 | DH-14151: Can't run commands from notebook file with run buttons |
224 | DH-11487: Isolate fishlib hash classes |
223 | Fix logger merge problem. |
222 | DH-14158: Support Flight V2 authentication in DnD (required for new version of community web UI) |
221 | Merge updates from 1.20221001.102 |
220 | DH-13060: Kubernetes worker state monitoring and logging improvements |
219 | DH-14126: Kubernetes Worker Creation Control |
218 | DH-14141: Fixed a bug in FilterCompatibility and updated default DnD properties |
217 | DH-12985: Updated Release Notes to indicate that existing customer scripts need to fix DataRoutingService import statement |
216 | DH-14070: Add a more informative log for a Deferred Endpoint in TDCP when using a fishlib Connector |
215 | DH-12985: Included Upgrade steps in Release Notes |
214 | DH-14131: Fix Helm Swing Issues |
213 | DH-14130: Move Controller Directory to Maven Standard |
212 | Merge updates from 1.20221001.099 |
211 | Changelog formatting updates. |
210 | DH-14004: Add support for helm upgrades |
209 | DH-14123: Fix arguments to "required" function in helm chart |
208 | DH-14121: Add support for GKE internal load balancers. Update k8s README. |
207 | DH-12985: Fix in migration scripts for Dynamic Data Routing change |
206 | DH-13724: Add DnD Python bindings. Update DHC to 0.20.0 |
205 | Merge updates from 1.20221001.098 |
204 | Update Web UI to v0.26.0 |
203 | DH-14043: Fix intermittent authentication issues running tools |
202 | DH-13652: Remove unused Kube directory |
201 | DH-12985: Changes to support Dynamic Data Routing |
200 | DH-14063: Support ExternalDns for Envoy Service in Helm Chart |
199 | DH-13652: Remove unused ice-kube and ice-builds Directories |
198 | DH-14051: spotlessCheck in parallel with build. |
197 | DH-13623: Compilation fix. |
196 | DH-13163: Converted Console folder to TypeScript |
195 | DH-13944: change ownership of /var/log/deephaven/tdcp in installer |
194 | DH-13822: Add DnD Publication to Jenkins Build |
193 | DH-14055: Add missing labels to helm templates. |
192 | Spotless application. |
191 | DH-13623: Initial authentication for DnD worker gRPC calls |
190 | DH-14051: Apply spotless to Barrage, ArrowIntegration, and Auth |
189 | DH-14045: ensure iriscat/iristail work when configuration server is down |
188 | DH-13721: Unit tests for DnD ACLs |
187 | DH-13762: repair error in information logging statement in previous commit. |
186 | DH-14024: change formatter rule, submit UI Designer generated code |
185 | DH-12690: Avoid refresh overhead in RunAndDones |
184 | DH-14036: Set component type when creating table definitions from schemas |
183 | DH-14027: correct error when DIS tailerPort is disabled |
182 | DH-14002: Scope terminated worker cleanup to dispatcher's namespace |
181 | DH-13762: If ControllerTool Matches Zero Queries for Export it Should Report an Error |
180 | DH-14035: Rename 'ServiceRegistry' interface to 'EnvoyDiscoverySupportService' to free the name for use elsewhere |
179 | Merge updates from 1.20221001.094 |
178 | Javadoc fixes. |
177 | DH-13721: Implement ACLs for DnD |
176 | DH-13959: Fix auth server reload tool not returning feedback from failed runs |
175 | DH-14024: submit UI Designer generated code changed by formatter rules changes |
174 | DH-14020: Add worker name and processInfoId to the community worker info screen |
173 | DH-13960: Correctly package tdcp key for query nodes |
172 | DH-14008: Add WorkerName to ConsoleAddress in web API |
171 | DH-14000: Add project-common IntelliJ inspection settings for enterprise team |
170 | DH-13863: Helm and docker updates. |
169 | DH-13825: Reduce noisy grpc output in DH utilities (add tools invoked via bin/iris) |
168 | DH-13939: Fix auth server reload tool |
167 | Merge updates from 1.20221001.087 |
166 | DH-13952: Update to DHC 1.19.1 |
165 | DH-13691: Add community worker support in Web UI |
164 | DH-12576: Allow password deletion through iris_db_user_mod |
163 | Merge updates from 1.20221001.082 |
162 | DH-13875: Use challenge-response auth for JDBC tests |
161 | DH-13928: Add DnD test target to Jenkins |
160 | DH-13879: Fix console log not connecting to code studio |
159 | DH-13888: set_iris_endpoints needs to use Envoy Host for authentication.server.list |
158 | DH-13820: Allow ACL pubKey for challenge-response login attempt |
157 | Merge updates from 1.20221001.080 |
156 | DH-13886: Failed WebServer Starts results in Hung Useless Process |
155 | DH-13820: Allow pubKey import/delete through iris_db_user_mod |
154 | Java 8 compile fix, Javadoc fix. |
153 | DH-13613: Support gRPC Auth Through Envoy |
152 | DH-13140: Fix issue where new CUS couldn't handle sym links |
151 | DH-11669: Publish Javadoc Jars |
150 | DH-13690: Expose WorkerKinds to Web API |
149 | DH-13880: kv-etcd is closing our singleton executor |
148 | DH-13830: Fix reconnection from Web UI broken after gRPC auth changes |
147 | DH-13863: Add Helm Chart to Iris Repository (part 2) |
146 | Merge updates from 1.20221001.069 |
145 | DH-13653: Remove unused default users "superuser" and "illumon", remove interactive login capability for "iris" by default |
144 | DH-13722: Dashboard export from New Tab Screen |
143 | DH-13863: Add Containers to Iris Repository (part 1) |
142 | DH-11466: Include customer generated jar in DH utilities DH-13825: reduce noisy grpc output in DH utilities |
141 | Correction of changelog for .129 |
140 | DH-13725: Add unit test coverage for DnD locations |
139 | DH-13849: Tweak test cases for next release |
138 | Merge updates from 1.20221001.068 |
137 | DH-11468: Fix c++ binary log file writer example filename format |
136 | Merge updates from 1.20221001.065 |
135 | DH-13835: Apply Spotless to DnD Subproject |
134 | DH-13827: Fix upgrade test for etcd installation |
133 | DH-13391: Update to DHC 0.19. |
132 | DH-13832: Shadowing com.sun.management breaks DnD Performance Tracker |
131 | DH-13140: Optionally redo CUS as part of the web api service |
130 | DH-13371: Upgrade etcd to 3.5.5 and use standard installation procedures |
129 | DH-13246: Migrate from react-scripts to Vite for web build system DH-13393: Upgrade web packages to latest (v0.22.0) DH-12796: Add showSystemBadge to redux selectors DH-12976: Better console history handling of multi-line commands - blank lines DH-10127: Register monaco workers properly DH-12503: Time slider focus behavior when swapping start/end times DH-13577: Support subplots in the Web UI |
128 | DH-13775: Preserve table description when adding formatting |
127 | DH-13788: Update Dependencies to Address CVEs |
126 | Merge updates from 1.20221001.060 |
125 | DH-13736: increase default dsa key size to 2048 and default signature algorithms to SHA256withDSA DH-13771: broaden classpath for dhconfig to correct datagen tests |
124 | Merge updates from 1.20221001.059 |
123 | DH-13746: Allow c# pubKey to use different signature algorithms |
122 | Revert DH-13513: increase default dsa key size to 2048 and default signature algorithms to SHA256withDSA |
121 | DH-13761: Add WorkerCreationJson to Web API |
120 | Merge updates from 1.20221001.058 |
119 | Fix hidden merge conflict in .118. |
118 | DH-13513: increase default dsa key size to 2048 and default signature algorithms to SHA256withDSA |
117 | DH-13476: Implement basic DB interface for DnD. Support TDS subscriptions & routing |
116 | DH-13466: make listener compilation the default when importing schemas |
115 | DH-13716: Correct log messages with missing endl() |
114 | Merge updates from 1.20221001.045 |
113 | Merge updates from 1.20221001.044 |
112 | DH-13698: DnD Worker Arguments Broken |
111 | Merge updates from 1.20221001.041 |
110 | DH-13693: etcd client creation migration scripts should use return instead of continue |
109 | Merge updates from 1.20221001.038 |
108 | Merge updates from 1.20221001.037 |
107 | DH-13647: Refactor data routing service interface and yaml parsing to support service registration and extraction of table data service |
106 | DH-13679: Fix assignment count tracking in dispatcher |
105 | DH-13576: Add column/row to ChartDescriptor for Web |
104 | DH-13173: Convert folder web/client-ui/src/test to TypeScript. |
103 | DH-13164: Convert folder web/client-ui/src/include to TypeScript. |
102 | DH-13619: Fixes for the new Dispatcher/worker interactions after early worker auth. |
101 | DH-13622: Enable Python for Community Worker DH-13625: ProcessInfoId should be attached to failed worker start Exceptions DH-10225: Add System Properties to RemoteProcessingRequest DH-13562: Fix for reporting errors between registration and ready. |
100 | DH-13413: enable table data protocol authentication by default |
099 | DH-13579: Community worker SSL configuration |
098 | Merge updates from 1.20221001.29 |
097 | DH-13612: Fix wrong extension on releasenotes/silverheels/DH-13562. |
096 | DH-13562: Early worker authentication |
095 | DH-13578: Web API support for community workers DH-13556: DnD is ignoring Dispatcher Heap Size |
094 | DH-13565: Add DnD to Jenkins Build |
093 | DH-13317: Remove unused gui components, use DateTimePicker for input-table Date fields |
092 | DH-11669: Fix some javadoc errors. |
091 | DH-13523: Fix Unit test error |
090 | DH-13391: Automatically track local version of DHE |
089 | Merge updates from 1.20221001.23 |
088 | Add DnD build and install README.md. |
087 | DH-13540: Cannot start web-console as non-iris user |
086 | DH-13523: Fix null handing of origin host DH-13437: Initial pass at fishlib to Community Logger Redirection |
085 | DH-13318: Enabled strict-boolean-expression rule in web/client-ui |
084 | Correct release notes headings |
083 | Merge updates from 1.20221001.21 |
082 | DH-13523: Fix gRPC Token parsing reading past end of token |
081 | DH-13523: Do not use Java Serialization for AuthTokens |
080 | DH-13510: Do not have NIO threads wait to lock running put propagation job |
079 | DH-12700: Fix typo in gRPC based Authentication Service migration instructions |
078 | DH-12496: Add "shared console" queries (swing only) DH-13099: Remove network round-trip for new console tables |
077 | Merge updates from 1.20221001.16 |
076 | DH-13492: Update datagen for silverheels |
075 | DH-13499: Fix behavioral change in PersistentQueryControllerClient: ensure failures due to auth disconnects keep being retried. |
074 | DH-12702: Fix Incorrect version suffix for Uri package. Bump shadowed DHC versions to 0.17.0 |
073 | DH-13490: tweak the integration test setup to avoid a service timing issue |
072 | DH-13441: check for null service in RemoteTableDataService.authenticate |
071 | DH-12702: Implement downstream subscriptions to DHC via Barrage |
070 | DH-13419: dhconfig and dhctl should fail fast when authentication server is down (config svc part, closes the ticket). |
069 | DH-13419: dhconfig and dhctl should fail fast when authentication server is down (auth part only). |
068 | DH-13461: Fix bugs in cookie renewal in gRPC auth. |
067 | Merge updates from 1.20221001.15 |
066 | Release notes tweaks. |
065 | Merge updates from 1.20221001.14 |
064 | DH-13444: Fix timeout and logic in auth methods to wait for auth server and/or check for already authenticated |
063 | Merge updates from 1.20221001.11 |
062 | DH-13436: Use LAS for ProcessEventLog from DnD |
061 | DH-13425: Update silverheels release notes for gRPC auth; tweak handling of default private key file property. |
060 | DH-12700: gRPC based Authentication Service. |
059 | DH-12691: Enable Authenticated Table Data Protocol and table permission checking |
058 | DH-13158: TypeScript Conversion for folder 'components' |
057 | DH-13391: Update DnD to DHC 0.17.0, Silverheels 0.56 |
056 | Merge updates from 1.20221001.001 (release creation) |
055 | Merge updates from 1.20210805.332beta |
054 | Merge updates from 1.20210805.328beta |
053 | DH-13118: Add Delta support to updateBy |
052 | DH-13252: Expose Internal Partition in Partitioned tables |
051 | DH-13274: Automation grouping scripts should sort by first key |
050 | Fix LearnDeephaven install script broken by DH-12664 |
049 | Fix bad merge of AsOfJoinHelper in 048. |
048 | Merge updates from 1.20210805.316beta |
047 | DH-12664: Push MatchFilters down into ColumnRegions DH-12633: Write / Use Parquet statistics objects when filtering |
046 | DH-13243: Exclude slf4j from EnterpriseShadowed to fix logging problems. |
045 | DH-13243: Include DB jar in DnD, Update to 0.16.1 |
044 | DH-13213: Update trades-in.bin in C++ BinaryStore Example |
043 | DH-13128: Groovy brought in a private static min() function that a unit test depended on. |
042 | DH-13204: Csv Unit tests failing in Silverheels |
041 | Merge updates from 1.20210805.311beta |
040 | DH-13155: Fix stranded WAuthenticationServer property in iris-defaults.prop |
039 | DH-13128: Remove LiveDBPlot imports in scripts |
038 | DH-13128: Remove LiveDBPlot |
037 | Merge updates from 1.20210805.303beta |
036 | DH-13087: Fix retry logic for expired etcd auth token in RawKV execute implementation |
035 | Merge updates from 1.20210805.300beta |
034 | DH-12960: Move files in DB package onto the standard maven paths. |
033 | DH-12929: Rename shadow grpc prefix from 'dhconfiguration' to 'core'; update jetcd to 0.7.3. |
032 | DH-12993: QueryProcessorRequestHandle Changes for Community unit test fix |
031 | DH-12994: Fix intradayLoggerFactory gradle issues and java dependencies |
030 | DH-12993: QueryProcessorRequestHandle Changes for Community |
029 | Merge updates from 1.20210805.291beta |
028 | Merge updates from 1.20210805.290beta |
027 | Merge updates from 1.20210805.289beta |
026 | Merge updates from 1.20210805.285beta |
025 | DH-12465: MergeData should validate input tables match schema before writing |
024 | Merge updates from 1.20210805.278beta |
023 | Merge updates from 1.20210805.275beta |
022 | Merge updates from 1.20210805.264beta |
021 | DH-12800: getRecord should fail on invalid row |
020 | DH-12686: Clean up API support for Heap Info |
019 | DH-12805: Fix BPIPE standard line value comparisons and be consistent with empty string import in String Columns |
018 | Merge updates from 1.20210805.261beta |
017 | DH-12804: Include Shadowed Enterprise Jars in DnD |
016 | DH-12670: Fix broken sort of public key table |
015 | DH-12670: Public key support in ACL database |
014 | DH-12300: Replace GeneralImporter Parser implementations with DHC Fast CSV Parser |
013 | DH-12795: Provide Separate name for Deephaven Enterprise Configuration File |
012 | Merge updates from 1.20210805.257beta |
011 | DH-12759: Cherry pick parquet improvements from DHC 0.15 |
010 | DH-12743: Upgrade gRPC and jetcd; DH-11713: Conf. Server gRPC retries. |
009 | DH-12686: API Support for Heap and Goto Row |
008 | Merge updates from 1.20210805.255beta |
007 | Merge updates from 1.20210805.252beta |
006 | DH-12723: Fix console error logging in DhcInDhe |
005 | DH-12712: Remove lingering dependencies to fish auth |
004 | DH-12681: Fork fishAuth to AuthLib |
003 | Merge updates from 1.20210805.240beta |
002 | Merge updates from 1.20210805.239beta |
001 | Initial release creation from 1.20210805.235beta |
Modified Bessel correction formula for weighted variance
The weighted variance computation formula has been changed to match that used in the Deephaven Community engine. We now use the standard formula for "reliability weights" instead of the previous "frequency weights" interpretation. This will affect statistics based on variance such as standard deviation.
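For reference, given weights w_i and weighted mean, the two standard textbook interpretations differ only in the Bessel correction (notation here is ours, not taken from the engine source):

$$\bar{x}_w = \frac{\sum_i w_i x_i}{\sum_i w_i}$$

$$s^2_{\mathrm{freq}} = \frac{\sum_i w_i (x_i - \bar{x}_w)^2}{\left(\sum_i w_i\right) - 1} \quad \text{(frequency weights, previous behavior)}$$

$$s^2_{\mathrm{rel}} = \frac{\sum_i w_i (x_i - \bar{x}_w)^2}{\sum_i w_i - \left(\sum_i w_i^2\right)/\sum_i w_i} \quad \text{(reliability weights, new behavior)}$$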
Allow removal of "Help / Contact Support ..." via property
A new property, IrisConsole.contactSupportEnabled, has been added, which may be used to remove the "Help / Contact Support ..." button from the swing front-end. By default, this property is set to true in order to preserve current behavior. Setting this to false in properties will remove the menu option.
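For example, to hide the menu option, add the following line to a properties file:
IrisConsole.contactSupportEnabled=false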
Reduce default max table display size
The maximum number of rows that may be displayed in the swing front-end before the red "warning bar" is displayed is now configurable. A new default maximum has been defined as 67,108,864 (64 x 1024 x 1024). Technical limitations cause rows beyond this limit to not properly update. When necessary, the Web UI is capable of displaying much larger tables than Swing.
The previous default max may be configured with the following property:
DBTableModel.defaultMaxRows=100000000
Note that the property-defined maximum may be programmatically reduced based on technical limits.
Deephaven now supports subplotting in the Web UI
Users now have the ability to view multiple charts subplotted in one figure using the Web UI. Create subplots using the newChart, colSpan, and rowSpan functions available on a Figure. Details are available in the plotting guide.
Example Groovy code of subplots
tt = timeTable("00:00:00.01").update("X=0.01*ii", "Y=ii*ii", "S=sin(X)", "C=cos(X)", "T=tan(X)").tail(1000)
// Figure with single plot
f1 = figure().plot("Y", tt, "X", "Y").show()
// Figure with two plots, one on top of the other
f2 = figure(2, 1)
.newChart(0,0).plot("S", tt, "X", "S")
.newChart(1,0).plot("C", tt, "X", "C")
.show()
// Figure with 3 plots, one that takes up the full width and then two smaller ones
f3_c = figure(2, 2)
.newChart(0,0).plot("T", tt, "X", "T").colSpan(2)
.newChart(1,0).plot("S", tt, "X", "S")
.newChart(1,1).plot("C", tt, "X", "C")
.show()
// Figure with 3 plots, one that takes up the full height and then two smaller ones
f3_r = figure(2, 2)
.newChart(0,0).plot("T", tt, "X", "T")
.newChart(1,0).plot("S", tt, "X", "S")
.newChart(0,1).plot("C", tt, "X", "C").rowSpan(2)
.show()
// Figure with 4 plots arranged in a grid
f4 = figure(2, 2)
.newChart(0,0).plot("Y", tt, "X", "Y")
.newChart(1,0).plot("S", tt, "X", "S")
.newChart(0,1).plot("C", tt, "X", "C")
.newChart(1,1).plot("T", tt, "X", "T")
.show()
// Re-ordered operations from f4, should appear the same though
f5 = figure(2, 2)
.newChart(1,1).plot("T", tt, "X", "T")
.newChart(0,1).plot("C", tt, "X", "C")
.newChart(1,0).plot("S", tt, "X", "S")
.newChart(0,0).plot("Y", tt, "X", "Y")
.show()
Python Integral Widening
In the 1.20211129 release, the jpy module that Deephaven's Python integration depends on converted all Python integral results into Java integers. This resulted in truncated results when values exceeded Integer.MAX_VALUE. In 1.20221001, Deephaven uses an updated jpy integration that returns values in the narrowest possible type, so results that previously were an integer could be returned as a byte or a short. Moreover, a formula may have different types for each row. This prevented casting the result into a primitive type, as boxed objects may not be cast to another primitive.
In 1.20221001.196, Python calls in a formula now widen Byte and Short results to an Integer. If the returned value exceeds Integer.MAX_VALUE, then the result is a Long. Existing formulas that would not have been truncated by conversion to an int in 1.20211129 behave as they would have in that release.
As casting from an arbitrary integral type to a primitive may be required, we have introduced a utility class com.illumon.iris.db.util.NumericCast that provides objectToByte, objectToShort, objectToInt, and objectToLong methods that will convert any Byte, Short, Integer, Long, or BigInteger into the specified type. If an overflow would occur, an exception is thrown.
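A minimal Groovy sketch of the utility (assuming the methods are static; the exact exception type thrown on overflow is not specified here):

import com.illumon.iris.db.util.NumericCast
// Widening conversions succeed
Integer i = NumericCast.objectToInt((short) 7)
Long l = NumericCast.objectToLong(BigInteger.valueOf(42))
// Narrowing a value that does not fit throws an exception
NumericCast.objectToByte(1000)   // 1000 overflows a byte; throws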
Numba formulas (those wrapped in the nb function) retain the narrowing behavior of prior versions of 1.20221001.
Changed to use DHC Fast CSV parser for readCsv
TableTools.readCsv calls now use the new DHC high-performance CSV parser, which uses a column-oriented approach to parse CSV files.
The change to the DHC parser includes the following visible enhancements:
- Any column that is only populated with integers surrounded by white space will be identified as an integer column. The previous parser would identify the column as a double.
- Only 7-bit ASCII characters are supported as valid delimiters. This means characters such as € (euro symbol) are not valid. In these cases the following error will be thrown: "delimiter is set to '€' but is required to be 7-bit ASCII".
- Columns populated wholly with only single characters will be identified as Character columns instead of String columns.
- Additional date time formats are automatically converted to DBDateTime columns. Previously, these formats were imported as String columns. All other date time behavior remains unchanged.
| Format | Displayed Value in 1.20211129 | Data Type in 1.20211129 | Displayed Value in 1.20221001 | Data Type in 1.20221001 |
|---|---|---|---|---|
| DateTimeISO_UTC_1 | 2017-08-30 11:59:59.000Z | java.lang.String | 2017-08-30T07:59:59.000000000 NY | com.illumon.iris.db.tables.utils.DBDateTime |
| DateTimeISO_UTC_2 | 2017-08-30T11:59:59.000Z | java.lang.String | 2017-08-30T07:59:59.000000000 NY | com.illumon.iris.db.tables.utils.DBDateTime |
| DateTimeISO_MillisOffset_2 | 2017-08-30T11:59:59.000-04 | java.lang.String | 2017-08-30T11:59:59.000000000 NY | com.illumon.iris.db.tables.utils.DBDateTime |
| DateTimeISO_MicrosOffset_2 | 2017-08-30T11:59:59.000000-04 | java.lang.String | 2017-08-30T11:59:59.000000000 NY | com.illumon.iris.db.tables.utils.DBDateTime |
To use the legacy CSV parser, set the configuration property com.illumon.iris.db.tables.utils.CsvHelpers.useLegacyCsv to true.
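For example, in a properties file:
com.illumon.iris.db.tables.utils.CsvHelpers.useLegacyCsv=true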
Improvements to command line scripts
Deephaven provides many maintenance and utility scripts in /usr/illumon/latest/bin. This release changes many of these tools to handle configuration files, java path and classpath settings, error handling, and logging more consistently.
Classpaths now include customer plugins and custom jars. This is important for features that can include custom data types, including table definitions and schemas.
For the tools included in this update, there is now a consistent way to handle invalid configuration and other unforeseen errors.
Override the configuration (properties) file
If the default properties file is invalid for some reason, override it by setting DHCONFIG_ROOTFILE. For example:
DHCONFIG_ROOTFILE=iris-defaults.prop /usr/illumon/latest/bin/dhconfig properties list
Add custom JVM arguments
Add java arguments to be passed into the java program invoked by these scripts by setting EXTRA_JAVA_ARGS. For example:
EXTRA_JAVA_ARGS="-DConfiguration.rootFile=foo.prop" /usr/illumon/latest/bin/dhconfig properties list
Scripts included in this update
The following scripts have been updated:
- crcat
- data_routing
- defcat
- delete_schema
- dhconfig
- dhctl
- export_schema
- iriscat
- iristail
- migrate_acls
- migrate_controller_cache
- validate_routing_yml
Remote R Groovy Sessions
The idb.init method now has an optional remote parameter. When set to TRUE, Groovy script code is not executed locally but rather in a remote Groovy session, as is done in the Swing console or Web Code Studio. This eliminates a class of serialization problems that could otherwise occur with a local Groovy session serializing classes to the remote server. To enable this, pass the remote parameter as follows:
idb.init(devroot=devroot, workspace, propfile, keyfile=keyfile, jvmArgs=jvmLocalArgs, remote=TRUE)
Additionally, you may now call idb.close() to terminate the remote worker and release the associated server resources.
LogEntry Interface Change
In the com.fishlib.io.log.LogEntry class, the end() and endl() methods have been changed so that, instead of returning the LogEntry instance on which they are operating, they do not return anything. After these methods have been called, their LogEntry instance should not be operated on; further operations on that LogEntry can introduce run-time issues.
Because of this change, any code that uses the Deephaven logging classes will need to be recompiled. If logging calls rely on the returned LogEntry, they will need to be updated.
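As an illustration, a minimal sketch assuming a typical fishlib logging call site (a Logger named log):

// Still fine: build the entry fluently, then terminate it
log.info().append("rows=").append(rowCount).endl()
// No longer valid once end()/endl() return nothing: the entry cannot be captured for reuse
// def entry = log.info().append("rows=").endl()   // fails to compile after this change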
LiveDBPlot Support Removed
The LiveDBPlot and DBPlot classes have been removed from Deephaven 1.20230131. Users should migrate to new plotting methods introduced in 2017 which provide improved functionality and web support.
The default Groovy session previously inherited a static import for min that returned a double. The min function is no longer referenced from StaticGroovyImports, allowing other min overloads to be accepted; thus min(1, 2) now returns an int rather than a double in the default Groovy session.
Controller Tool "Status" Option
The new --status subcommand for the persistent query controller tool generates a report to standard output with details of selected persistent queries. With --verbose, more details are included. If a query has a failure recorded and only one query is selected, the stack trace is printed after the regular report. Use the --serial option to directly select a specific query.
With --jsonOutput, a JSON block detailing the selected query states is emitted instead of the formatted report. Use --jsonFile to specify an output location other than standard output. An example invocation is shown after the list of breaking changes below.
Possible breaking changes were introduced with this feature:
- Previously (before Silverheels) the flag options --continueAfterError, --includeTemporary, and --includeNonDisplayable required but ignored a parameter. For example, --includeTemporary=false and --continueAfterError=never were both accepted as "true" conditions. In Silverheels, the argument is still required, but only true and 1 will be accepted as true, false and 0 will be accepted as false, and anything else will be treated as a command line error.
- Details of information log entries generated by controller_tool have changed. Important functionality had previously been deferred to after the starting/finished log entries for the corresponding items had been emitted. Those actions are now bracketed by the log marker entries to better inform troubleshooting.
- A warning message is emitted to the console when no queries are processed due to selection (filtering) criteria. An informational console message summarizing the filter actions has also been added.
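For instance, a hypothetical invocation (assuming the controller_tool script lives in /usr/illumon/latest/bin; exact launch mechanics may differ by installation):

/usr/illumon/latest/bin/controller_tool --status --serial 1234567890 --verbose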
Added new DbInternal.ProcessTelemetry system table and TelemetryHelperQuery
The new ProcessTelemetry table enables monitoring UI performance within the Swing front-end. For each user-initiated action, the console logs the duration between the start of the operation and when the table is ready for use. By aggregating this telemetry, an overall picture of the system's health - as perceived by users - is available. Detailed information can then be used to investigate potential performance problems.
To write these events, the Swing console internally buffers the data and then sends it to the new TelemetryHelperQuery query. If the TelemetryHelperQuery is not running, data is buffered up to a configurable limit, at which point the oldest telemetry data is discarded.
Several new startup parameters define the Telemetry behavior for the console:
[service.name=iris_console] {
# Identifies if telemetry metrics should be logged in the DbInternal.ProcessTelemetry table. To enable logging to
# the remote table, set the following property to `true`. Note that telemetry will still be logged to the local
# client-logfile unless disabled with the `enableFor` options described below
Telemetry.remoteTelemetry=false
# Defines the frequency for messages to be sent to the server. Events will be batched and sent periodically. The
# default frequency is 15s
Telemetry.sendFrequencyMs=15_000
# Defines the initial size of the buffer which stores messages to be sent in a batch, defaulting to 1,024
Telemetry.initialSendBuffer=1_024
# Defines the maximum number of messages to store and send in a single batch. New messages appended to the buffer
# after it is full will cause "older" events to be removed. The default maximum value is 10,000
Telemetry.maxSendBuffer=10_000
# A number of classes will attempt to log telemetry locally and to the remote table. Individual classes may be
# prevented from being logged by setting the following to `false`
Telemetry.enableFor.DBTableModel=true
Telemetry.enableFor.IrisTreeTableModel=true
Telemetry.enableFor.DbOneClickPanel=true
}
A new method, telemetrySummary(), which accepts an optional "Date" parameter, has been added to the default Groovy session. The method will provide summary tables derived from the raw Telemetry data.
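For example, from a Groovy console (the date-string format shown is an assumption):

summary = telemetrySummary()                  // summarize all available telemetry
daySummary = telemetrySummary("2023-01-31")   // summarize a single date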
For new installations, an ACL is automatically applied to the DbInternal.ProcessTelemetry table. For an upgrade, an ACL editor must create a new "Table ACL" for the raw DbInternal.ProcessTelemetry data to be seen by unprivileged users. The ACL should be similar to the "allusers/ProcessEventLog" ACL, but for the ProcessTelemetry table:
allusers | DbInternal | ProcessTelemetry | new UsernameFilterGenerator("EffectiveUser")
Parameterized Query Lock changes
Parameterized Queries now use the shared LTM lock instead of the exclusive LTM lock. Query writers may also now instruct the query not to use any of the LTM locks with the requireComputeLock option on the ParameterizedQueryBuilder.
When using no locks, the query writer must ensure that the Parameterized Query Action does not use any methods that require the LTM lock. If this parameter is set to false incorrectly, then results are undefined.
Add support for worker-scope plugin classpaths
Server processes now search /etc/sysconfig/illumon.d/plugins/*/worker for server-only plugin jars and classpath entries, in addition to searching for path items from /etc/sysconfig/illumon.d/plugins/*/global.
While global dependencies are included on both server and client classpaths, worker dependencies are only added to server processes (any process using monit or the iris launch script, as well as any jvm started from a server python session). In particular, the client update service does not make JARs in the worker directory available to the Swing console.
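For example, a server-only jar for a hypothetical plugin named myplugin could be staged as follows (the jar name is illustrative):

sudo mkdir -p /etc/sysconfig/illumon.d/plugins/myplugin/worker
sudo cp myplugin-server-only.jar /etc/sysconfig/illumon.d/plugins/myplugin/worker/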
Deephaven Enterprise supports Downstream Barrage
Deephaven Enterprise now supports subscribing to Barrage tables from Deephaven Community workers: anonymously for regular stock Community workers, and authenticated via a three-way handshake with an authentication server token for DnD workers.
How to use
There are two different ways to subscribe to tables. The first, and simplest, is the URI method. In order to use this method, simply add the following code to your query:
import io.deephaven.uri.UriConfig
import io.deephaven.uri.ResolveTools
UriConfig.initDefault()
This will initialize the integration and prepare the system for connecting to Deephaven Community instances. Next, simply use ResolveTools.resolve(String uri) to subscribe to a table. The supported URIs can be found in the Community Reference.
For example, to subscribe to the table 'MarketData' in the Scope of a Community worker you might use:
MarketData = ResolveTools.resolve("dh://my.server.com:8888/scope/MarketData")
The first part of the URI selects SSL (dh:) or plaintext (dh+plain:); the second is the path to the server and its port. The next part selects either the Query Scope (/scope/) or application scope (/app/my_app/). Finally, the name of the table to subscribe to is the last part of the URI.
Finer Grained Control
The next method is more complicated, but provides finer grained control over the resulting subscription. You must create the individual components of the subscription as well as the gRPC channel and session for the Barrage exchange.
When using this method, take care to reuse the BarrageSession when making further subscriptions to tables within the same worker.
def myhost = 'myserver.com'
def myport = 24003
def isDnd = true // set to false to connect to a stock, non-DnD community worker.
import io.deephaven.client.impl.BarrageSession
import io.deephaven.client.impl.ClientConfig
import io.deephaven.client.impl.ChannelHelper
import io.deephaven.client.impl.SessionImplConfig
import io.deephaven.client.impl.SessionImpl
import io.deephaven.proto.DeephavenChannelImpl
import io.deephaven.qst.table.TicketTable
import io.deephaven.uri.DeephavenTarget
import io.deephaven.shadow.client.flight.io.grpc.ManagedChannel
import io.deephaven.shadow.client.flight.org.apache.arrow.memory.BufferAllocator
import io.deephaven.shadow.client.flight.org.apache.arrow.memory.RootAllocator
import io.deephaven.barrage.BarrageSubscriptionOptions
import io.deephaven.enterprise.auth.AuthenticationClientManager
import io.deephaven.enterprise.auth.DhService
import java.util.concurrent.Executors
import java.util.concurrent.ScheduledExecutorService
bufferAllocator = new RootAllocator()
scheduler = Executors.newScheduledThreadPool(4)
deephavenTarget = DeephavenTarget.builder()
.host(myhost)
.port(myport)
.isSecure(true)
.build()
clientConfig = ClientConfig.builder()
.target(deephavenTarget)
.build()
managedChannel = ChannelHelper.channel(clientConfig)
sessionConfig = SessionImplConfig.builder()
.executor(scheduler)
.channel(new DeephavenChannelImpl(managedChannel))
.build()
authToken = AuthenticationClientManager.getDefault().createToken(DhService.QUERY_PROCESSOR.serviceName())
sessionImpl =
isDnd ? SessionImpl.create(sessionConfig, authToken) : SessionImpl.create(sessionConfig)
session = BarrageSession.of(sessionImpl, bufferAllocator, managedChannel)
// Use a prefix of "s/" before the variable name for the table in the remote worker
MarketData = session.subscribe(TicketTable.of("s/MarketData"), BarrageSubscriptionOptions.builder().build()).entireTable()
Column Rename Support from UI
The Schema Editor now supports the ability to rename a column between the application log file and the table schema. The UI changes include how the data type is handled for new columns in the Logger/Listener Column Details section. The default data type for new columns is not set and instead inherits the Intraday Type.
Below are example schemas for a LoggerListener, and a Listener-only schema, for a table with three columns. The table has three columns (Date, Destination, and SameName) while the logger has two columns (SameName and Source).
- The Date column is not present in the log file, but rather determined by the logging process.
- The SameName column is in both the log file and table schema, and does not need to be transformed.
- The Source column in the logger is renamed as Destination in the table.
To rename the Source column as Destination, the Listener class should include both Source and Destination columns, and their attributes should be:
- Source: A value of none for the dbSetter attribute. This indicates that the column is not present in the table. Additionally, the attribute intradayType should be set to the appropriate dataType.
- Destination: A value of Source for dbSetter to identify its input source. A value of none for intradayType means it is not present in the log file, and cannot be used as part of a dbSetter.
Schema with only Listener class
If the table has an externally generated log file (e.g., with a C++ logger), then you only need to define a Listener block to interpret the log file.
<Table namespace="ExampleNamespace" name="RenameColumn" storageType="NestedPartitionedOnDisk" >
<Partitions keyFormula="${autobalance_single}"/>
<Column name="Date" dataType="String" columnType="Partitioning" />
<Column name="Destination" dataType="int" columnType="Normal" />
<Column name="SameName" dataType="int" columnType="Normal" />
<Listener logFormat="1" listenerPackage="com.illumon.iris.test.gen">
<Column name="Destination" intradayType="none" dbSetter="Source" />
<Column name="SameName" dataType="int" />
<Column name="Source" intradayType="int" dbSetter="none" />
</Listener>
</Table>
Schema with a LoggerListener
If you are generating a Java logger, then you should include a LoggerListener block in your schema.
<Table name="RenameColumn1001" namespace="ExampleNamespace" defaultMergeFormat="DeephavenV1" storageType="NestedPartitionedOnDisk">
<Partitions keyFormula="${autobalance_single}" />
<Column name="Date" dataType="String" columnType="Partitioning" />
<Column name="Destination" dataType="Integer" />
<Column name="SameName" dataType="Integer" />
<LoggerListener logFormat="1" loggerClass="RenameColumn1001Logger" loggerPackage="com.illumon.iris.test.gen"
rethrowLoggerExceptionsAsIOExceptions="false" tableLogger="false" generateLogCalls="true"
verifyChecksum="true" listenerClass="RenameColumn1001Listener" listenerPackage="com.illumon.iris.test.gen">
<SystemInput name="Source" type="int" />
<SystemInput name="SameName" type="int" />
<Column name="Destination" intradayType="none" dataType="int" dbSetter="Source" />
<Column name="SameName" dataType="int" />
<Column name="Source" dataType="int" dbSetter="none" />
</LoggerListener>
</Table>
Renaming column examples for Blob and String data types
The above pattern can be followed for all data types except for Blob and String. The differences are detailed below.
Renaming a column with a Blob data type
To rename a column of a Blob data type, users need to provide the actual data type of the data stored in the Blob. The Edit Logger/Listener Column UI now provides the ability to edit the data type field when the Intraday Type field is Blob. Note the data type field can only be changed if the currently displayed value is none. In addition to setting the data type for the application log file column, the Listener column's dbSetter attribute must explicitly invoke a cast, as shown in the example schema below. The example below shows Destination, SameName, and Source columns of data type java.util.List.
<Table name="RenameColumn101" namespace="ExampleNamespace" defaultMergeFormat="DeephavenV1" storageType="NestedPartitionedOnDisk">
<Partitions keyFormula="${autobalance_single}" />
<Column name="Date" dataType="String" columnType="Partitioning" />
<Column name="Destination" dataType="java.util.List" />
<Column name="SameName" dataType="java.util.List" />
<LoggerListener logFormat="1" loggerClass="RenameColumn101Logger" loggerPackage="com.illumon.iris.test.gen"
rethrowLoggerExceptionsAsIOExceptions="false" tableLogger="false" generateLogCalls="true" verifyChecksum="true"
listenerClass="RenameColumn101Listener" listenerPackage="com.illumon.iris.test.gen">
<SystemInput name="Source" type="java.util.List" />
<SystemInput name="SameName" type="java.util.List" />
<Column name="Destination" intradayType="none" dataType="java.util.List" dbSetter="(java.util.List)blobToObject(Source)" />
<Column name="SameName" intradayType="Blob" dataType="java.util.List" autoBlobInitSize="32000" />
<Column name="Source" intradayType="Blob" dataType="java.util.List" dbSetter="none" autoBlobInitSize="256" autoBlobMaxSize="32000" />
</LoggerListener>
</Table>
Renaming a column with a String data type
The basic steps are similar to the previous examples, except the Intraday Type for a String column is EnhancedString. This is reflected in the options available for Intraday Type; the list of valid Intraday Type options no longer includes String. The dbSetter value for the Destination column should include a toString() on the setter value, as shown in the example below.
<Table name="RenameColumn02" namespace="ExampleNamespace" defaultMergeFormat="DeephavenV1" storageType="NestedPartitionedOnDisk">
<Partitions keyFormula="${autobalance_single}"/>
<Column name="Date" dataType="String" columnType="Partitioning"/>
<Column name="Destination" dataType="String"/>
<Column name="SameName" dataType="String"/>
<LoggerListener logFormat="1" loggerClass="RenameColumn102Logger" loggerPackage="com.illumon.iris.test.gen"
rethrowLoggerExceptionsAsIOExceptions="false" tableLogger="false" generateLogCalls="true"
verifyChecksum="true" listenerClass="RenameColumn102Listener"
listenerPackage="com.illumon.iris.test.gen">
<SystemInput name="Source" type="java.lang.String"/>
<SystemInput name="SameName" type="java.lang.String"/>
<Column name="Destination" intradayType="none" dataType="java.lang.String" dbSetter="Source.toString()"/>
<Column name="SameName" dataType="java.lang.String"/>
<Column name="Source" intradayType="EnhancedString" dataType="java.lang.String" dbSetter="none"/>
</LoggerListener>
</Table>
Fix concurrency error in Table deserialization
When multiple Tables containing the same type of SparseArrayColumnSource were concurrently deserialized, the reader (e.g., a query worker) had a static object that was incorrectly used across threads. This change corrects that error, which could result in corrupted tables or exceptions during the deserialization process.
Kubernetes Worker Creation Parameters
The Query Dispatcher now supports changing a limited number of Kubernetes parameters when creating a worker:
- CPU Shares controls how many cores are assigned to the worker. If no value is specified, then the pod is created without a CPU request or limit, using the default value from your namespace.
- Container image provides a path to the image that should be used for this worker.
- Pod Template is a file (on the dispatcher pod) that is used to construct the worker. Changing the pod template provides the utmost flexibility; additional storage, labels, or resources can be added to the worker's pod.
By default, the query server allows the user to specify the CPU shares, but not the container image or pod template, as those would allow the user to execute arbitrary code on the cluster. The merge server allows specification of all three values, as it is by default restricted to users in the iris-schemamanagers group. To change the permitted Kubernetes control parameters, you can set the Kubernetes.workerValidator property. The three built-in values are:
- AllowCpu: CPU shares may be specified, but pod template and container image may not
- AllowAll: All parameters are permitted
- DenyAll: No changes are permitted
Additionally, if the validator property begins with class:, the remainder is treated as the name of a class that implements com.illumon.iris.db.tables.remotequery.process.K8SWorkerValidator; it will be instantiated using the zero-argument constructor.
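For example, in a properties file (the custom class name below is hypothetical):

Kubernetes.workerValidator=AllowCpu
# or delegate to a custom implementation:
# Kubernetes.workerValidator=class:com.example.MyWorkerValidator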
When starting a console from Swing on a Kubernetes cluster, the "Show Advanced Options" checkbox will display three new fields for the CPU Shares, Pod Template, and Container Image. Similarly, when editing a persistent query from Swing, click the "Show Advanced Options" checkbox on the "Settings" screen to change the Kubernetes parameters. Editing these parameters is not yet available from the Deephaven Web UI.
From the "Query Monitor" tab in the Web UI or the "Query Config" panel in Swing, the new parameters are listed as "CPUShares", "ContainerImage" and "PodTemplate". In the DbInternal.PersistentQueryConfigurationLogV2
table, the values are stored as JSON in the "KubernetesControl" column.
The format of the PersistentQueryConfigurationLogV2 has changed; therefore, you will need to move your old data to a new partition or follow to-be-determined upgrade steps.
Support for Dynamic Data Routing
Endpoints (host and port) for ingestion servers can now be defined as dynamic endpoints. The host and port for dynamic endpoints are determined at runtime. The routing YAML file format changes are backward compatible so all existing routing configurations are still valid. Deephaven suggests using dynamic endpoints for in-worker ingestion servers (e.g., LastBy DISes or KafkaIngesters).
The new routing YAML format includes the endpoint tag:
endpoint:
serviceRegistry: registry
# other tags specific to the service
When the endpoint tag is present, endpoints must be defined in this section, and the legacy host and port tags are invalid. Invalid YAML configurations cannot be imported.
The change in an example data routing file for a DIS block is as follows:
Legacy vs New format differences in a DIS block
In the legacy format, a DIS block (named SimpleLastBy) would be defined as below. Note the host, tailerPort, and tableDataPort tags, and the port under webServerParameters.
SimpleLastBy:
host: *dh-import
tailerPort: 22222
tableDataPort: 22223
...
webServerParameters:
enabled: true
port: 8086
The same configuration in the new format would look like this:
SimpleLastBy:
endpoint:
host: *dh-import
serviceRegistry: none
tailerPortEnabled: true
tailerPort: 22222
tableDataPortEnabled: true
tableDataPort: 22223
...
webServerParameters:
enabled: true
port: 8086
...
The same configuration can be rewritten to have dynamic endpoints by making the following changes:
SimpleLastBy:
endpoint:
serviceRegistry: registry
tailerPortEnabled: true
tableDataPortEnabled: true
...
webServerParameters:
enabled: true
...
The new format features a mandatory endpoint block, which includes a serviceRegistry tag. For a DIS, the tailerPortEnabled and tableDataPortEnabled tags are also required. You may optionally specify the host, tailerPort, and tableDataPort tags. If the serviceRegistry is none, the host and port must be included in the YAML routing file. If the value is registry, the process registers enabled ports with the configuration server, and clients retrieve them from the configuration server at runtime.
Summary of legacy and new tags
The following configuration blocks show:
- The legacy tags, which continue to be accepted unless an endpoint is also defined for the same service.
- The type of ports that may be enabled for each service.
- The optional tag to provide a static port for a service.
Data Import Server
SimpleLastBy:
#host: *dh-import # LEGACY TAG CONFLICTS WITH ENDPOINT
#tailerPort: 22222 # LEGACY TAG CONFLICTS WITH ENDPOINT
#tableDataPort: 22223 # LEGACY TAG CONFLICTS WITH ENDPOINT
endpoint:
serviceRegistry: registry
tailerPortEnabled: true
tableDataPortEnabled: true
# Below optional tags are required if serviceRegistry = none
host: *dh-import # OPTIONAL TAG TO DEFINE HOST STATICALLY
tailerPort: 22222 # OPTIONAL TAG TO DEFINE PORT STATICALLY
tableDataPort: 22223 # OPTIONAL TAG TO DEFINE PORT STATICALLY
...
webServerParameters:
enabled: true
port: 8083 # OPTIONAL TAG TO DEFINE PORT STATICALLY
...
Log Aggregator Servers
- rta:
#port: *default-lasPort # LEGACY TAG CONFLICTS WITH ENDPOINT
#host: *localhost # LEGACY TAG CONFLICTS WITH ENDPOINT
endpoint:
serviceRegistry: registry
# Below optional tags are required if serviceRegistry = none
host: *localhost # OPTIONAL TAG TO DEFINE HOST STATICALLY
port: *default-lasPort # OPTIONAL TAG TO DEFINE PORT STATICALLY
...
Table Data Services
db_ltds:
#host: *iris-dis # LEGACY TAG CONFLICTS WITH ENDPOINT
#port: *default-localTableDataPort # LEGACY TAG CONFLICTS WITH ENDPOINT
endpoint:
serviceRegistry: registry
# Below optional tags are required if serviceRegistry = none
host: *iris-dis # OPTIONAL TAG TO DEFINE HOST STATICALLY
port: *default-localTableDataPort # OPTIONAL TAG TO DEFINE PORT STATICALLY
...
Deephaven services will use and register ephemeral ports if the port tags are omitted.
DataRoutingService moved to com.illumon.iris.db.v2.routing
The DataRoutingService, DataRoutingServiceFactory, and other related classes have moved from com.illumon.iris.db.v2.configuration to com.illumon.iris.db.v2.routing. Customer scripts (e.g., in-worker DISes) which refer to the old packages must be updated. This will affect any scripted in-worker DIS persistent queries.
Persistent query scripts that use these imports can be found with the following command.
sudo -u irisadmin /usr/illumon/latest/bin/iris iris_query_grep "com.illumon.iris.db.v2.configuration"
Kafka resumeFrom should fail when previous checkpoint does not exist
When calling io.deephaven.kafka.ingest.ResumeImportFrom.resumeFrom(...), a ResumeImportFrom.NoPriorPartitionsFoundException will be thrown if no prior partition is found. This prevents inadvertently setting the broker's offset to 0.
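For example, callers that previously relied on a silent restart from offset 0 can catch the new exception and choose a fallback explicitly. A minimal sketch (the arguments to resumeFrom are elided, and the fallback comment is illustrative):
import io.deephaven.kafka.ingest.ResumeImportFrom;
try {
    ResumeImportFrom.resumeFrom(/* ingester arguments elided */);
} catch (ResumeImportFrom.NoPriorPartitionsFoundException e) {
    // No prior checkpoint exists; start a fresh import deliberately
    // rather than implicitly resetting the broker offset to 0.
}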
Table Access Control List implementation for DnD
Table access control has been added for DHC in DHE (DnD). The implementation attempts to create Community native filters using the FilterGenerator.getCommunityFilterGenerator(...) method. When conversion is not possible, Enterprise SelectFilters are adapted into Community WhereFilters. Where possible, direct analogues such as MatchFilter and RangeFilter are used. When no direct analogues exist, filters are wrapped using the EnterpriseFilterAdapter class.
Enterprise API modifications
Several core Deephaven Enterprise interfaces have been modified to make this possible.
LiveTableMonitor
When running a Community worker, the LiveTableMonitor should never be started. This has been made more explicit with the addition of the LiveTableMonitor.DEFAULT.disable() method, which is called during worker startup. This makes any invocation of LiveTableMonitor.DEFAULT.start() throw an exception.
There are some features that require the LiveTableRegistrar and NotificationQueue features of the LTM to function, so two more methods, LiveTableMonitor.DEFAULT.setDefaultRegistrar(LiveTableRegistrar) and LiveTableMonitor.DEFAULT.setDefaultNotificationQueue(NotificationQueue), have been added and are used to redirect those functions to Community's UpdateGraphProcessor, using a few adapter classes to make it seamless.
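Conceptually, DnD worker startup performs a redirection like the following sketch. Only the LiveTableMonitor methods are from this release; the adapter constructor shapes are assumptions for illustration:
// Illustrative sketch of the redirection performed at DnD worker startup.
LiveTableMonitor.DEFAULT.disable(); // any later call to LiveTableMonitor.DEFAULT.start() now throws
LiveTableMonitor.DEFAULT.setDefaultRegistrar(
        new LiveTableRegistrarAdapter(UpdateGraphProcessor.DEFAULT)); // adapter constructor assumed
LiveTableMonitor.DEFAULT.setDefaultNotificationQueue(
        new NotificationQueueAdapter(UpdateGraphProcessor.DEFAULT)); // adapter constructor assumed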
LiveTable
To support the redirection of the LTM to the UGP, LiveTable has been augmented to also extend Runnable, so instances can be used directly with the UGP. The run() method has a default implementation and is annotated with @FinalDefault. It must not be overridden.
LiveTableRegistrar & NotificationQueue
Helper methods that existed in LTM for adding collections of objects have been promoted to these interfaces with default implementations.
PermissionFilterProvider
In order to make it simpler to adapt Enterprise ACLs onto Community tables, this interface has been augmented with PermissionFilterProvider.getFilterGenerators(namespace, tablename) so that DnD can attempt to create Community native filters, rather than wrapping Enterprise objects in adapter classes. Also, the PermissionFilterProvider.getRawColumnAcls(namespace, tablename) method has been added to make it easier for DnD to create native Community column ACLs.
PermissionFilterProviderCommonMethods
This class has been removed entirely and the methods migrated into AclHelper, since they are needed by DnD as part of the conversion process.
SelectFilter
SelectFilter has had its SelectFilter.getTable() method's return value changed from QueryTable to DynamicTable.
A new method, SelectFilter.requiresFullSet(), has been added to indicate that the filter requires the fullSet parameter passed into the SelectFilter.filter() method. A default implementation that returns true has been added, but any custom implementations should override this if they do not need this parameter.
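For instance, a custom filter that never reads the fullSet parameter could opt out as follows (a minimal sketch; the base class and the remaining overrides are elided and illustrative):
// Hypothetical custom filter that does not use the fullSet parameter of filter().
public class MyCustomFilter extends SelectFilterImpl { // base class is illustrative
    @Override
    public boolean requiresFullSet() {
        // Returning false indicates this filter never reads fullSet,
        // letting the engine avoid providing it.
        return false;
    }
    // ... remaining SelectFilter methods elided ...
}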
AclHelper
The AclHelper class has inherited the methods from the deleted PermissionFilterProviderCommonMethods class. Also, the methods that were used to generate column ACLs were refactored heavily so that native Community filters can be created for column ACLs. More specifically, the AclHelper.makeColumnACLs() method (formerly from PermissionFilterProviderCommonMethods) is now templated on the ACL definition type, the Filter type, and the result type, and requires two new factory method parameters, filterFactory and resultFactory. This modification lets DnD directly convert the ACL representations in the ACL database into Community native filters without using adapter classes and introspection.
To support the above, the AclHelper.createGroupToColumnToAclMap() method was added so that the ACL database string representations can be converted into a map of group -> Columns -> filter set.
Finally, one more method, AclHelper.checkAccess(), has been added so that quick decisions about a user's access to a specific table can be made without having to create actual filters.
FilterGenerator
To support the AclHelper.checkAccess() method, FilterGenerator has also been augmented with the same method so that each FilterGenerator implementation can indicate the type of access it will restrict the specified user to. The default implementation of this method simply delegates to FilterGenerator.generateTable(). Custom implementations should override this default where possible.
A new method, FilterGenerator.getCommunityFilterGenerator(), has been added as a hook for DnD to directly convert Enterprise FilterGenerators into Community native filters. Custom implementations may override this to provide direct conversions to Community. Certain existing Enterprise generators, such as WhereInFilterGenerator, DisjunctiveFilterGenerator, and ConjunctiveFilterGenerator, override this method and reflexively instantiate the DnD analogues. The default implementation of this method returns null, which results in the FilterGenerator being adapted using the FilterGeneratorAdapter class.
Lastly, the method
SelectFilter[] generateFilter(@NotNull Database database,
@NotNull PermissionFilterProvider permissionFilterProvider,
@NotNull UserContext userContext,
@Nullable String namespace,
@Nullable String tableName)
has been removed.
Java Security Manager
The Java Security Manager has been deprecated in Java 17. All usages have been removed.
API Adapters and Limitations
In order to support the conversion of Enterprise ACLs to Community, several adapter classes have been created to wrap Enterprise and Community analogues and implement the opposite interface, so that instances can be used interchangeably. For various reasons, complete 1:1 adaptation is not always possible and UnsupportedOperationExceptions must be thrown. These adapter classes are detailed below.
Shadowing
DnD is implemented by shadowing the Enterprise code, which essentially renames all the internal packages with a new prefix so that the class path is not polluted by identically named classes from different modules stepping on each other. This creates a few problems, because Enterprise uses Java serialization for various protocols and on-disk representations. To account for this, the ShadowCompatibility package contains a few classes for automatically changing serialized package prefixes into their shadowed versions. For the most part, this is restricted to ObjectCodec classes used for serializing custom objects into table columns.
Filters
When native Community filters cannot be directly derived from Enterprise ACLs, the Enterprise filters are adapted using EnterpriseFilterAdapter, which wraps Enterprise filters and implements the Community WhereFilter. The WhereFilter.filter() method must further convert the provided RowSets to Index and the Community Table to the Enterprise Table. This conversion involves completely iterating and copying the RowSet to Index and back again after the filter has completed. This is inefficient, and can be alleviated by overriding the FilterGenerator.getCommunityFilterGenerator() method noted in the sections above.
Furthermore, the Table adaptation is done via TableAdapterForFilters, in which nearly the entire Table interface will throw UnsupportedOperationExceptions, aside from Table.size(), Table.getDefinition(), attribute getters, and ColumnSource getters. ColumnSources are further adapted using ColumnSourceAdapterForFilters, detailed below.
Column Sources (for filters)
In order to adapt filters, the ColumnSource classes must be adapted. For the most part, these are identical, and the adapter simply delegates to the Enterprise delegate. The chunking APIs between Community and Enterprise, however, can be adapted without having to copy data between the different chunk types. The ColumnSourceAdapterForFilter class takes the Community Chunks and 'steals' their underlying arrays, wrapping them in ResettableChunk or ResettableWritableChunk. This is done by the ChunkAdapter implementations in io.deephaven.dhe.compatibility.chunk.
RowSet, RowSequence, and OrderedKeys
Community's RowSet is equivalent to Enterprise's Index, but the implementations are incompatible, so they must be copied using the iteration methods. Helper methods for this exist in RowSetCompatibility. Enterprise's OrderedKeys and Community's RowSequence are functionally equivalent and only require a delegation wrapper. These are implemented by OrderedKeysToRowSequenceAdapter.
LiveTableRegistrar and NotificationQueue
Since the LTM is disabled for DnD workers, classes that use the LiveTableRegistrar and NotificationQueue must be redirected to Community's UpdateGraphProcessor. There are some differences between these APIs; however, they are adapted using the LiveTableRegistrarAdapter and NotificationQueueAdapter, which replace the LTM's default values as described in the LiveTableMonitor section above. These simply redirect the equivalent calls onto the UpdateGraphProcessor.
Refresh overhead in batch queries improved
By default, batch queries (i.e., RunAndDone queries) retrieve tables with db.getIntradayTable (db.i) statically. By producing a static table instead of a refreshing table, the query engine is able to skip creating the data structures required for incremental computation. The result is lower resource usage, such as time and memory, during batch query execution.
This behavior can be disabled by setting the JVM property RunAndDoneSetupQuery.liveTables=true.
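For example, the property can be supplied as a JVM argument wherever the query's extra JVM arguments are configured (illustrative):
-DRunAndDoneSetupQuery.liveTables=true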
SchemaDiscovery builders require a partitioning column to be defined
Since the schema-discovery tools are intended for use with a Kafka ingester, these generated schemas should be defined as System tables. System tables cannot be written with storageType="SplayedOnDisk", and therefore MUST have a partitioning column defined. If a user attempts to generate a schema without first defining a .columnPartition(...), an exception will be thrown stating that the field is required.
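A hedged sketch of the requirement (the builder entry point and the other calls are illustrative; only .columnPartition(...) is named by this release):
// Illustrative schema-discovery builder usage; names other than
// columnPartition(...) are assumptions, not the actual API.
schemaDiscoveryBuilder
        .namespace("Kafka")           // hypothetical
        .tableName("Orders")          // hypothetical
        .columnPartition("Date")      // required: omitting this now throws an exception
        .build();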
Added StaticGroupFilterGenerator class
A new StaticGroupFilterGenerator class is available for managing data access in environments where multiple users access the same table.
This class creates a match filter for the groups to which the user belongs. By default, this filters the Group column, but an optional column name can be passed in to the generator.
- Unlike the GroupFilterGenerator, the user's group list is determined at table access time and is not updated if the user's group membership changes. The user's group that matches their username is excluded from the filter, which allows the filter to be memoized for two users who belong to the same groups. The GroupFilterGenerator, by contrast, is responsive to on-the-fly changes and allows a username to be the group being filtered on, but cannot memoize across users (as each user's groups may change independently).
Updates to iris_db_user_mod CLI tool
The iris_db_user_mod tool now has the ability to import and remove challenge-response public keys directly into the ACL store using the -import_key and -delete_key options. Additionally, per-user passwords may be set non-interactively through this tool during initial -create_user or via -set_password.
Challenge Response Public Keys
Public keys may be added to and deleted from the ACL store using the following commands:
sudo -u irisadmin /usr/illumon/latest/bin/iris iris_db_user_mod -import_key /path/to/key-file.base64.txt
sudo -u irisadmin /usr/illumon/latest/bin/iris iris_db_user_mod -delete_key /path/to/key-file.base64.txt
The key may be either the public or the private key created by the generate-iris-keys utility.
Non-interactive Password Update
A password may be included during a user's creation or after the fact, where the password argument is the base64-encoded password value (be sure to use the -n option if required when echoing to base64, so that the newline is not included in the encoding):
# Note that `echo -n abcd | base64` prints out “YWJjZA==”
sudo -u irisadmin /usr/illumon/latest/bin/iris iris_db_user_mod -create_user -user new_user -group a_group -password YWJjZA==
sudo -u irisadmin /usr/illumon/latest/bin/iris iris_db_user_mod -set_password -user new_user -password YWJjZA==
Apache Arrow support
Deephaven now supports reading Apache Arrow files using the com.illumon.iris.db.arrow.ArrowTools class.
The new ArrowTools.readArrow method accepts a path to an .arrow file and returns a Table.
Many types are supported, including int, long (and all others which resemble Java primitives), LocalDateTime, LocalDate, Timestamp, BigDecimal, and more.
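A minimal usage sketch (assuming a String-path overload; the file path is illustrative):
import com.illumon.iris.db.arrow.ArrowTools;
import com.illumon.iris.db.tables.Table;
// Read an Arrow file into a Deephaven Table.
final Table arrowTable = ArrowTools.readArrow("/data/example.arrow");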
Queries should route data through the TDCP running on the same machine
The Table Data Cache Proxy is designed to cache data for local use. There is an instance on each query node, and the design intention is that queries route requests through the instance running on the same machine. An error in the Deephaven installer might have configured the system data routing so that all requests are routed to the infra node instead.
To check your data routing file and correct it if necessary, follow the instructions below.
The db_tdcp entry under tableDataServices: should have a host entry like the following, not the infra node address.
# Configuration for the TableDataCacheProxy named db_tdcp, and define
# the data published by that service.
# There is typically one such service on each worker machine.
db_tdcp:
host: *localhost
Running the New Client Update Service
The client update service (CUS) has been integrated into the same web server as the Web API service. When enabled, this eliminates the need to use two separate web ports on the infrastructure server and simplifies updating TLS certificates.
How to set up
The new CUS is disabled by default. To enable it:
- Edit iris-environment.prop to set the "Webapi.server.cus.enabled" configuration property to "true".
- Navigate to your infra node and stop the client_update_service:
sudo -u irisadmin monit stop client_update_service
- Restart the web_api_service:
sudo -u irisadmin monit restart web_api_service
- Remove the client_update_service from monit:
sudo -u irisadmin mv /etc/sysconfig/deephaven/monit/cus.conf /etc/sysconfig/deephaven/monit/cus.conf.disabled
What has changed
The old CUS is a stand-alone process with its own web server, and is accessed from a separate port from the Web API Server. The old CUS served files from the /var/www/lighttpd/iris/iris/ directory, which it built on start up. This meant that it needed to be restarted to make new files or configuration available to a user using the legacy Swing Launcher.
The new CUS uses the same web server as the Web API Service and is accessed on the same host and port (by default, 8123).
Instead of restarting the Web API Service to make new files available to Swing clients, you can navigate to the URL https://<WEB_HOST:WEB_PORT>/reload. This will build a new directory inside a temporary location from which the CUS will serve files.
The webpage for downloading the legacy Swing launcher has moved to https://<WEB_HOST:WEB_PORT>/launcher.
The old webpage, https://<CUS_HOST:CUS_PORT>/iris/, will remain accessible as long as the client_update_service process is running. Here, CUS_HOST is the infra node and CUS_PORT is the port used by the old CUS (by default, 8443).
Changes to default users
The default users superuser and illumon will no longer be created on new installs. The user iris will continue to be created, but will not have a default password, and therefore cannot be used for interactive login by default.
In order to log in interactively, a new user may be created (via the command line) and/or the iris user may be re-enabled for interactive login.
Create new interactive superuser
A new interactive superuser may be created with the following commands, where ${user} is substituted with an appropriate user name:
sudo -u irisadmin /usr/illumon/latest/bin/iris iris_db_user_mod -create_user -user ${user} -group iris-acleditors,iris-schemamanagers,iris-superusers
sudo -u irisadmin /usr/illumon/latest/bin/iris iris_db_user_mod -set_password -user ${user}
This will prompt for the new user's password.
Re-enable interactive login for iris user
In order to re-enable the iris user for interactive login, execute the following command, and provide a password when prompted:
sudo -u irisadmin /usr/illumon/latest/bin/iris iris_db_user_mod -set_password -user iris
Pre-existing systems
The users will not be removed on updates if they already exist. The users may be removed by deleting them from /etc/sysconfig/deephaven/illumon.d.latest/resources/authusers.txt. Note that removing the iris user from this file will disable the interactive login capability for that user. In order to re-enable interactive capability for iris, create a plain-text file at a known location on the server, consisting of the lines:
-delete_user -user iris
-create_user -user iris -group iris-acleditors,iris-schemamanagers,iris-superusers
-set_password -user iris
and execute the following command, which should reference the file created above and will prompt for the iris user's desired password:
sudo -u irisadmin /usr/illumon/latest/bin/iris iris_db_user_mod --direct /known/path/to/file
C++ Example Binary Log Format Change
The BinaryStoreWriter.cpp example has been updated to use the standard Deephaven date-time format when it generates binary log filenames, so that tailers will find the files.
If this file was used when creating a C++ logging application, consider updating the application to use the latest version of BinaryStoreWriter.cpp.
Parquet Files Properly Use Codecs
A bug was fixed where ParquetTools.writeTable() would not respect the codec settings set by TableDefinitions. Data written before this fix would have been written with the ExternalizableCodec if the object was Externalizable, or the SerializableCodec if the object was Serializable; for this reason, the data will still be readable. Data written after this fix will use the properly assigned codec.
Upgrade to etcd version 3.5.5
We are upgrading the version of etcd that is used during development and certification, and that is installed by default when an installation does not provide etcd before the Deephaven installation scripts are run.
Upgrade etcd manually
Deephaven is upgrading our etcd version from 3.3.18 to version 3.5.5.
The Deephaven installation process will not automatically upgrade etcd on an existing installation. This is intentional, because we don't want to make assumptions about other possible etcd usage. However, since we are now certifying Deephaven with etcd version 3.5.5, we strongly recommend that existing installations upgrade to this version of etcd.
The following set of commands can be used as an example during an upgrade. The application should be stopped before upgrading etcd.
ETCD_VERSION=3.5.5
ETCD_TAR_NAME=etcd-v${ETCD_VERSION}-linux-amd64.tar.gz
upstream_download_url="https://storage.googleapis.com/etcd"
etcd_local=/tmp/$ETCD_TAR_NAME
wget --timeout=60 --waitretry=1 -t 5 -q "${upstream_download_url}/v${ETCD_VERSION}/${ETCD_TAR_NAME}" -O $etcd_local
mkdir -p /tmp/etcd_install
cd /tmp/etcd_install
tar xvf $etcd_local
sudo systemctl stop dh-etcd
cd etcd-v${ETCD_VERSION}-linux-amd64
# Save a backup of the old etcd in case something goes wrong
sudo mv /usr/bin/etcd /etc/sysconfig/deephaven/backups/etcd-3.3
sudo mv /usr/bin/etcdctl /etc/sysconfig/deephaven/backups/etcdctl-3.3
sudo chown root:root etcd
sudo chown root:root etcdctl
sudo chown root:root etcdutl
sudo mv etcd etcdctl etcdutl /usr/bin
sudo systemctl start dh-etcd
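After restarting, an optional sanity check confirms the new binaries are in place:
etcd --version
etcdctl version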
Dependency Updates
Several dependencies have been updated to address published CVEs. User scripts which imported the old version will now use the new version.
Package | Old Version | New Version |
---|---|---|
Jetty | 9.4.44.v20210927 | 9.4.49.v20220914 |
mysql | 8.0.28 | 8.0.31 |
org.apache.httpcomponents:httpclient | 4.5.6 | 4.5.13 |
org.apache.kafka:kafka-clients | 2.4.0 | 3.3.1 |
org.yaml:snakeyaml | 1.30 | 1.33 |
Several shadowed dependencies have also been updated. User scripts are less likely to use them directly, but they may be used in ingestion scripts (in particular, Avro).
Shadowed Package | Old Version | New Version |
---|---|---|
io.fabric8:kubernetes-client | 5.11.2 | 6.2.0 |
Jackson* | 2.13.1 | 2.14.1 |
org.apache.avro:avro | 1.11.0 | 1.11.1 |
Note: Jackson is included in several other shadowed Jars with different versions. This is the top-level Jackson that is used by Deephaven internally.
Update Apache commons-text-1.9 to 1.10.0
The Apache commons-text package had a critical CVE reported by the OWASP dependency scan, which is corrected in 1.10.0. Deephaven now packages 1.10.0 instead of 1.9; user scripts that import commons.text will use the new version. For more information on changes, see the Apache Commons Text Changes.
iris_keygen.sh updates
/usr/illumon/latest/install/iris_keygen.sh has been modified:
- If the executing user is irisadmin or dbmerge, it will attempt to execute commands that require irisadmin or dbmerge rights without sudo. If the executing user is neither of these, then, as in previous versions, the tool will check that the user has rights to sudo -u as both of these users, and will then use sudo -u irisadmin or dbmerge, as appropriate, to execute other commands.
- Logging has been changed from /usr/illumon/latest/install/command to /var/log/deephaven/misc.
- A new option, --alt-cert-dir, has been added, allowing the user to specify an alternate location for tls.crt and tls.key files; by default these are expected to be in /etc/deephaven/cus-tls.
Enterprise Database Object
The DnD worker now supports reading data from the Enterprise database, in both the on-disk historical formats as well as subscription-based live tables backed by Enterprise's Table Data Protocol.
When a DnD worker is started, it now includes a variable db in the scope that provides access to the Enterprise data. The db object is an instance of io.deephaven.enterprise.database.Database and contains two methods:
/**
* Fetch a live {@link Table} for the specified namespace and table name.
*
* @param namespace the Namespace in which the table exists
* @param tableName the name of the table in the Namespace.
* @return a new live {@link Table} for the specified parameters.
*/
Table liveTable(@NotNull String namespace, @NotNull String tableName);
/**
* Fetch a static historical {@link Table} from the database.
*
* @param namespace the Namespace in which the table exists
* @param tableName the name of the table in the Namespace.
* @return a new static {@link Table} for the specified parameters.
*/
Table historicalTable(@NotNull String namespace, @NotNull String tableName);
The liveTable method is used to access live (intraday) data, and is the equivalent of the db.i() method of an Enterprise worker.
The historicalTable method is used to access static historical data, and is the equivalent of the db.t() method of an Enterprise worker.
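For example, from a DnD worker script (the namespace and table names are illustrative):
// Fetch live (intraday) and static historical tables from the Enterprise database.
final Table live = db.liveTable("MarketUs", "Trades");        // analogous to db.i(...)
final Table hist = db.historicalTable("MarketUs", "Trades");  // analogous to db.t(...)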
Fixed Errors in TableMap Liveness management
When TableMaps were exported, they were always managed (via liveness), regardless of whether they were refreshing.
When a derived object is created from a LivenessObject (e.g., a TableMap created from a Table), clients must invoke LivenessManager#manage(LivenessReferent) on the parent object if and only if the parent object is itself live. This can be checked via DynamicNode#isDynamicAndIsRefreshing(Object).
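A minimal sketch of that check (the variable names are illustrative):
// Only manage the parent if it is actually dynamic and refreshing (i.e., live).
if (DynamicNode.isDynamicAndIsRefreshing(parent)) {
    livenessManager.manage((LivenessReferent) parent);
}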
Persistent Query Export Storage Format
Persistent Queries are exported as an XML version 1.0 file, which does not allow certain characters that may be found in script code. These include the control characters with a numeric value below 0x20, except 0x09 (Horizontal Tab), 0x0A (Line Feed), and 0x0D (Carriage Return).
With this change, when a persistent query containing any invalid characters is exported, the script code will be base64 encoded in the resulting XML file to prevent the controller from crashing by attempting to create an invalid XML document.
Early Worker Authentication
RemoteQueryProcessor now authenticates early in its normal initialization cycle. Previously, remote query clients would trigger authentication by executing PresentDelegateTokensQuery; that class has now been removed.
RemoteProcessingRequest used to have one AuthToken parameter, intended to present credentials to the dispatcher; it now contains two AuthToken parameters, with the second intended to be used by the RemoteQueryProcessor for delegate authentication. If authentication fails, the worker fails to start.
Example code using the new API in RemoteQueryClient:
QueryProcessorConnection worker = tokenFactory.tryGetWithToken(dispatcherToken -> {
final AuthToken processorToken = AuthenticationClientManager.getDefault().createTokenForUser(DhService.DELEGATED_AUTHENTICATION, config.getOwner());
workerRequestHandle = dispatcherConnection.getQueryProcessorRequestHandle(true);
return RemoteQueryClient.connectToNewProcessor(
log,
dispatcherConnection,
dispatcherToken,
processorToken,
workerRequestHandle,
[...]
});
The RemoteQueryDispatcher used to perform up to 5 startup attempts for a job/worker that failed to start. Since a failed authentication can't be retried, and a token is expired by the attempt to verify it, this no longer works, and the functionality has been removed; the dispatcher will now perform only one start attempt. Retries should be handled by the client; for example, the persistent query controller has a retry policy defined on each query.
Use Manual serialization for Auth Tokens
When authentication tokens are sent between clients and servers, they no longer use Java serialization as the format. Instead, we manually serialize and deserialize the objects. This ensures that we can be compatible with modules that might shadow the Enterprise libraries, such as when running Community workers in the Enterprise context.
withColumnDescription modified Parent Table
The withColumnDescription method would modify the parent table if the parent table already had column descriptions. If the parent table had no column descriptions, it was unchanged. With this change, the method always returns a new table when a description is changed. If your query depended on modifying the parent table when setting column descriptions, those descriptions will no longer be present in the UI, and you must update your query to set the description on the proper table.
Installer improvements and fixes
There is no impact from this change. It improves the robustness of some aspects of the install process.
Etcd install enhancements
If the installer process requires an etcd download and no JFrog Artifactory API key is provided, the installer will download it from a public Deephaven repository hosted on Google Cloud Platform. If the download fails, the installer will stop at that point in the process, making it easier to determine the cause of the failure. If an etcd rpm is present in the installer directory, then that will be distributed to the cluster's target nodes in lieu of a download.
Memoize Query ACL Application
When users fetch a table from a persistent query that defines ACLs, a series of complex query operations may be applied. These operations include filtering to valid rows, applying wouldMatch and updateView to provide per-cell access control, and performing a treeTable or rollup operation when a tree or rollup is filtered.
These operations can be quite expensive and many workspaces include the same table with a different combination of filters, sorts, and other customizations; therefore performance may be improved by applying the ACL operations only once per user per exported table. These operations are now memoized, which prevents re-application of the same ACL multiple times.
Schema import now compiles listeners
There are possible schema errors that only become apparent when the listeners are compiled. This compilation can now be performed when importing a schema, as part of the validation.
dhconfig schemas import now includes options to control whether to compile listeners when validating a schema:
--compile-listeners: compile listeners when importing (default)
--no-compile-listeners: do not compile listeners when importing
--lenient-validation: changes listener compilation errors into warnings
Disabling this validation step might be useful when importing a large number of schemas that are known to be correct.
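For example, to skip listener compilation when importing a large set of known-good schemas (a sketch; the exact form of the schema file arguments may differ):
/usr/illumon/latest/bin/dhconfig schemas import --no-compile-listeners /path/to/schemas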
RemoteQueryDispatcher Jetty SSL Configuration
The RemoteQueryDispatcher always starts the embedded web server to serve the registration endpoint for workers (RemoteQueryProcessors). The JettyServerHelper.Parameters class reads the parameters for the Jetty server from configuration, but if the configuration did not specify enabled, the SSL and authentication properties were ignored.
As of the Jackson release, the RemoteQueryDispatcher.webserver.enabled property only adds URLs for worker queue and usage information, rather than enabling or disabling Jetty overall. In the default configuration, these URLs were disabled, but the RemoteQueryDispatcher.webserver.sslRequired property was set to true and ignored by the system.
Additionally, the RemoteQueryDispatcher.tls.keystore property was not specified. This property now takes on a default value of /etc/sysconfig/illumon.d/auth-user/webServices-keystore.p12.
ACL Database SQL Initialization Script Path
Files in /etc/sysconfig/deephaven/illumon.d.${VARIANT_VERSION}/sql are no longer copied/maintained during the update process. The files are included in the installation, and may be found in /usr/illumon/illumon-db-${VARIANT_VERSION}/etc/sql.
Prevent Concurrent Thread Pool Usage for One PreemptiveUpdatesTable
When users subscribe to a PreemptiveUpdatesTable or alternatively change their
viewport, the worker must generate a snapshot for the newly subscribed data.
These update propagation jobs are executed on a limited thread pool (by default
400 threads, controlled by the NIO.driver.maxThreadCount
property), shared
with other Deephaven system components. While the update propagation job is
executing, it takes a lock on the PreemptiveUpdatesTable. If a second job was
scheduled while the first was executing, it would then take the lock and wait
on the first job to complete. This behavior could result in multiple jobs for
the same PreemptiveUpdatesTable consuming NIO driver threads.
Now, when an update propagation job is executing, a concurrently scheduled second job will detect the first job and signal it to re-execute after completing the current update propagation phase. This allows the second job to complete quickly, rather than tying up one of the limited threads in the pool for the duration of the first update propagation job.
Authentication Server uses GRPC
The Deephaven authentication server now uses gRPC as a wire protocol and stores relevant state in an etcd backend to improve scalability and failover.
In contrast with prior versions, the authentication servers are all symmetric: authenticating to a single authentication server allows a user to make requests of any of the authentication servers in the Deephaven cluster. To facilitate this shared state, when a client authenticates, it provides the server with a UUID, and in response the server returns a cookie to the client. The cookie is used to provide a consistent session across multiple servers or transport connections to the same server. As there is no persistent connection with the server, clients must renew this cookie on a regular basis, similarly to how they would periodically request a "ReconnectToken" in earlier versions. A hashed version of this cookie is stored in etcd to allow other authentication servers that may not have met the particular client yet to validate that the client was properly authenticated.
If an authenticated client is unable to contact any authentication server to renew its cookie before the cookie expires, the client transitions to an unauthenticated state. If any operation requiring authentication is subsequently attempted in the AuthenticationClientManager, the client will first try to re-authenticate, provided its original authentication method can be retried (e.g., default authentication with a key file can be retried; password authentication cannot).
Code Changes
The com.fishlib.auth.WAuthenticationClientManager class has been removed. All client code must migrate to io.deephaven.enterprise.auth.AuthenticationClientManager.
The AuthenticationClientManager class hides the gRPC implementation in io.deephaven.enterprise.auth.GrpcAuthenticationClientManager, which should not be used directly by client code. Client code and persistent query scripts can get the default instance of AuthenticationClientManager by calling AuthenticationClientManager.getDefault(). For example:
public static void main(String[] args) throws IOException, InterruptedException {
    final Configuration config = Configuration.getInstance(); // obtain configuration as usual
    // ... obtain login, pass, and user ...
    AuthenticationClientManager.getDefault().passwordAuthentication(login, pass, user);
}
The methods that create TokenFactory objects are now behind a TokenFactoryFactory interface, which the AuthenticationClientManager class implements. Example of use:
TokenFactoryFactory.TokenFactory tokenFactory = AuthenticationClientManager.getDefault().getTokenFactory("DbAclWriteServer");
tokenFactory.tryActionUntil(dbAclWriteClient::authenticate);
The io.deephaven.enterprise.auth.TokenFactoryFactory.TokenFactory provides improved resilience compared to the prior com.fishlib.auth.WAuthenticationClientManager.TokenFactory. In earlier versions, the TokenFactory would try to generate and verify a token for each authentication server. With only one authentication server, no retry would take place. The new TokenFactory uses a deadline-based mechanism so that transient failures, even from a single authentication server, are retried before reporting failure to the user.
Since it has been removed, references to com.fishlib.auth.WAuthenticationClientManager.DEFAULT should be replaced with io.deephaven.enterprise.auth.AuthenticationClientManager.getDefault().
Persistent query scripts that use the obsolete classes can be found with the following command.
sudo -u irisadmin /usr/illumon/latest/bin/iris iris_query_grep "com.fishlib.auth"
Authentication Plugins
All plugins will need to be recompiled to account for the new package names and other minor changes; please contact support with specific questions.
Property Changes
WAuthenticationClientManager.defaultPrivateKey is kept for backwards compatibility, but the preferred property is now AuthenticationClientManager.defaultPrivateKeyFile.
Shared/Interactive Console registers with Controller
Users now have the ability to share an interactive console with other groups or users. A shared console uses a single shared script-session, and updates to the scope are shared with all connected console sessions.
Interactive Console Setup
Interactive consoles are created in the "Create Console" dialog in the Swing front end by selecting the "Shared" option and typing a new "Shared Console" name. Options are provided to allow admins and viewers, and an option is provided for the query to self-terminate when there are no longer any users connected to it. The ability to connect to a shared console is not currently available in the web front end. See DH-13465.
Limiting Concurrent Input Columns with Merge
The merge process can now limit the number of input columns processed concurrently for a maximum throughput merge configuration. Instructions are described in the documentation for Merging Data.
Authenticate Table Data Protocol (TDP)
This change adds authentication to the Table Data wire protocol. Enforcement is at the table level. In practice, this affects Remote Table Data Services (RTDS).
- Local Table Data Service (LTDS) - out of scope, and covered by file system permissions.
- Data Import Server (DIS) - requires membership in a set of allowed groups, or * for unrestricted data access.
- Table Data Cache Proxy (TDCP) - matches the authenticated effective user to the table ACLs to determine access. This is at the table level; row-level enforcement still requires filtering by a proxy.
TableDataService now has an authenticate() method. This uses AuthenticationClientManager.getDefault().ensureAuthentication() (which triggers default authentication if not already authenticated at the time of the call).
The only time authenticate() needs to be called explicitly is when default authentication changes after a Table Data Service is created.
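A short sketch of that one explicit case (the variable name is illustrative):
// Default authentication changed after this TableDataService was created,
// so re-authenticate explicitly; authenticate() uses
// AuthenticationClientManager.getDefault().ensureAuthentication() internally.
tableDataService.authenticate();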
Changes required to enable ACL based table access checking:
There are several changes that enable this feature. In a new installation, these will all be handled automatically. When upgrading a system, there are several steps that must be performed manually. Instructions are below.
- There is a new system user tdcp with an associated private key priv-tdcp.base64.txt. This user and key are registered in dsakeys.txt.
- The db_tdcp process must run as irisadmin to protect the private key. This requires filesystem permission changes to match.
- There is a new ACL group, dis-tdp-readers, with tdcp as a member.
- iris-defaults.prop has new properties:
# For all dis processes
DataImportServer.allowedGroups=dis-tdp-readers
# for all tdcp processes
[service.name=db_tdcp|db_tdcp_query] {
AuthenticationClientManager.defaultPrivateKeyFile=/etc/sysconfig/illumon.d/auth/priv-tdcp.base64.txt
}
Migrating an existing installation
WARNING: You must follow migration instructions in DH-12700 and ensure the authentication server is running before proceeding.
Newly updated systems won't have the new user and ACL. Use the following steps to add them (running on an admin machine):
- The upgrade process automatically creates the "tdcp" user and keys, but you must add the user to the appropriate ACL group with the following command:
/usr/illumon/latest/bin/iris iris_db_user_mod -create_user -user tdcp -group dis-tdp-readers
- On all machines in the cluster, make sure the TDCP process is running as irisadmin. If your environment uses a custom hostconfig, the RUN_AS setting for db_tdcp must be changed to the admin user. The default hostconfig is /etc/sysconfig/illumon.confs/illumon.iris.hostconfig, which is symlinked to /etc/sysconfig/illumon. This file sources /etc/sysconfig/illumon.confs/hostconfig.system, which has a section like this (note the RUN_AS line):
db_tdcp)
RUN_AS="${DH_ADMIN_USER}"
WORKSPACE=/db/TempFiles/$RUN_AS/$proc
EXTRA_ARGS="-j -Xmx4g -j -Xms4g -j -DDataBufferPool.sizeInBytes=4294967296 -j -Dservice.name=db_tdcp"
;;
- Restart db_tdcp services.
Special instruction for no-root installers
The db_tdcp process now runs as irisadmin. The regular installer processes automatically update the permissions on the log directory. When installing with the no-root process, you must change these permissions manually:
chown -R irisadmin:dbmergegrp /var/log/deephaven/tdcp
chmod -R 775 /var/log/deephaven/tdcp