Detailed Version Log Deephaven 1.20230511

Note

For information on changes to Deephaven Community, see the GitHub release page.

Certified versions

• 1.20230511.277
• 1.20230511.309
• 1.20230511.324: if the Python client is given the wrong address, it will keep trying to connect instead of failing after two minutes (DH-15673); a client-side workaround is sketched after this list.
• 1.20230511.382
• 1.20230511.403: do not upgrade a Kubernetes cluster to this version.
• 1.20230511.422
• 1.20230511.450
• 1.20230511.474
• 1.20230511.488
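
Because of the DH-15673 caveat on 1.20230511.324, a misaddressed Python client can retry indefinitely rather than failing fast. One possible client-side guard, shown below as a minimal sketch using only the Python standard library, is to impose the deadline yourself; `make_session` is a placeholder for whatever connection call your own code uses.

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError

def connect_with_deadline(make_session, timeout_seconds=120):
    # Run the caller-supplied connection callable on a worker thread so a
    # deadline can be enforced even if the client library retries forever.
    pool = ThreadPoolExecutor(max_workers=1)
    future = pool.submit(make_session)
    try:
        return future.result(timeout=timeout_seconds)
    except TimeoutError:
        # The hung attempt is abandoned rather than cancelled; the worker
        # thread may linger, but the caller gets a prompt, clear failure.
        raise RuntimeError(
            f"client did not connect within {timeout_seconds}s; "
            "check the server address (see DH-15673)")
    finally:
        pool.shutdown(wait=False)
```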

Detailed Version Log: Deephaven v1.20230511

Patch  Details
506DH-17077: Make DeploymentGeneratorTest pass again
505DH-16829: Update worker overhead properties
504Merge updates from 1.20221001.362
  • DH-17072: Do not write temporary-state DH_ properties to cluster.cnf
  • DH-17026: Publish EngineTestUtils (backport of DH-15687)
  • DH-17058: Make pull_cnf disregard java version
  • DH-16884: Add configuration for default user table format to be parquet
  • DH-17048: Fix controller crash and shutdown issues
  • DH-17014: Make cluster.cnf preserve original source and prefer environment variable
503DH-17047: Fix up merge from 20221001.356
502DH-17045: Address Test Merge issue in Vermilion
501DH-17011: Forward Merge of promoted tests from Jackson and promotions in Vermilion
500Backport DH-16948: Always use same grpcio version for Core+ Python proto stub building
  • Merge updates from 1.20221001.356
  • DH-17031: Minor corrections and formatting for QA automation How-to
  • DH-16936: make recreating schemas watch more efficient
  • DH-16717: Add heap usage logging to web api, TDCP, DIS, LAS, controller, and configuration server
  • DH-17004: change closeAndDeleteCentral to clean up tdcp subscriptions
  • DH-17000: Correct improper test promotion in Jackson
  • DH-16888: Preserve original cluster.cnf when regenerating cluster.cnf w/ defaults
  • DH-16599: Bard Mar 2024 test case updates for qa
  • DH-16986: Update for flaky results from merge test starting at Bard
  • DH-16887: Fix test for DH-11284 starting at Bard
  • DH-16797: Change git location on QA testing systems
  • DH-16996: Forward merge of tests fixed in Bard to Jackson
  • DH-16992: Promoting Jackson level tests to RELEASED
  • DH-16979: Fix for CSV tests Jackson and later
  • DH-16663: remove cached data when there are no active subscriptions
  • DH-16934: Fix permissions check for writing workspace data
  • DH-16908: Fix dry run in iris_keygen.sh
  • DH-16851: Improve qa results setup docs
  • DH-16826: Select/Deselect All for OneClick Lists in Export dialog (swing)
  • DH-15247: Set DH_ETCD_IMPORT_NODE default value to the first config server
  • DH-16675: Account for worker overhead in dispatcher memory utilization
499DH-16702: Vermilion April2024 test case updates for qa
498DH-16958: Backport DH-16868 - Check if shadow package already added before adding again
497DH-16875: Fix CSV import tests
496DH-16873: Update and correct "testScript" section of automated QA tests
495DH-16716: Parameterized logging broken in Vermilion
494DH-16847: Update and correct Dnd testing scripts
493DH-16836: Fix forward merge anomaly
492Merge updates from 1.20221001.339
  • DH-16813: QA testing git update to Jackson
  • DH-16818: QA Testing System file relocation and documentation updates
  • DH-16072: Jackson Dec2023 test case updates for qa
  • DH-16480: Documentation and support for QA_Results system build
  • DH-16794: better handle export of nonexistent routing file
  • DH-16762: Fix C# docfx task (need to pin older version)
  • DH-16584: Make internal installer use correct sudo when invoking iris_db_user_mod
  • DH-16586: Improve qa cluster cleanup script
  • DH-16640: fixes for tests failing on bard and later revisions
  • DH-16708: Improve import script on qa results
  • DH-16698: Update BHS images to fix a broken rhel8 test
  • DH-16752: Fix installer tests getting null clustername
  • DH-16605: Use grep before sudo sed to avoid root when migrating monit
  • DH-16406: Improve jackson nightly installer test debugability
  • DH-16718: Fix test cases based on CommonTestFunctions refactor
  • DH-16706: ColumnsToRowsTransform.columnsToRows fillChunk does not set output size
  • DH-16700: Ensure QA results setup is maintainable
491DH-16750: Fix temporary and auto-delete scheduling checks
490DH-16542: CUS should trim client_update_service.host - fix for Envoy
489DH-15013: Fix upload CSV from Web UI for timestamp columns
488Fix unit test failure due to expanded assertTableEquals checks
487Fix forward merge conflict in Core+.
486Merge updates from 1.20221001.325
  • DH-16469: Bard Feb 2024 test case updates for qa
  • DH-16569: Backport DH-15882 to fix Pandas data frame view bug
  • DH-16149: Improve npm build caching in CI
  • DH-11512: handle '*-OLD' directories better
  • DH-16672: EmptyToNullStringRegionedColumnSource bypasses index narrowing in grouping
  • DH-16623: Unit test fix from .321
  • DH-16623: Index and GroupingBuilder .hasGrouping() should only look at locations relevant to the desired index
  • DH-16624: ShiftedColumns Interacts with Time Parsing
  • DH-16628: whereIn/whereNotIn with Empty Set Tables can Fail
  • DH-16597: check for routing to export before opening output file
  • DH-16591: Fix reading Parquet files with Mixed dictionaries and Offset Indices
  • DH-16443: Add sudo -u DH_MONIT_USER for installer when checking if monitrc needs migration
  • DH-16408: Do not use yum on systems with dnf
  • DH-15523: Allow config_packager to run as irisadmin when irisadmin is monit user
  • DH-14156: improve merge query and dhctl feedback when tailer ports are disabled
  • DH-14169: Fix message when purge fails
  • DH-16363: Remove kubectl from VM base images
  • DH-16442: Make ubuntu monit de-rooting use DH_MONIT_USER instead of DH_ADMIN_USER
  • DH-16113: Bard Jan 2024 test case updates for qa
  • DH-16451: upgrade npm to latest lts/fermium version
  • DH-16450: avoid a deadlock due to lock inversion
  • DH-16053: correct minor errors in DataImportChannel
  • DH-16367: Make INTERNAL_PKI=true work correctly on mac
  • DH-16354: Make INTERNAL_PKI=true cert expiry limits configurable
  • DH-15467: Change superfluous gitlab url into github url
  • DH-16107: NPE in whereIn Error Handling
  • DH-16347: add synchronization to getGroup... methods in AbstractDeferredGroupingColumnSource
  • DH-16499: improve feedback in 'dhconfig routing export' when there is no routing file in etcd
  • DH-15729: Allow resources to be skipped in Test Automation
  • DH-16443: Make ubuntu de-rooting grep on monitrc before trying to sed the file
485DH-16468: Vermilion Feb 2024 test case updates for qa
484DH-16669: Schema with functionPartitionInput=true generates a broken logger
483DH-16622: Address inconsistencies in automated tests for DDR
482DH-16632: Updated controller_tool tests' support file locations and stability for Vermilion and following
481DH-16592: Find healthy etcd node for etcd snapshot
480DH-16534: Importing Jackson ACLs to Vermilion or later fails because SystemACLs are not recognized
479DH-16612: Avro Kafka Ingestor error with extra consumer fields
478DH-16580: Bad Origin Causes NPE in Auth Server
477DH-15070: Make proto re-builds check for "use shadow package" before altering source
476DH-16542: CUS should trim client_update_service.host
475Release note updates.
474DH-16015: Vermilion Dec 2023 test case updates for qa
473DH-15598: Additional schema validation fixes
472DH-15598: Add merge validate pqs for new tables
471DH-16387: Fix R setup in test automation from forward-merge
470DH-16275: Fix test automation anomalies
469Merge updates from 1.20221001.308
  • DH-16418: Fix DiskBackedDeferredGroupingProvider changing post-mutator "No groups found" to "No grouping exists"
  • DH-16382: Perform monit migration using systemd override.conf
  • DH-16206: Remove duplicated gen-keys.sh script in jackson
  • DH-16401: Fix Groovy script defined classes imported with db.importClass() break internal formulas
  • DH-14283: DeephavenNullLoggerImpl should use dynamic pool
  • DH-16237: change user buffer caching to restore backpressure
  • DH-14938: Properly cache downloadDocFx task, to reduce build flakiness
  • DH-16291: Add tags to test with no data and address one breaking test for Bard
  • DH-16273: backport DH-14452 to fix logging error
  • DH-15740: Test certificate fingerprints so we always update certs when they change
  • DH-16262: Wrap calls from groovy to gsutil inside bash -ic
  • DH-16252: Update USNYSE Business Calendar to Include 2026
  • DH-16242: CART Leaks Connections when Snapshots are Slow, Exception can escape in refresh()
  • DH-16309: EmptyToNullStringRegionedColumnSource should copy and wrap underlying provider by default
  • DH-16309: Fixed loss of grouping when SourceTable.MAP_EMPTY_STRINGS_TO_NULL == true
  • DH-16300: Test Automation: have minorversion flow to results summary
  • DH-16279: Add MessageListener example implementation to SBEStandAlone jar
  • DH-16262: Wrap calls from groovy to gsutil inside bash -ic
468Revert squashed forward-merge
467DH-16415: Fix a race in GrpcLogging initialization.
466DH-16041: Move installer tests to jdk17
DH-16205: Remove nightly core+ tests (vermilion only)
465DH-16328: Add release notes for DH-11713
464DH-16313: Fixed NPE on legacy metadata overflow file access
463DH-15665: Remove internal installer workarounds for jackson+rhel9
462DH-16265: Make LocalMetadataIndexer methods public
461DH-16278: Automation Should Detect "Stuck" PQ Tests
460TestIntradayLoggerFactory fixes.
459QA forward merge changes.
458Merge updates from 1.20221001.299
  • DH-16243: Configure high-cpu integration test box on j17 CI
  • DH-16128: Fix grouping propagation when inputs are filtered
  • DH-16130: Ensure blank line in changelog is handled consistently.
  • DH-16202: QA cluster maintenance script usability
  • DH-15913: Segment parquet tests to an isolated high-CPU box
  • DH-16200: Fix Automation/src/test/resources/testScript/engine/updateby directory duplicity
457Merge updates from 1.20221001.296
  • DH-16131: update DH revision name map for QA results analysis query
  • DH-16087: Add HTTP security headers to Envoy configuration
  • DH-16192: Always set DH_ETCD_IMPORT_NODE to a single machine
  • DH-16181: Fixed MapCodec ignoring offset and length params
456Merge updates from 1.20221001.292
  • DH-16176: Backport of DH-15469 (Use external SSH executable for git)
  • DH-15157: CART Error Propagation and Reconnect Counting Fixes
  • DH-16128: Fix grouping propagation when inputs are filtered
  • DH-15876: Add Test Automation support for configuring java tests
455Merge updates from 1.20221001.291
  • DH-15493: Enable version suffixes for DbInternal tables
454Merge updates from 1.20221001.290
  • DH-16114: Test Automation: revert bad test case that was released
  • DH-16055: Fix sed substitution when numbers and wildcards overlap in vm-tools README
  • DH-16103: Remove etcd passwords from log output
  • DH-16014: Test Automation: add test case updates for December
  • DH-16108: Test Automation: fix NPE on template lookup
  • DH-16090: Test Automation: pull back integration logs even on fatal condition
  • DH-16078: Test Automation: run locally via installer
  • DH-15875: Allow disabled tests to run in testAutomation - control by config only
  • DH-15653: Add tagging to Test Automation
  • DH-15157: CART skipping reconnection attempts
453Merge updates from 1.20221001.289
  • DH-16098: update to test analysis query to remove duplicate data and add MinorVersion field
  • DH-16096: Better check for anonymous mysql users before we attempt to fix them
  • DH-16039: Reenable rhel9 installer test
  • DH-16096: Fix nightly installer test mysql error (anonymous user problem)
  • DH-14113: Use irisrw instead of root when possible in dbacl_init.sh
  • DH-16074: update controller tool tests to sudo use consistent with client env
  • DH-15988: fix logging error
  • DH-15275: Add release-focused testcases to Jackson July-Dec 2023
  • DH-16061: update controller_tool test for null pointer message
452DH-16234: Publish PQ details into session scope
451DH-16212: Add CORS filter to workers / web api server
450DH-16204: Refactor Core+ locations to better support Hive format
449DH-16221: Controller now allows clients to resubscribe.
448DH-16201: Fix intellij-only error in buildSrc/build.gradle
447DH-16141: Sort ProcessEventLog in performance queries
446DH-16138: Backport relevant csv import fixes from grizzly to vermilion
445DH-16111: Allow Flight Put requests for exports to support input tables an
444DH-16135: Core+ workers should report update errors
443DH-16136: Core+ performanceOverviewByPqName Timestamp Filter is Broken
442DH-16120: Allow core+ pip to leverage pipcache
441DH-16009: Fix auto-capitalization of field names in ProtobufDiscovery
440DH-16123: Allow queryviewonly users to restart queries they have permissions to
439DH-16054: Fix HierarchicalTables from Core+ workers not opening
438Spotless application.
437Correct Javadoc error differently.
436Correct Javadoc error.
435DH-16106: Index for coreplus:hive Formatted Partitions
434DH-16004: Fixed Csv import error that happened when value in last row, last column is empty
433DH-15598: Fix DataQualityTestCaseTest failure
432DH-16049: InteractiveConsole cannot start Merge worker
431DH-15598: Fix integration test failures from validation fixes
430DH-16086: Update Core+ to 0.30.4
429DH-15993, DH-15997, DH-15950: Fix dhcVersion handling and dnd publishing
428DH-15598: Schema validation fixes from QA cluster monitoring
427DH-16066: update generate_loggers for consistent user
426DH-15871: Etcd upgrade release note clarifications
425DH-16057: Core+ Python Client Fixes
DH-16059: Pin Core+ Python Client Requirements
DH-16067: Add ping() method to Python session manager
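Patch 425 adds a ping() method to the Core+ Python session manager (DH-16067). A minimal sketch of using it as a liveness probe follows; the SessionManager import path, connection URL, and authentication call are assumptions for illustration rather than details taken from this log.

```python
# Sketch only: the import path, connection-info URL, and auth call below are
# assumptions; ping() is the method added by DH-16067, used here with assumed
# "True when the server responds" semantics.
from deephaven_enterprise.client.session_manager import SessionManager

session_mgr = SessionManager("https://deephaven-host:8000/iris/connection.json")
session_mgr.password_authentication("username", "password")

if not session_mgr.ping():
    raise RuntimeError("Deephaven server did not respond to ping")
```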
424DH-16064: Core+ R and C++ clients README improvements
423DH-16064: Core+ R and C++ clients README improvements
422DH-16047: Allow Arbitrary pydeephaven.Session Arguments
DH-16048: Add Frozen Core+ Requirements to Build
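Patch 422's DH-16047 allows arbitrary pydeephaven.Session keyword arguments to be passed through. For reference, a plain pydeephaven session built directly with such keyword arguments might look like the sketch below; host, port, and session_type are standard pydeephaven parameters, and the values are placeholders.

```python
# Sketch: constructing a Community (pydeephaven) Session with explicit keyword
# arguments; DH-16047 is about forwarding kwargs like these from Core+ code.
from pydeephaven import Session

session = Session(host="localhost", port=10000, session_type="python")
t = session.empty_table(5).update(["X = i * 2"])  # trivial smoke test
session.close()
```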
421DH-16046: Use latest iris-plugin-tools version
420Merge updates from 1.20221001.281
  • DH-16038: Disable flaky rhel9 installer test
  • DH-15964: Fix python3.6 in centos 7 base image
  • DH-15964: Additional tweaks for base image creation
  • DH-15964: Improve base image creation process
  • DH-16005: Test Automation: improve readme and env var passthrough
  • DH-15933: Test Automation: add Nov testcases to Bard
  • DH-15964: Build and consume per-release base images
  • DH-15964: Add rhel8/9 base images
419DH-16036: Fix Core+ only able to read table.parquet files when using Extended layouts
418DH-16031: Update Core+ to 0.30.3.
417DH-15825: Reject auth server and controller reload requests in Kubernetes
416DH-16028: Fix logic bug in decision for etcd lease recreation in Dispatcher
415DH-15276: Test Automation: add Sept-Nov testcases to Vermilion
414DH-16017: Fix Kubernetes setup-nfs-minimal script directories after NFS4
413DH-16006: Allow ERROR with UNAVAILABLE to pass dispatcher liveness.
412DH-16002: Do not show "Preserve Owner" option for non-superusers
411DH-15920: Add overview to Core+ javadocs with links to DHE/DHC docs and javadocs
410DH-16006: Write Integration Test for Dispatcher Liveness
409Release note updates.
408Merge updates from 1.20221001.276
  • DH-16008: Dispatcher allows workers to miss TTL
407DH-15976: Formatting Rule doesn't use default set by user
406DH-15830: Fixed upgrade-nfs-minimal.sh and DbPackagingTools.groovy
405DH-15939: Option to restrict exported objects by default
DH-15951: Permit multiple PQs per TestCase
404DH-15998: Core+ Kafka Ignore Columns Test should not display all queries
403DH-15991: Fix integration test issues from forward merge
402DH-15830: Changed Helm chart to use NFS 4.1 for RWX PVCs
401Merge updates from 1.20221001.275
  • DH-13351: correct default value in release note
  • DH-15610: Allow Staging test results to segment exit code
  • DH-15940: Integration Test Logs have Wrong Paths
  • DH-15936: fix bash3 + PS4-subshell bug for mac installer
  • DH-15763: Test case updates for Oct 2023
  • DH-15866: Set republishing to use jdk8 by default
  • DH-14989: Use self-signed "internal" PKI for nightly installer tests
  • DH-15897: Fix JDBC testcases
  • DH-15886: Fix controller stop scheduling issue
  • DH-15475 Segment test automation for more timely FB run completion
  • DH-15854: Test Automation: logging usability tweaks
  • DH-15641: Add Reset, close disconnected child panels
  • DH-15882: Pandas data frame view breaks when data frame is empty
400DH-15949: Fix bug introduced by .396. Fallback to plain encoding breaks with arrays.
399DH-15762: Improve matplotlib and seaborn testcase queries
398DH-15985: Fix export not respecting superuser mode
397DH-15957: updated release notes
396DH-15949: Fix ParquetTableWriter writing empty dictionaries resulting in unreadable parquet files.
395DH-15977: Update Vermilion to Community Core 0.30.1
394DH-15728: Rework DDR tests for flakiness
DH-15957: Make flag available in up/down actions also available in start, stop, restart actions
393DH-15788: Fix Java 11 only logback dependency.
392DH-15788: Unicode Characters Crash Core+ Worker
391DH-15132: Clear selection in Query Monitor before creating a new draft
390DH-15792: Unclear IncompatibleTableDefinitionException with Core+ Kafka ingestion
389DH-15942: Fix etcd config clobber with helm upgrade in k8s
388DH-15937: Update web UI packages to 0.41.4
387DH-15931: Update Core+ to Community Core 0.30.0
386DH-15869: Tool to log system tables from Core+ workers
385DH-15929: Fix bug with user-provided etcd password in k8s environments
384DH-15928: R dockerized build tries to download from wrong github URL
383DH-15911: Logging improvements in k8s workers.
382DH-15354: Fix usage of outdated Java flags like 'PrintGCTimeStamps'
381DH-15789: Backport DH-15840 and handle failed reconnect properly.
380DH-15910: Load balance across etcd endpoints for etcd clients
379DH-15894: Add helm chart toggle for envoy admin port in k8s deployments
378DH-15908: Linux/MacOS launcher tar not available for download
377DH-15903: update generate_loggers tests for Vermilion inconsistency
376DH-15902: Update controller tools test for Vermilion inconsistencies
375Release notes update.
374DH-15862: Include sbom when republishing to libs-customer
373DH-11431, DH-15855: Equate parquet:kv with coreplus:hive
372DH-15874: Support writing Deephaven format tables from Core+ workers
371DH-15893: DH-15884 release notes in wrong file
370DH-15880: Relabel workers as "Legacy" and "Core+"
369DH-15884: Support CURL_CA_BUNDLE env var for curl cmdline in Core+ C++ client Session Manager
368DH-15855: Java8 Fix
367DH-15855: Add support for multi level partitioned DH format tables in Core+
366DH-15877: Fix core+ table-donor/getter test for Envoy
365Fix spotless failure in .364 version
364Merge updates from 1.20221001.269
  • DH-15861: Fix double-start of persistent queries
363DH-15860: Arrays.sort in Predicate Pushdown is Not Thread Safe
362DH-15856: cleanupAfterYourself Task is Too Aggressive for Core+ Javadoc/Pydoc
361Merge updates from 1.20221001.268
  • DH-15838: remove obsolete jvm args from csharp open-api client
360DH-15141: Save and apply on next restart banner not showing
359DH-15815: Java 8 javadoc fix.
358DH-15815: Java 8 compilation fix.
357DH-15815: Update Vermilion to Community 0.29.1
356Merge updates from 1.20221001.267
  • DH-13351: corrections to readme and default value
  • DH-15827: ReadOnlyIndex should return refcount for tests.
  • DH-15827: UpdateBy incorrectly copies Index without clone
  • DH-15755: Re-enable simplified input table test case
  • DH-15809: Avoid duplicating contents of etcd configuration files
  • DH-13351, DH-11285, DH-15821: make Tailer more resilient to user data storms
  • DH-15808: ShiftedColumnSource Context Reuse
  • DH-15812: TableUpdateValidator result should be composable
  • DH-15806: ReplicatedTable RedirectionIndex shift uses updates linear in table size not viewport size
  • DH-15703: Test Automation: use REPLACE mode for serials to ensure updated test scripts
  • DH-15474: Ensure stderr and stdout are populated in jenkins and binary log for command line tests
  • DH-15614: Test Automation: test case improvements for Sept 2023
  • DH-15761: Backport excludedFilters in test automation
  • DH-15772: Improve Error Messages in PropertyRetriever
  • DH-15819: Fix ETCD ACL provider using shared message builders
  • DH-15586: Teach iris_keygen to pass -legacy flag when openssl version > 3.0
  • DH-15672: Deephaven Launcher 9.07 - DeephavenUpdater honors command line URL over appbase in existing getdown.txt
  • DH-15737: More resilient etcd lease and kv error handling in the Dispatcher
  • DH-15652: Refactor legacy remote client test cases
  • DH-15635: Ensure test automation cluster scripts configured consistently.
  • DH-15684: Developer readme: allow Dnd version to be auto-calculated during upgrade.
  • DH-15600: Fixed Table leak when filtering Pivot widget
  • DH-15739: Re-enable forward-merged unit tests
  • DH-15607: create tests to validate controller_tool
  • DH-15719: added tests for dhconfig:properties
  • DH-15697: Update Jackson jetcd to 0.7.5. Configure waitForReady and deadlines for etcd RPCs
  • DH-15586: Official support for RHEL9 in installer
  • DH-15677: generate-iris-keys and generate-iris-rsa should not overwrite existing files.
  • DH-15660: ShiftedColumns must end in Underscore
355DH-15777: Configurable github URL prefix for DnD client builds
354DH-15811: Add release note
353DH-15789: Fix CART double notification on disconnect
352DH-15132: Fix new draft selection reset when viewport contains a single row
351DH-15683: Bump plugin tools to 1.20221001.008
350DH-15683: Support DH_DND_VERSIONS=auto
349DH-15787: Upgrade seaborn from 0.12.2 => 0.13.0
348DH-15785: DnD workers break with -verbose:class
347DH-15746: Tokenize values to helm chart for Kubernetes deployments
346DH-15441: CUS reload may not show success message
345DH-15779: Hooks for SAML group synchronization.
344DH-15776: Speed up digest generation for CUS via doing digests in a thread pool
343DH-15735: Add kafka dnd manual test steps
342DH-15743: Fix error propagation of source()
341DH-15751: Revert DH-15141, fix query draft switching to Settings on update
340DH-15713: Test uplifts
339DH-15734: BatchQuery hangs when creating input tables
338DH-15667: Improve Table Location Creation Speed
DH-15742: Add very verbose TableDataExporter and RemoteTableLocation logging
337DH-15736: Add missing wait_for_ready for auth ping in python DnD client
336DH-15718: Allow KafkaTableWriter to ignore committed offsets and seek based on partitionToInitialOffsetFallback
335Missing version
334DH-15732: Always run publishToMavenLocal before invoking any DnD GradleBuild tasks
333DH-15733: Set gRPC calls to use wait_for_ready in the DnD python client
332DH-15725: Input Table editors are broken in Swing
331DH-15141: Show "Save and apply on next restart" banner immediately after picking "Save and apply on next restart"
330DH-15716: Fixed a race condition in the controller server to client gRPC
329DH-15705: Automatically clean old-versioned artifacts out of development environments.
328DH-15704: read value of javax.net.ssl.trustStore as a file when possible
327DH-15696: Fix DnD shadowJar + intellij IDE breakage
326DH-15691: Integration test for Kafka offset column name.
325DH-15394: Remove overeager reauth code from controller client
324DH-15674: ArrayBackedPositionTable index coalescer misuse and index errors.
323DH-15638: Include Barrage Client in DnD Worker
DH-15639: DndSessionFactory should allow authentication using a token
322Revert mistaken commit.
321Mistaken commit.
320DH-3139: Add capability for tailers to clean up processed files
319DH-15691: Allow changing KafkaOffset column name in DnD Ingester
318DH-15499: Add automation test cases for matplot lib and other tests
317DH-15640: allow user table lock files to be bypassed
316DH-15663: DnD AuditEventLog fixes including for KafkaTableWriter.
315DH-15654: Fix for worker-to-worker table resolution
314DH-15644: Allow testcases to auto-select engine.
313DH-15681: Fix bundle script on MacOS.
312DH-15687: Publish EngineTestUtils so customers/plugins can write better tests
311DH-15577: Publish DnD jars whenever we publish iris jars
310DH-15673: Use RPC timeouts in the DnD python client
309DH-15681: Upload R and C++ Bundles to GCloud
308DH-15542: C++ Client should propagate errors from server when contained in trailers
307DH-15395: Improve documentation of ControllerClientGrpc
306DH-15628: Break up large audit event log messages into multiple log calls
305DH-15625: Fix link to config file when upgrading in k8s
304DH-15469: Update jgit SshSessionFactory to a more modern/supported version (changing iris_admin docker file for k8s to include ssh)
303Merge updates from 1.20221001.251
  • DH-15627: Promote stable QA tests to released
  • DH-15606: Envoy integration fails in environments where IPv6 DNS is enabled
  • DH-15626: Improve qa-results dashboard query
302DH-15649: Provide a dockerized DnD R client build for RHEL 8
301DH-15643: Creating source bundles for R and cpp should force a well defined filename
300DH-15637: Fix C++ client terminating process if AuthClient fails initial Ping
299DH-15636: Update fix: DnD historicalPartitionedTable fetches intraday data
298DH-15563: Enterprise R client including SessionManager and DndClient
297DH-15488: Test Automation: add option to run scripts from community docs
296DH-15546: Add testcase for nightly snapshot monitoring
295DH-15596: DeephavenCombined needs to merge service files
294DH-15636: Fix DnD historicalPartitionedTable fetches intraday data
293Merge updates from 1.20221001.247
  • DH-15616: Fix a race condition in RegionedPageStore
  • DH-15609: Fix JsTable leaking table handles
  • DH-15540: better support for loggers with generics
292DH-15505: Only close DnD Worker channels on worker shutdown.
291DH-15629: Fix race conditions with DnD Mark / Sweep
290DH-15617: Disable Transactions for DnD Kafka Table Writer
289DH-15469: Update jgit SshSessionFactory to a more modern/supported version
288Merge updates from 1.20221001.244
  • DH-15587: Fix broken README link in cluster setup
  • DH-15274: July 2023 TestCase updates for qa
  • DH-15451: Fixed Wrong Parenthesis on Console Attachment Option
  • DH-15501: Fixed whereDynamicNotIn forwards to wrong method
  • Back-porting DH-15246: Allow commas in ticket list for github PR title
  • DH-15584: Create tests to validate generate_loggers script
287DH-15605: Avro Kafka ingestion hasField performance improvement
286Fix typo in DnD relocation string
285DH-15473: Implement PartitionedTable fetches for DnD Database. Handle Location addition and removal
284Changelog typo.
283DH-15519: Removed Create Console and Attach Console option from Swing for DnD Workers
DH-15589: Fixed Help About Dialog display
DH-15451: Wrong Parenthesis on Console Attachment Option
282Changelog typo.
281Release note updates.
280Merge updates from 1.20221001.242
  • DH-15592: Type of ShiftedColumn results in view are incorrect
  • Changelog typo.
279Merge updates from 1.20221001.240
  • DH-12084: officially support rhel8
  • DH-14983: Add DH_USE_EPEL flag to allow disabling epel repo
278Merge updates from 1.20221001.239
  • DH-15541: Percolate integration test exit codes back to jenkins
  • DH-15352: add release notes for .331 change
  • DH-15562: Make internal deployer use apt update before apt install
  • DH-15545: Don't use symbol tables for rollups with constituents
  • DH-15414: Only use fully qualified /usr/bin/systemctl to control monit, never use service monit
  • DH-15544: make NullLoggerImpl pool sizes configurable
277DH-15556: increase robustness and diagnostics in db.replaceTablePartition
276DH-15574: Fix creation JSON field parsing.
275DH-15581: Dictionary MatchFilter with invertMatch Returns no Results when Keys not Found
274DH-15510: Allow customers to provide supplemental requirements during image build
DH-15574: Option to Create Ephemeral Venvs for DnD Workers
DH-15561: Cannot Create DnD Kubernetes Merge Worker
273DH-15560: Fix DND Ability to read enterprise DbArray columns
272DH-14479: Add specific instructions for auth client manager migration
271DH-15458: Move all cert-manager conditional configs to iris-endpoints.prop in K8S envs
270DH-15171: Fix issue with CSV Import using DHC parser failing to recognize Import sourceColumn attribute
DH-14660: CSV importer ignores/mishandles ImportColumn sourceName attributes
DH-15265: Fix issue with use of SinglePartition when Partition column is in source
DH-14489: Fix issue with SchemaEditors Preview Table functionality
269DH-15559: Truststore population fix for certain K8S environments
268DH-15530: Add a SessionManager to the C++ client
267Merge updates from 1.20230131.197
  • DH-15513: print less of QueryScope in MergeData
  • DH-15160: Avoid calling sudo in prepare_filesystem if we can test files without it
  • DH-15524: add code path to lenient schema import
  • DH-14639: Automatically fix jars which lack an embedded pom, for sbom completeness
266DH-15552: Publish DnD Pydoc
265DH-15516: Publish Javadocs on DnD Java Client
264Make release note edited on deephaven.io consistent.
263DH-15491: Dynamic Kafka Partition Rollover
262DH-15301: Fix error upon closing DnD Python client session manager
DH-15528: DnD Python Client Integration Test
261DH-15428: Cannot log in to Swing client on a cluster with a private PKI certificate and Envoy
DH-15384: In a Kubernetes cluster created with a PKI cert, iris_db_user_mod will time out and fail
DH-15385: After switching a Kubernetes install from a public cert to a private PKI cert, the launcher fails with a PKIX error
260Merge updates from 1.20230131.195
  • DH-15463: JdkInternals getUnsafe() doesn't work with ManifestClassPath jar (Windows IntelliJ) and Java 8
259DH-15517: Fix DnD Python Client
258DH-13736: update digest algorithm to sha256 during private key generation
257DH-15506: Subscription test for C++ controller client; fix for ControllerHashTableServer SE_PUT message
256DH-15507: Make /db/Users mount writeable in K8S
255Merge updates from 1.20230131.194
  • DH-15497: Test Automation README improvements
  • DH-12216: Use new QA sql server for JDBC import test
  • DH-15425: Improve automation test README for developer workflows
  • DH-13869: Enable more test cases in automation
  • DH-15454: Do not let npm write bin-links (preventing jenkins build instability)
  • DH-14671: Write shell test stdErr stdOut to file
  • DH-15399: Ensure test case metadata is not overwritten by default.
  • DH-15352: port Bessel correction from community to Enterprise
  • DH-15413: Add Logging for newInputTable Fails Silently
  • DH-15160: Allow installing as irisadmin if irisadmin is also DH_MONIT_USER
254DH-15326: Missing lzma from Python 3.9 built by installer on CentOS
253More release note updates.
252DH-15457: K8S pod startup contingent on dependent service availability
251Correct release note heading.
250Changelog update.
249DH-15426: Initial R wrapping for Auth and Controller clients.
248DH-15367: BYO PVC, allow configs to mount secrets in k8s workers, other containerization improvements
247DH-15477: Add javadoc to DnD build.
246DH-15416: StackOverflow in CatalogTable where()
245DH-15446: close DIS index binlog files on last release
244DH-14982: DnD Kafka Ingestion
243Merge updates from 1.20230131.193
  • DH-15470: Fix superusers unable to create some query types from Web UI
242DH-15322: Allow customer provided JARs for DnD Workers
241DH-15468: Switch out deprecated community class in DnD initialization.
240Add release notes link to DHC.
239DH-15424: Can not download Swing launcher on kubernetes installation
238DH-15461: Fix dispatcher registration race
237DH-15460: enable strict yaml parsing to avoid duplicate map keys in data routing file
236Changelog update.
235Update release note text.
234DH-11431: Add DnD support for Parquet hierarchical and fragmented file sets
233DH-15444: Update DHC version to 0.27.1
232Merge updates from 1.20230131.192
  • DH-15440: Use temurin (adoptium) jdk repos for ubuntu installs
  • DH-15419: Use packages.adoptium.net instead of adoptopenjdk.jfrog.io
  • DH-15432: Fix broken syntax in installer's new TarDeployer()
  • DH-15369: Fix MultiSourceFunctionalColumn Prev issue
  • DH-15403: Reenable BinaryStoreWriter C# publishing
  • DH-15433: Fix republishing job for sbom extension
231DH-15434: Update deephaven-csv-fast-double-parser-8 dependency
230DH-15397: Controller client should clone PQ before returning it
229DH-15404: Fix related broken integration tests
228DH-15388: Initial DnD C++ client: Controller client
227Merge updates from 1.20230131.190
  • DH-15422: Prevent admin_init from being executed twice
226DH-15404: Use Java library for Throwable logging
225DH-15234: Controller duplicate PQ exception improvements
224DH-15389: Dictionary Columns need to have unbounded FillContexts
223DH-15348: Fix issue with forward merge for web
222Merge updates from 1.20230131.189
  • DH-15271: Test Automation: allow skip-dependencies mode
  • DH-14688: re-enable csharp with updated dockerfile / dotnet version
  • DH-15280: One click ranges cause illegal argument range exception
  • DH-15348: Allow admins to view script of query types they can't edit
  • DH-15294: Do not overwrite user configuration files when reinstalling deephaven
221DH-15383: Fix controller crash during community worker shutdown
220DH-15325: Add ACLs for DnD Index tables
219DH-15344: Update integration tests to use DIS claims
218DH-15155: Fix issue with console settings being undefined sometimes
217DH-15328: Add a simple-to-use Java DnD client.
216DH-15376: Connected to the web UI in a second tab results in losing authentication for the original tab
215DH-15373: Ensure running dnd tests skips java8
214DH-15165: Add initial set of dnd test scripts
213DH-15346: Fix extra JVM arg propagation to DnD workers, configure SNI workaround for DnD workers in k8s env
212DH-15321: Use Index tables for process info id extraction and error messaging
211DH-15355: Additional log entries for login and logout in WebApiServerImpl
210DH-15298: Add filter by partition to metadata tool
209DH-14787: Add release notes
208Merge updates from 1.20230131.188
  • DH-15314: Fix failing automation test for addManySchemas
  • DH-14167: Plots sometimes do not draw when they have ranges set with OneClick
  • DH-15202: ACL Editor Namespace/Table ComboBoxes are aware of additions and removals (swing)
  • DH-15252: Add instrumentation to Input Tables
  • DH-15318: Do not use swing-components to calculate max viewport in non-swing processes
  • DH-15309: Allow removal of "Help / Contact Support ..." via property (swing)
  • DH-15305: Avoid using RecomputeState.PROCESSING to determine viewport row staleness (swing)
  • DH-15310: Optimize allocations and copies for SortedRanges.insert when it is effectively an append
  • DH-15302: Add a stand-alone SBE Java in StandaloneJavaSbeClient.jar
  • DH-15333: Update java generated from forms to match IJ generated format
  • DH-15178: correct TDCP's handling of removed data - remove locations on subscribe, during rescan
  • DH-15026: correct TDCP's handling of removed data - remove locations on error
207DH-15325: Create Index tables for DbInternal Community tables
206DH-15312: Dedicated certs for controller and aclwriter processes in k8s deployments
205DH-15337: Improve logging in WebApiServer and GrpcAuthenticationClientManager
204DH-15334: Update java generated from forms to match IJ generated format
203DH-15324: Remove deadsnakes ppa from Dockerfile
202DH-15301: Fix error upon closing DnD Python client session manager
201DH-15250: Initial DnD C++ client: Auth client
200DH-15323: Fix NPE when controller disconnects gracefully from client
199Merge updates from 1.20230131.185
  • DH-15316: Fix silverheels VM deployment
  • DH-14742: May-June 2023 test case updates for qa
  • DH-11925: ofAlwaysUpdate not setting MCS Correctly
  • DH-15299: Improve SortedRanges.insert for append case
  • DH-15256: Update USNYSE 2025 calendar
  • DH-15306: GroupingBuilder should return empty grouping for empty input index
  • DH-14482: trim values in user and password config files
  • DH-15291: Remove parallelism bug in dh_install
  • DH-11758: Add installer tests for customer users + plugins
  • DH-15270: Use manually-recursive chown in some prepare_filesystem.sh calls
  • DH-5698: dhconfig support exporting single tables
  • DH-15219: Launcher 9.06 - correct error in prop file location
  • DH-15251: Remove unused logic from DeephavenInstallScript.groovy
198DH-15290: Make db part of the database module so it can be imported from scripts
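With DH-15290 (patch 198), a script can import the worker's db handle from the database module instead of relying on it being preset in the script scope. A minimal sketch follows, assuming the Core+ Python module path shown; the namespace and table names are placeholders.

```python
# Sketch: the module path is assumed per DH-15290; MarketUS/Trades are
# placeholder namespace and table names.
from deephaven_enterprise.database import db

live = db.live_table("MarketUS", "Trades")
hist = db.historical_table("MarketUS", "Trades").where("Date = `2023-05-11`")
```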
197DH-15239: Added a note for attaching container registry for AKS install
196DH-14888: Log operation user in DnD DbInternal tables
195DH-15296: delete cookie failed message in auth server log
194DH-15287: Disable writing ProcessMetrics by default
193DH-15076: Kafka ingester that worked under Jackson fails after upgrade to Vermilion
192DH-15311: ControllerConfig grpc serialization serialized tds config 3 times instead of tds, dis and tdcp
191DH-15091: Use dynamically generated services and certificates for workers in k8s
190DH-13169: Fix reactstrap not being usable in JS plugins
189DH-15164: Fix contains ignore filter in JS API
188DH-15168: Cache TDCP Query Filtering Decisions
187DH-15192: Adjust heap overhead parameters for Kubernetes
186DH-15254: Updates to README.md, Helm tar build script, and buildAllForK8s script for Helm deployments
185DH-15258: Fix potential NPE when using multiple ingesters in a Kafka in worker DIS
184DH-15235: Fix error with matplotlib receiving data in incorrect format
183DH-15180: Fix unbounded thread growth in GrpcAuthClientManager
182DH-15243: Include SQL In DnD Build
181DH-15248: Fixed bug resulting in FB build failure introduced in DH-14659
180Changelog and release note fixes.
179Merge updates from 1.20230131.182
  • Fix release note typo.
178DH-15246: Allow commas in ticket list for github PR title
177DH-15244: DnD Wheel Should use dhcVersion not Hardcoded Value
DH-15245: DndSession Needs Certificates Passed Through
176Merge updates from 1.20230131.181
  • DH-15215: Add DataCodeGenerator additional interfaces
  • Launcher 9.05
  • DH-15099: script change to allow Deephaven Launcher to exist in a path with spaces
  • DH-14600: accept new custom certs on an existing instance
  • DH-14496: better reporting when "new" PKCS12 file cannot be parsed by "old" java version
  • DH-15082: --insecure command line option for Deephaven Updater, to accept self-signed certificates
  • DH-15019: command line options to set instance and workspace roots
  • DH-15216: Add options button to show hidden context menu choices
  • DH-15219: add IrisConfigurationLauncher.connectionTimeoutMs property
  • DH-13649: Fix the etcd dispatcher user/config migration scripts.
  • DH-13651: Make readonly etcd keys usable by dbquerygrp.
175DH-14659: Fixed CSV Importer hangs on very small file
DH-15158: Fixed OOM error when doing a large CSV import in vermilion
174DH-15143: Add basic python time lib test for dnd to test automation
173DH-15229: Fix python installer test
172DH-15090: Use cert-manager for Deephaven services in Kubernetes
171DH-15229: Always supply defaults for DH_PYTHON_VERSION variable(s)
170DH-15227: Fix monitrc installation modifications for Ubuntu 22.04
169DH-14041: Fix mysql-connector file privileges
168DH-15146: SortedClockFilter does not handle empty tables
167DH-14461: Add support for Ubuntu 22.04; add DH_PYTHON_VERSION flag for installer
166DH-14824: Safe Mode should show status even if script code is unavailable
165DH-15217: Disable flaky grpc test case
164DH-15057: Add live, historical and catalog tables to resolvable flight tickets
163DH-15137: Unable to connect to Community worker in Safe Mode
162DH-15201: Expose staticUrl from Python Client Session pqinfo() for Envoy
161DH-15210: Integrated CUS does not digest plugin files that are sym links
160DH-15212: Make DEEPHAVEN_UPDATE_DATE build in CI w/ newer versions of git
159DH-15190: Fix internal installer bug for mac os
158DH-15081: Add basic time lib test for dnd to test automation
157Merge updates from 1.20230131.180
  • DH-15193: Make IRIS_VCS_VERSION build in CI w/ newer versions of git
  • DH-15123: Avoid hang when filtering from bottom of large table (swing)
  • DH-15191: Reduce max table display size (swing)
  • DH-14593: Fix duplicate unit test enum class names
156DH-15095: Prevent incorrect "Worker disconnected" messages on console reconnect (swing)
155DH-15156: Fix spotless failure in .154 version
154DH-15156: Audit truststore usage to verify empty and null checks for TrustStorePath
153DH-15187: Install DnD Python on K8s
152DH-15188: iris_db_user_mod needs truststore set on K8s
151DH-15182: Revert DH-14818: allow spaces in PQ names at command line
150Merge updates from 1.20230131.179
  • DH-15169: Fix bad quoting in internal deployer
  • DH-15096: Produce better log output for null socket getAddress result
149DH-14994: Wait for query to be running before fetching API
148DH-15163: Port exec_notebook and import functionality to DnD workers.
147DH-15183: Always Run buildDnd Gradle Task
146DH-14833: Correctly serialize FatalException
145DH-15147: Discard client-side subscription when heartbeats are lost to allow for clean resubscription.
144Merge updates from 1.20230131.178
  • DH-15170: Fix build-info-extractor build issues
143DH-15173: Update DnD to DHC 0.25.3
142DH-15089: Automatically build DnD when deploying internal machines
141DH-14897: Run dnd tests nightly against community latest
140DH-14749: Display community port in Query Summary
139DH-15154: Remove DHC worker port in worker information tooltip
138DH-15014: Remove the "Open Community IDE" button on the QM Summary tab
137DH-15142: Web fails to disable viewer queries without error
136Merge updates from 1.20230131.177
  • DH-15166: Update testcontainers dependency
  • DH-15131: Fix internal installer typo
  • DH-14948: Internal deployer learn deephaven install needs more sudo -u irisadmin
  • DH-15139: Unit test should just use assertSorted
  • DH-15139: Don't mark grouped partitions as sorted ever.
  • DH-14639: Generate SBOM with each build
135DH-15126: Display engine version in Code Studio info popup
134DH-15140: When workers are shut down they should gracefully shutdown the gRPC status stream
133DH-15152: dhctl Logging Integration Test
132DH-15102: Improve the metadata indexer tool with validation and list capabilities
131Merge updates from 1.20230131.175
  • DH-15149: Fix failing CompressedFileUtils Unit Tests
130Merge updates from 1.20230131.174
  • Backport DH-14821: Make Dnd use web’s npm executable
  • Merge updates from 1.20221001.204
  • DH-15085: Don't hold merged intraday partitions in WorkspaceData queries
  • DH-14949: Use Rocky8 in Jenkins
  • DH-15093: ConstructSnapshot Logging is Too Verbose
  • DH-15080: Potential Race in satisfied() lastCompletedStep Set
  • DH-15078: Backport DbArray toArray should use fillChunk (DH-13881)
  • DH-15062: writeTable with out of order grouping fails
  • DH-14999: Ensure PQs identify stability correctly.
  • DH-14951: Null Status fails to Write Test Information
  • DH-15128: Update package-lock.json for jupyter-grid
  • DH-15075: Fix failing test introduced as part of DH-15022
  • DH-15071: Fix Grouped AbstractColumnSource#match() breaking with empty input list.
  • DH-15022: Fixed java 8 compilation error introduced in previous version .199
  • DH-15022: Add support for .zst (Zstandard) file compression
  • DH-15039: instance and workspace roots not correctly read from prop files
  • DH-15124: Make prcheck jenkins job use jdk11
  • DH-13577: Add Release Notes for Web UI subplots support
129DH-15141: Fix query apply on restart option not appearing in Web
128DH-15046: Fix blue sharing dot in Query Monitor
127DH-15113: Python Client DnD: Errors when using invalid authentication are too verbose/not informative
126Spotless application.
125DH-15116: DnD workers do not configure DispatcherClient Trust Store
124DH-15105: Update to DHC 0.25.2
123DH-15098: Improve KafkaIngester performance by storing RowSetters in an array
122DH-15103: ACL Editor Fails to Launch with Empty Truststore Property
121DH-15084: Kafka ingestion errors hidden by NPE
120DH-15047: Return better feedback when response status is 500 for a user who is an acl-editor
119DH-15066: Permit Run-As for DnD workers
118DH-14705: enable routing import/export when existing routing file has errors
117Update Web UI to 0.41.2
  • DH-14657: Disconnect handling increase debounce timeout
  • DH-14972: Remove setSearch debounce in CommandHistoryViewportUpdater
  • DH-15032: Fix incorrect warning about updated shared state
116DH-15036: DnD tables and plots would not reconnect after restarting query
115DH-15079: Implement pq:// uri for DHE
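Patch 115's DH-15079 adds a pq:// URI scheme so a Core+ worker can resolve tables served by another persistent query. A sketch of likely usage through the Community URI resolver follows; the exact URI layout (query name, then the scope variable to fetch) is an assumption for illustration.

```python
# Sketch: resolve() is the Community URI entry point; the pq:// layout shown
# (persistent query name, then the variable to fetch) is assumed.
from deephaven.uri import resolve

remote_table = resolve("pq://MyPersistentQuery/scope/my_table")
```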
114Merge updates from 1.20230131.170
  • DH-15072: Stop building in jdk13
113DH-15077: Update Python README.md.
112DH-15077: DnD Python client type hints break on Python 3.8
111DH-15074: Fix typo in PerformanceTools init.py
110DH-14966: Update PPQ child panel error message
109DH-15068: Build DnD Python Client Wheel into distTar
108DH-15041: DnD Python Client (Raw Version)
107DH-14870: Resolve lack of DnD formula cache cleanup
106DH-15030: Fix DnD Object Column Regions not implementing gatherDictionaryValuesRowSet
105DH-15012: Remove unit GB from the Data Memory Ratio error message
104DH-15049: Fix Controller not noticing DnD workers dying after initialization completes.
103Merge updates from 1.20230131.169
  • DH-14974: Widen Byte and Short jpy CallableWrapper Returns to Integer
  • DH-12299: Added Release notes for updated readCsv features introduced in DH-12299
102DH-15035: DnD is too eager to read location data when snapshot backed.
101DH-14821: Make Dnd use web's npm executable
100DH-14868: Close worker on console session disconnect
099DH-14731: Fix null query status showing blank in query monitor
098DH-14978: Setting correct trust store on DbAclWriteClient
097DH-15009: Fix typo in Controller gRPC message
096DH-14987: DnD Authorization did not check superusers or supervisors
095DH-14998: Rename Enterprise Proto Files to be Python Friendly.
094DH-14981: Update DnD Tests to DHC 0.25.1
093DH-14890: Separate Enterprise/Community command histories
092DH-14981: Update DnD to DHC 0.25.1
091DH-14964: Support Barrage to Barrage authentication for DnD workers
090DH-14979: Fix gRPC logging integration test being flaky due to timing issues
089DH-14991: Pin versions of JS plugins, add Deephaven plotly express plugin
088DH-14988: Engine should default to the console settings when creating a persistent query from a code studio
087DH-14970: Fixed bug with User delete not removing runAs mapping
086DH-14963: Make testDnd run on java 11 and have a checkbox in feature test ui
085DH-14953: Default Engine selected for Persistent Queries does not match first worker kind provided by server
DH-14965: Default persistent queries do not have a worker kind
DH-14967: Add engine to query monitor summary panel
084Merge updates from 1.20230131.168
  • DH-14960: TestingAutomation needs to count released vs unreleased tests separately
  • DH-14916: Improve installer docs on upgrade
083DH-14844: Patch DnD AEL logging
082DH-14743: Remove System Acls Tab from Swing AclEditor UI
DH-14655: Remove all references to Unsupported SystemAcl API
081DH-14891: Cache DHC JSAPI in Code Studios
080DH-14928: Fix custom objects from queries not exporting type correctly
079Changelog update.
078Merge updates from 1.20230131.167
  • DH-14947: Custom Formatting Long Column Loses Precision
  • DH-14764: Make jdk8 jenkinsfile actually use jdk8
  • DH-14874: backport DH-11489 (IntradayLoggerFactory shouldn't write listener classes)
  • Deephaven Launcher 9.03
  • DH-14942: Deephaven Launcher uses corrected URL after creating a new instance
  • DH-14945: Correct installation java validation error
  • DH-14850: require instance in DeephavenUpdater.sh and DeephavenUpdater.bat
  • DH-14807: better feedback when launcher fails to start
  • DH-11466: add release note about script improvements
  • DH-13936: Fix broken jenkinsfile
  • DH-13936: Use the installer for integration tests
  • DH-12662: add upgrade support for server_java_version
  • DH-14841: Check Sharing permissions from New Tab Screen
077DH-13759: improve --status command line options processing
DH-14818: pass arguments with quoted spaces through to command_tool
DH-14817: status report table was limited to ten lines
076DH-14844: Add AuditEventLogger to DnD DatabaseImpl
075DH-14907: Make query script error reports less terrible
074DH-14922: Permit Python shared console sessions (swing)
073DH-14881: Web UI - cookies are unexpectedly expiring
DH-14828: Multiple auth servers are broken
072DH-14563: Load JS Plugins from workers on login
071DH-14852: Make Community Workers the Default for New Installations
070DH-14915: Prevent inaccurate "worker disconnected" exceptions after connecting to worker (swing)
069DH-14952: gRPC Integration Test Failed to Import
068DH-14941: Panels menu not showing in correct location
067Release notes update.
066DH-14827: Display the Engine type in the Console Status Bar tooltip
065DH-14931: WebSocket Message Size Too Low for Complicated Workspace Build Blessing
064DH-14926: Update DHC to 0.24.3
063DH-14924: disallow duplicate storage location in data routing file
062DH-14925: catch 'except:' errors at parse time
061DH-14880: Make controller aware of Shared-Console / Auto-Deleting queries
060Merge updates from 1.20230131.164
  • DH-13936: Fix broken jenkinsfile
  • DH-13936: Use the installer for integration tests
  • DH-12662: add upgrade support for server_java_version
  • DH-14841: Check Sharing permissions from New Tab Screen
059DH-14900: improve handling of failover groups in data routing
058DH-14911: Fix Enabled filter in Query Monitor
057DH-14860: Configuration Server Does not Properly Die when etcd is down on startup
056DH-14904: Fix controller dropping ProcessInfoId and worker name when workers fail.
055DH-14905: Pin Web Plugins in DnD Python requirements.txt
054DH-14919: Fix broken build after forward-merge
053Merge updates from 1.20230131.164
  • DH-14899: Do not write jenkins cache for PR Check jobs
  • DH-14018: Min/max values are ignored when doing a redraw plot
  • DH-14699: Fixed NPE on selecting Copy ProcessInfoId from the status bar context menu when worker is disconnected
  • DH-14873: Make TestPidFileUtil test more deterministic
  • DH-14841: Do not display user list when sharing a dashboard
052Fix Java 8 compilation issue.
051DH-14816: Fix DnD performance overview as-of time
050DH-14896: Make ILF Defer Compilation Directory Creation
DH-14696: DnD Python not installed gives an unclear error message
DH-14153: Label Deephaven Worker Containers with Users
DH-14902: PEL should capture K8s worker Stdout
DH-14903: Properly Set Workspace for DnD Workers
049DH-14861: Dispatcher Should not Send Back Unserializable Exceptions
048DH-14634: Provide helpful error on DnD startup when missing install
047DH-14898: Fix Query server ordering in configuration
046DH-14872: Fix typescript CI build
045DH-14879: Make DnD SNI Host Check and Community PQ Client Authority Configurable
044DH-14889: Fix etcd executable ownership
043DH-14738: Update Web UI to ^0.40.1
Fix export useTableUtils
042DH-14805: Add default ACLs for new DbInternal tables
041DH-14864: Made controller feedback to clients less opaque; fixed script language getting lost on shared consoles; fixed controller not serializing query state update presence.
040DH-14738: Utils supporting ACL Editor
039Merge updates from 1.20230131.163
  • DH-14797: Fix controller PQ/dispatcher failure deadlock
  • DH-14878: Disable flaky test - slow update call
038DH-14887: Cleanup unnecessary reference to caller in InteractiveConsoleSetupQuery.getControllerClient
037DH-14711: Correct cancelJob call outside of lock.
036DH-14706: data routing syntax improvements, validation improvements
035DH-14720: Include Ability to access Importer ConstantColumnValue in Custom Field Writers
034DH-14871: Update DHC to 0.24.2
033DH-14802: Upgrade didn't correctly change getdown.global
032DH-14863: Enable gRPC logs for etcd client in authentication server; add gRPC logging tests
031DH-14865: Intraday index loggers were not shared correctly across listener threads
030DH-14822: Fixed DnD workers not respecting PQ startup timeout setting
029Merge updates from 1.20230131.162
  • DH-14615: Update version of node/npm used by gradle plugin
  • DH-14842: Optionally Filter User and Group Lists for Web
  • DH-14708: Fix bug introduced earlier that failed to throw exception when pid file modification time was less than system uptime
  • DH-14708: Attempt to delete existing pid file when system uptime is less than file modification time
  • DH-14820: Prevent controller-connectivity hang in Swing telemetry
  • DH-14794: update IntelliJ code style
  • Fix Javadoc build break from merge.
028DH-14808: Errors from DHC flight client to standard log; fix gRPC logging
027DH-14855: Integrated Dashboard requires page-reload
026DH-14247: Avoid refresh cookie retry in auth client when server says not authenticated
025DH-14789: Fix NPE if DnD worker crashes during connection initiation crashing controller.
024DH-14417: Remove subscripted list[string] type specification in DnD python, not supported by 3.8
023DH-14845: prevent duplicate storage names in data routing file
022DH-14843: Turn remote Groovy in R on by default (see DH-14715).
021DH-14839: Make controller client subscribe RPC server side streaming only
020DH-14798: Additional fix for remote locations unable to handle unpartitioned tables
019DH-14830: More parametrization for services on gRPC and Envoy
018DH-14800: Cache DnD API instances and authenticated clients per engine/query
017Merge updates from 1.20230131.160
  • Fix Javadoc build break from merge.
016DH-14798: Fix DnD Database handling of splayed user tables
015Merge updates from 1.20230131.159
  • DH-14796: Set new tests enabled by default
  • DH-14670: Use testcase id in log output
  • DH-14178: Feb-Apr test case updates for QA
  • DH-14799: installer quote escaping
  • DH-14759: fix installer log file permissions
  • DH-14715: Do not return published Table in remote mode.
014Update Web UI to v0.39.0
  • DH-14787: Integrated DnD panels from Community PQs
  • DH-14788: Integrated DnD Console
  • DH-14657: Better disconnect handling
  • DH-14803: Link parsing in table cells to be more restrictive
013DH-14656: Fix DnD installation on clusters
012Spotless application.
011DH-14795: NPE while updating Envoy Snapshot
DH-12231: Correct Kubernetes Upgrade With new Branding
010DH-14362: Provide Database access to DHC autocomplete
009DH-14793: DnD Python table_names and namespaces need to return Python collection
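Patch 009's DH-14793 makes table_names and namespaces on the DnD Python Database return plain Python collections. A short sketch of the expected shape; the db handle is the Core+ Database object (import path assumed, as in the earlier sketch), and iterating the results directly is the point of the change.

```python
# Sketch: db is the Core+ Database handle (import path assumed); per DH-14793
# both calls return ordinary Python collections that can be iterated directly.
from deephaven_enterprise.database import db

for namespace in db.namespaces():
    print(namespace, db.table_names(namespace))
```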
008Merge updates from 1.20230131.158
  • DH-14595: Correct MergeData future construction race.
007DH-14752: Update Envoy to v1.24.7, add remote debug configs for k8s
006DH-12231: Updating Copyright Info to 2023, includes changes to automate copyright year update where applicable
005DH-14780: Fix Java 8 compat and spotless
004DH-14780: Handle DHC Barrage boolean serialization changes
003DH-14790: Web Node List Breaks Installer Tests
002Update DnD libSource
001Initial release creation

Detailed Release Candidate Version Log: Deephaven v1.20230201beta

Patch  Details
141Merge updates from 1.20230131.157
  • Release notes update.
140DH-14750: Fixed community worker port in PQSL and other minor logging issues
139DH-14748: Implement getNamespaces getTableNames and getCatalog for DnD
138Merge updates from 1.20230131.156
  • DH-14729: Fix shared dashboards from deleting links
137Merge updates from 1.20230131.155
  • Create Launcher 9.02
  • DH-14609: IrisConfigurationUpdater improvements - better logging, script to run it
  • DH-14769: create new instance by giving a url on the command line
  • DH-14621: IrisConfigurationUpdater will fail on headless systems if DISPLAY environment variable is set
  • DH-14782: sphinx (documentation generator) needs to allow versions <= 7.0.0 for python3.6
  • DH-14765: Do not allow sphinx to use urllib3 v2+
  • DH-14773: Make -PskipLauncher=false work on all branches
  • Create Launcher 9.01
  • DH-14766: Updated copyright text and new branding images in Launcher and Installer; also added support for overlaying copyright text on splash screens
  • DH-14771: Make artifactExists use DH_JAVA_VERSION instead of JDK_VERSION
  • DH-14768: Regenerate datagen for LogEntry interface changes
  • DH-14198: LogEntry interface changes
  • DH-14643: Remove all references to gitlab from jenkinsfiles
  • DH-13275: Fix double-persistence in fishlib PersistentHashtable
  • DH-13149: Allow blank password file property
  • DH-14718: Correct file computation error in StreamLoggerImpl
  • DH-14715: Convert Enterprise R to use Remote Groovy
  • DH-14682: R integration needs a close method
  • Update Web UI to v0.13.12
  • DH-14732: Arrow keys not selecting next autocomplete item in console
  • DH-14760: Make DnD not-break when using unmerged, jenkins-built fishlib versions
  • DH-14734: spotless fixups
136DH-13826: Update release note for upgrades that do not use the installer
135DH-14755: Support retries and reauth for gRPC controller clients RPCs
134DH-14783: Update to DHC 0.24.1
133DH-14781: Fix possible collision between web api port and CUS port
132DH-14711: Dispatcher holds lock during etcd operations. Hangs dispatcher on shutdown
131DH-14776: CUS Broken on K8s
130DH-14751: Configure iris_db_user_mod to use envoy for aclwriter access
129DH-14767: Web Api Service- log http GET requests
128Revert DH-14767: Web Api Service- log http GET requests
127DH-14770: ui.install_error should be user configurable
126DH-14728: Client-side changes for Envoy DnD routing via headers
125DH-14767: Web Api Service- log http GET requests
124DH-13122: Internal Partition indexing for large internal log tables
123DH-13826: Remove client update service as stand alone process
122DH-14570: Initial refactoring to support DnD query workers in a k8s environment
121DH-14011: Create Enterprise analysis utility for Community performance logs
120DH-14412: Converted PersistentQueryController to use gRPC instead of COMM
119DH-14719: Fix DraftManager config init when controller restarts
118DH-14735: Missing release note.
117DH-14645: make storage optional in DIS config
DH-14069: toString improvements, correct grpc implementation error
116DH-14630: Code gen script for ACL Editor API client
115DH-14735: Add Envoy Route for ACL Write Server Rest Endpoint
114DH-14724: Disable generatePyDoc in non-java8 builds
113Spotless application.
112Merge updates from 1.20230131.149
  • DH-13935: Using installer for deploy branch
  • DH-13935: Make treasureplus use frozen jenkins branch release/20200928
111Merge updates from 1.20230131.148
  • DH-14713: Auth client should retry failed attempts to refresh cookie when there is still time
110DH-14577: Improve JS plugins packaging
109Spotless application.
108DH-14710: Shadow change from .091beta broke Envoy.
107Merge updates from 1.20230131.147
  • DH-14667: Fix file explorer selection when you have thousands of files
  • DH-12700: Update release notes to reflect DH-14195 creating user on upgrade.
  • DH-13128: Release notes for removing old plotting.
106DH-14571: JS API for DHC PQ/Console connections
105DH-14069: add explicit claims in data routing
104Merge updates from 1.20230131.144
  • Edit Release Notes and Changelogs
  • DH-14701: Preview checking should not coalesce tables
  • DH-14695: R integration is slow converting DateTimes and Booleans
  • DH-14694: Test Automation Dnd worker
103Merge updates from 1.20230131.141
  • Update Web UI to v0.31.5
  • DH-14678: Fix OneClick links not filtering plots
  • DH-14608: Display workerName and processInfoId in the console status bar
  • Edits to Jackson release notes
102DH-14581: ACL panel stubs + React Spectrum
101Generating edited Release Notes and Log entries
100DH-14488: Fix serialization exception in Acl Editors Permission Analyzer
099DH-14685: DnD workers show no error logging in the console, and no python stdout
098Update Web UI to ^0.37.2 / v0.37.3
  • DH-14630: Web UI: Display User List
097DH-14656: Installer should copy and unpack DnD tars
096DH-14675: Eliminate Packer Jar
095DH-14475: Improve infra around reliable DnD Python deps
094DH-14693: GUI Uses Unrelocated Guava, Accidentally Pulls in Other Dependencies, Update Outdated Dependencies
093Merge updates from 1.20230131.139
  • DH-14687: Disable CSharp builds until we can fix upstream apt repository issues (Jackson+ changes)
  • DH-14691: Allow gRPC auth service to be configured with a SSL authority override other than the default
  • DH-14624: Issues in configuration for envoy after dispatcher client changes
092Merge updates from 1.20230131.136
  • DH-14687: Disable CSharp builds until we can fix upstream apt repository issues
  • DH-14683: ClockFilter dependency bug in Replay queries
  • DH-14669: Fix error handling in LogAggregatorService
  • DH-14681: Make worker dispatcher-response timeout configurable
  • DH-14372: Add ability for tailer to specify a time to look back for binary logs on startup
  • DH-13385: Update upgrade instructions after DH-14189.
  • DH-14668: Fix rare dispatcher worker startup race condition
  • DH-14673: Fix dispatcher assignment permit loss
  • DH-10348: Backport ii Previous Use Fix from Bard.065beta
  • DH-14650: Update the Javadoc in SimpleMaxValue and SimpleMaxValueCARQ
  • DH-14654: Update persistentQueryStatusMonitor to use ProcessInfoId
  • DH-14225: Don't download MySql connector when etcd ACLs is enabled
  • DH-14626: Time Filter Mode For SyncTableFilter
091DH-14680: Update snakeyaml to 2.0 to address critical CVE
DH-14677: Shadow Packages are not Properly Relocating All Dependencies
090DH-14571: Envoy DnD worker routing via headers
089DH-14679: RESTeasy shadow should not include jetty, misses most relocations
088DH-14144: Fix cause for failed generatePyDoc in ACL REST API changes
087Convert Folder querylist, plugins, main (chunk 1), menu-actions to TypeScript
  • DH-13166: Convert folder main to TypeScript (chunk 1)
  • DH-13167: Deleted folder menu-actions and refactored all usages to community version
  • DH-13169: Convert folder plugins to TypeScript
  • DH-13170: Convert folder querylist to TypeScript
086Merge updates from 1.20230131.131
  • DH-14612: Cleanup gone clients from gRPC authentication server state
  • DH-14662: Fixed an issue remapping shadowed codecs when loading XML Schemas in DnD
  • DH-14627: Web eslint tests are not running in GitHub Actions checks
  • DH-14647: Fixed a bug where filtering a reinterpreted object source as dictionary produces incorrect results
  • DH-14147: Add integration tests for dynamic data routing
  • DH-14465: Eliminate extraneous test dependencies
  • DH-14577: Update DnD build and code to include JS plugins
085Update Web UI to ^0.36.0
  • DH-14581: Web UI: Manage Users tab with Users subtab
084DH-14665: Upload rc builds to artifactory
083DH-14638: Add Kafka Integration Test
082DH-14144: Convert DbAclWriteServer to REST endpoint
081DH-14653: Fix Linting in PermissionUtils
080Merge updates from 1.20230131.124
  • DH-14641: Index regression found in DHC 2517
  • DH-14616: Do not upload tar file with rsync until after root_prepare is run
  • DH-10286: Use a 1MB buffer size for Parquet compression and decompression
  • DH-14264: Make DHFileDigester include plugins/*/global and plugins/*/client
  • DH-14619: Use none instead of null for KubernetesControl Field in Old Log Formats
079DH-13171: Convert folder web/client-ui/src/querymonitor to TypeScript
078Merge updates from 1.20230131.121
  • DH-14618: Fixed Bucketed UpdateBy not resetting shared contexts
  • DH-14617: Correct null handling error in LAS stream combination code
  • DH-12141: Fix eslint error from previous Pandas fix
  • DH-14586: Change Unit Test Parameters to Trim Jenkins Time
  • DH-12141: Fix Pandas dataframes not being opened in Web UI
  • DH-7166: Correct NPE in about dialog
  • DH-14502: Update Deephaven Launcher version and copyright
  • DH-14515: Make IrisConfigurationUpdater use system and instance trust stores like the Launcher
  • DH-14600: Deephaven Launcher allows trust override on launch, in addition to add instance
  • DH-14585: Reduce concurrent test startup
  • DH-14278: Combining skipLines and skipFooterLines in a CSV import results in UOE
  • DH-14543: Use SerialTest category for TestParquetTools
  • DH-14003: Better support for custom worker templates on k8s clusters
  • DH-14603: NPE in authentication server error response for AlreadyAuthenticatedException
  • Update Web UI to v0.31.4
  • DH-14596: Fix grid data being formatted incorrectly as 0.00
  • DH-14587: Add property for controlling output directory of ILF-generated test sources
  • Update Web UI to v0.31.3
  • DH-14436: Handling tables with no columns
  • DH-14439: Fix QueryMonitor breaking on null in default search filter
  • DH-12163: Column grouping sidebar test failure fixes
077DH-14010: Ingest Community performance logs into Enterprise
076DH-14581: "New ACL Editor" context menu item
075DH-14589: Add ACLs for Generic objects for DnD workers
074DH-14592: Update Barrage and DnD dependency to DHC 0.23.0
073DH-14581: Converted props to .ts and split out container component
072DH-14581: Minimal conversion of modules to .ts using @ts-nocheck
071DH-14564: Add python bindings for DnD Edge ACLs
070Merge updates from 1.20230131.112
  • DH-14392: Use current values for satisfied swap listener instantiation.
069DH-14529: Reallocation should respect heap/direct buffers.
068DH-14443, DH-14221: Add RecvTime option to Kafka ingesters, add parse-once builder-options to JSON/Kafka
067Merge updates from 1.20230131.111
  • DH-14560: Make internal installer use higher limits on all VMs
  • DH-14037: Prevent installer from adding monit processes to standalone etcd nodes
  • DH-14558: Fix installer bug where fully-generated cnf + etcd on non-infra nodes doesn't set up monit
  • DH-14572: Writing Parquet with empty strings can lead to BufferOverflow
  • DH-14565: Issues with auth-server failover
  • DH-14518: Add missing auth-server files to config_packager.sh
  • DH-14531: Fix backup issues in config_packager.sh
  • DH-14557: Remove duplicate truststore files from config_packager.sh
  • DH-14522: Fix generate_loggers script to use correct java generation directory for compilation
  • DH-14554: Add Javadoc about Listener Dependencies
066Update Web UI to v0.33.0
  • DH-14741: UI Support column header groups
  • DH-14740: Fix QueryMonitor breaking on "null" in default search filter
  • DH-14739: Table with No Columns Produces Error in Web
065DH-14576: Add ACL Tables to WebClientData
064DH-14286: Refactor Dispatcher Worker Keepalive Into Standalone Classes
063DH-14417: Implement Edge ACL support for DnD
062Merge updates from 1.20230131.106
  • DH-14469: Fix --import regression introduced with controller_tool --status option
  • DH-14364: Fix error reporting in Web UI query/dashboard import
  • DH-14550: Fix arguments in config_import script
  • DH-14550: Support SAML in Kubernetes deployments
061DH-13165: Convert folder Login to TypeScript
060DH-14546: Reenable TestIntradayLoggerFactory.testGeneration
059Merge updates from 1.20230131.102
  • DH-14548: Fix typo from forward merge of DH-12630
  • DH-14543: Fix TestParquetTools OOME
058Merge updates from 1.20230131.101
  • DH-12630: Add vmUp/vmDown gradle tasks/scripts for using installer
057DH-14532: Support getNonce and subsequent challengeResponse calls from auth clients landing on different servers
056Merge updates from 1.20230131.100
  • DH-13694: Fix authentication from IntradayLoggerBuilder unit test forward merge
055Merge updates from 1.20230131.099
  • DH-13694: Add Option for Suffix to IntradayLoggerBuilder
  • DH-14524: git ScriptRepository Log can be misleading
054DH-14270: ScriptLoaderState does not make it to PQSL
053DH-14533: ApproximatePercentile Should Expose Min and Max
052DH-14446: Display ProcessInfoId in console session details
051DH-14535: Size requests should not block Comm thread, Optimistic Size Mode
050Merge updates from 1.20230131.098
  • DH-14541: Authentication by delegate token fails in multi-auth server deployments
  • DH-14516: Multiple auth servers don't work
049DH-14529: Binary Rows should not each have a map of name to Index
048DH-14526: Allow WindowCheck and TailInitializationFilter to operate directly on longs
047DH-14530: DataBufferConfiguration startup logging
046DH-14433: TokenBucketThrottle has high error rate for single items
045DH-14525: C++ BinaryStoreWriter Produces Corrupt Application Version Records
044Merge updates from 1.20230131.096
  • DH-14510: Allow ctrl-v paste to Excel
  • DH-14509: Allow an authentication client to retry challengeResponse
  • DH-14514: Fix NPE in TDCP abnormal shutdown cases
  • DH-14470: Fixes ugly error message on login failure
043Merge updates from 1.20230131.092
  • DH-14506: Fix javadoc in AbstractBulkValuesWriter that breaks java 8 build
042Merge updates from 1.20230131.091
  • DH-14495: DispatcherClient should send state changes immediately, not wait a full refresh cycle
  • DH-14474: PR checks shouldn't run the whole pipeline
  • DH-14467: Fix UpdateBy using raw group indices
  • DH-14067: Fix to include shutdown hooks for inWorker methods
  • DH-14249: Concurrent modification exception from workspace
  • DH-14272: Fix integer overflow while writing to Parquet
  • DH-14371: Error reading previous values from ungrouped static data
  • DH-14340: BinaryLogFileManager needs JavaDoc
  • Introduce new Deephaven Launcher 9.0
  • DH-11578: Launcher uses system trust stores on Windows and Mac
  • DH-7166: add Launcher version to IrisConsole help/about
  • DH-14267: expand instance log file location in Launcher dialog
  • DH-11519: save selected JDK when it is selected during launch, correct representation so it works correctly on Windows
  • DH-14424: allow instance certificates to be used even when there are trust issues
  • Use instance truststore to update the instance
  • DH-14460: Allow disabling xDS envoy routes for COMM/swing via property
  • DH-14479: Silverheels release notes updates
  • DH-14493: Increase worker startup timeout
041DH-14492: Update DnD and Barrage to DHC 0.22.1
040Merge updates from 1.20230131.088
  • DH-14254: NullWithGroups PermissionFilterProvider returns full access properly
  • DH-14427: Web api server java proc hang on first login
  • DH-14410: Unit test fix for .126
  • DH-14410: Fixed ParameterizedQueryClientImpl#fetchResult() not associating deflated viewport with root table causing a resource leak
  • DH-14288: WorkerTtlCheckJob uses wrong parameter, missing lock
  • DH-14416: Fix formula cache issues associated with simultaneous DnD workers
  • DH-14425: Fix DnD schema shadow class loading issues
  • DH-14408: Enable Swing remote-telemetry by default
  • DH-13317: Updates to DateTime selection widgets (swing)
  • DH-14457: Exclude Barrage and DHC java client from DnD EnterpriseShadow
  • DH-14449: Document docker image builds for various platform architectures
  • DH-13759: minor fix for test objections on new controller_tool status option
039DH-14013: Fix to date-based workspace snapshot threshold logic
038DH-14232: Context menu for deleting a dashboard
037DH-13159: Convert Folder web/client-ui/src/settings to TypeScript
036DH-14426: Add WorkerKind and EngineVersion to the config table
035DH-14418: Update to DHC 0.22.0 release.
034Spotless application.
033DH-13157: Convert folder web/client-ui/src/redux to TypeScript.
032DH-14423: Add a way to filter a Kafka stream
031Merge updates from 1.20230131.079
  • DH-14279: Add ability to map additional persistent volumes to k8s-based workers
  • DH-14442: Make toplevel spotlessApply task invoke spotlessApply on Dnd also
  • Spotless application for DhcInDhe.
  • DH-14437: Prevent NPE when nulling table from query scope
  • Update Web UI to v0.31.1
  • DH-13577: Fix ordering of subplots
  • DH-14422: Improve DnD worker keepalive implementation
  • Fix failing test.
  • Spotless application for compilation fix.
  • Compilation fix.
030Workspace Data Key Frames.
029Spotless.
028Merge updates from 1.20230131.072
  • Spotless application for compilation fix.
  • Compilation fix.
027Merge updates from 1.20230131.070
  • Spotless application from 067.
  • Spotless application to Telemetry.
  • DH-14409: Backported Parquet compression improvements.
  • DH-14407: Disable Swing remote-telemetry by default
  • DH-14187: Add Swing telemetry
  • DH-14196: DnD Worker Keepalive
  • DH-14285: Improve return value, error handling, and heap size control
  • DH-14376: Upgrade to JUnit5 and add k8s worker test
  • DH-14387: Fix race between DnD worker completion and dispatcher cleanup
  • DH-14316: Bumping dhc csv library to pick up a deadlock fix
  • DH-14199: Web does not send Param Query Apply as a concurrent query
  • DH-14173: Error when auto ranging plots with a one click (OneClick) range
  • DH-14203: Support worker scope plugin dependencies
  • DH-14291: Parallelize Parquet merge across partitions
  • Merge compilation fix.
  • DH-14370: Bump dhcVersion to 0.21.1
  • DH-14373: Add k8s testing util to kube-shadow jar
  • DH-14285: Remove Java 11 usage
  • DH-14285: Create a basic DnD worker integration test tool
  • DH-13988: Fix variable changes being reported after running command
  • DH-13722: Fix some import dashboard functionality
  • DH-14307: Add missing constants for Seek Row functionality
026Revert incorrectly performed merge.
025Incorrectly performed merge.
024DH-14191: Support DnD workers as Persistent Queries
023DH-14386: JS API should expose DHC worker shutdown
022Revert DH-13165: Convert folder Login to TypeScript
021DH-14377: Update dependent libraries to newer versions in Vermilion
020DH-14343: Fix shadow dependencies for DnD/DHC
019DH-14191: Extract PersistentQueryController into smaller parts
018DH-13165: Convert folder Login to TypeScript
017Merge updates from 1.20230131.055
  • DH-14317: Bug Fix to resolve static endpoints in new format using the correct tag
  • DH-14305: Fix controller client being unable to reauthenticate.
  • DH-14262: Mac desktop icons are missing in silverheels
  • DH-12664: Fix Dictionary column sources incorrectly matching nulls for parquet
  • DH-12664: Fix Dictionary column sources incorrectly matching nulls
  • DH-14195: Remove obsolete upgrade instructions and scripts
  • DH-12702: Handle unserializable exception when fetching nonexistent barrage tables
  • DH-14311: WorkerKind enabled property should have name in middle
  • DH-14303: Fix padding on query monitor buttons
016Merge updates from 1.20230131.046
  • DH-14294: Make DnD Shadow Version Consistent with Parent Project
  • DH-14040: Removing an unwanted import from playground class
015Merge updates from 1.20230131.044
  • DH-14210, DH-14227: Enable github PR check workflows on treasureplus.
  • DH-14222: Tailer failing on DBDateTimeOverflowException when no timePrecision set
  • DH-14040: Add dhconfig support for service registry
  • DH-13787: Subtract the pending row count when displaying row count
  • Update Web UI to v0.30.1
  • DH-14240: hasHeaders false should hide header bar
  • DH-14237: Down arrow in console not returning to blank field
  • DH-14284: Bug fixed in dhctl intraday options include and exclude
  • DH-14261: Fix etcd_prop_file logging for gRPC showing up on stdout when running the tool
  • DH-14269: Back-port PQ Draft saving fix DH-13770 from Jackson
  • DH-14277: CSV (and other?) builders are missing skip footer lines option
  • DH-12241: IrisConsole fails to lock the workspace in newer JREs
  • DH-14280: Build and Publish .tar.gz of Dockerfiles and Helm Chart
  • DH-14195: Create Missing etcd users on upgrade from Jackson to Silverheels
  • DH-14180: Improve error reporting around DnD Java incompatibility
  • DH-14286: Fix AdminViewerList in ShareModal and QM Permissions tab
  • DH-14181: Relocate fastdoubleparser in EnterpriseShadowed jar
014DH-14227: Correct comparison versions for changelog check
013DH-12538: When importing PQ definitions from controller tool, empty ExtraJvmArguments section causes '0' exception
012Merge updates from 1.20230131.033
  • DH-14073: Fix dhconfig logging configuration making some log lines disappear
  • DH-14250: change password dialog must honor envoy ports
  • DH-13391: Update DnD to DHC 0.21.0
  • DH-14218: Failed Worker Starts Do Not Send Error to Dispatcher Client
  • DH-13252: Fix exposed internal partition column when ACLs precede application
  • DH-14228: securityContext runAsGroup prohibits Secondary Groups on EKS
  • DH-14005: Simplify kubernetes install by eliminating bootstrap package
011Merge updates from 1.20230131.027
  • Moved test files to new location after merge
010DH-14227: Enforce Changelog Entries for Pull Requests (Return Exit Code)
009Merge updates from 1.20230131.026
  • DH-14241: StringContainsFilter with any is Unacceptably Slow
  • DH-13874: Fix NPE on failed auth in JDBC tests
  • DH-13824: Dec 2022 test case updates for QA
  • DH-14179: Fix TableLoggerUtil.logTable javadoc
  • DH-14046: SelectableDataSet of min/max axis fails to use the values correctly.
  • DH-14071: Fix command line tests for java11
  • DH-14145: Ensure proper snapshot of InputTable rows
  • DH-14146: Fixed bug that introduced intradayType for strings when data type is present
  • DH-14146: Fixed Schema Editor to allow renaming a column for LoggerListener
  • DH-14199: Parameterized Queries now use the Shared lock instead of exclusive. Has a new option to use no lock.
  • DH-14197: Fix controller logging issue
  • DH-14248: Fixed RegionedColumnSourceObjectWithDictionary improperly handling 0 sized symbol offset files
  • DH-14050: Fix infinite loop on login
  • DH-14234: Fix drafts in Web query monitor
  • DH-14133: Fix failing unit tests
  • DH-14133: Pass the worker kind from the web UI to the dispatcher
008DH-14209: Add a speed factor to Replay Queries
007DH-13507: Add parallel-processing of kafka messages, add array unpacking for JSON/Kafka
006DH-14227: Enforce Changelog Entries for Pull Requests
005Merge updates from 1.20230131.020
  • DH-14133: Pass the worker kind from the web UI to the dispatcher
  • DH-14190: DND auth support in Web UI
  • DH-14216: Image Pull Secrets Missing from Hooks and Management Shell
  • DH-14152: Fix tree table NPE from null groupedColumns
  • DH-14206: DbAclProvider should not extend DbAclWriter
  • DH-14194: Kubernetes deployment with envoy fails with log_info command not found
  • DH-14033: Daily backup script
  • DH-14136: Allow null Kafka record values
  • DH-14189: Migrate passphrase files and create tdcp key
  • DH-14185: Fix auth server logging exceptions
  • DH-9482: Safe mode
  • DH-13759: Controller Tool "Status" Option
  • DH-14184: Improve Logging Under Some Error Conditions
  • DH-12163: UI Support column header groups
  • DH-12182: Column grouping layout hints
004DH-13161: Convert Folder web/client-ui/src/client to TypeScript
003DH-14210: Add GitHub action to verify PR format
002Merge updates from 1.20230131.007
  • Spotless application.
  • DH-14047: Simplify DnD logging
  • DH-14182: Fix DnD Build
  • DH-14111: Clean up seek row JS API
  • DH-14088: Fix broken PQ Settings modal in PQ Editor
  • DH-14112: Fix custom-user upgrade
001Initial release creation from 1.20230131

Option to default all user tables to Parquet

Set the configuration property db.LegacyDirectUserTableStorageFormat=Parquet to default all direct user table operations, such as db.addTable, to the Parquet storage format. If the property is not set, the default is DeephavenV1.
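
For example, in your property file (a minimal sketch; iris-environment.prop is the typical location for cluster-wide properties):

# New direct user tables created via db.addTable default to Parquet
db.LegacyDirectUserTableStorageFormat=Parquet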

Deephaven processes log their heap usage

The db_dis, web_api_service, log_aggregator_service, iris_controller, db_tdcp, and configuration_server processes now periodically log their heap usage.

PersistentQueryController.log.current:[2024-05-10T15:00:32.365219-0400] - INFO - Jvm Heap: 3,972,537,856 Free / 4,291,624,960 Total (4,291,624,960 Max)
PersistentQueryController.log.current:[2024-05-10T15:01:32.365404-0400] - INFO - Jvm Heap: 3,972,310,192 Free / 4,291,624,960 Total (4,291,624,960 Max)

The logging interval can be configured using the property RuntimeMemory.logIntervalMillis. The default is one minute.
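
For example, to log heap usage every five minutes instead of every minute:

# Interval in milliseconds: 5 * 60 * 1000
RuntimeMemory.logIntervalMillis=300000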

Configurable gRPC Retries

The configuration service now supports using a gRPC service configuration file to configure retries, and one is provided by default for the system.

{
  "methodConfig": [
    {
      "name": [
          {
              "service": "io.deephaven.proto.config.grpc.ConfigApi"
          },
          {
              "service": "io.deephaven.proto.registry.grpc.RegistryApi"
          },
          {
              "service": "io.deephaven.proto.routing.grpc.RoutingApi"
          },
          {
              "service": "io.deephaven.proto.schema.grpc.SchemaApi"
          },
          {
              "service": "io.deephaven.proto.processregistry.grpc.ProcessRegistryApi"
          },
          {
              "service": "io.deephaven.proto.unified.grpc.UnifiedApi"
          }
      ],

      "retryPolicy": {
        "maxAttempts": 60,
        "initialBackoff": "0.5s",
        "maxBackoff": "2s",
        "backoffMultiplier": 2,
        "retryableStatusCodes": [
          "UNAVAILABLE"
        ]
      },

      "waitForReady": true,
      "timeout": "120s"
    }
  ]
}

methodConfig has one or more entries. Each entry has a name section with one or more service/method sections that filter whether the retryPolicy section applies.

If the method is empty or not present, then it applies to all methods of the service. If service is empty, then method must be empty, and this is the default policy.

The retryPolicy section defines how a failing gRPC call is retried. In this example, gRPC will retry for just over one minute while the status code is UNAVAILABLE (e.g., the service is down). Note that this applies only if the server is up but the individual RPCs are failed as UNAVAILABLE by the server itself. If the server is down, the status returned is also UNAVAILABLE, but the retryPolicy defined here for the method does not apply; gRPC manages reconnection retries for a channel separately and independently, as described in https://github.com/grpc/grpc/blob/master/doc/connection-backoff.md

There is no way to configure the parameters for reconnection; see https://github.com/grpc/grpc-java/issues/9353

If the service config file specifies waitForReady, then an RPC executed while the channel is not ready (server is down) will not fail right away, but will wait for the channel to become connected. Combined with a timeout definition, this makes the RPC call hold on for as long as the timeout allows, giving the reconnection policy a chance to get the channel to ready.

For Deephaven processes, the service config can be customized by (a) copying configuration_service_config.json to /etc/sysconfig/illumon.d/resources and modifying it there, or (b) renaming the copy and setting the property configuration.server.service.config.json to the new name.

Note that the property needs to be set as a launching JVM argument, because it is used in the gRPC connection that fetches the initial properties.
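
For example, a process could be launched with a JVM argument like the following (the file name is illustrative, assuming the property names the renamed resource file):

-Dconfiguration.server.service.config.json=my_service_config.json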

Note: The relevant service names are:

io.deephaven.proto.routing.grpc.RoutingApi
io.deephaven.proto.config.grpc.ConfigApi
io.deephaven.proto.registry.grpc.RegistryApi
io.deephaven.proto.schema.grpc.SchemaApi
io.deephaven.proto.unified.grpc.UnifiedApi

Update jgit SshSessionFactory to a more modern/supported version

For our git integration, we have been using the org.eclipse.jgit package. GitHub discontinued support for SHA-1 RSA SSH keys, but jgit's SSH implementation (com.jcraft:jsch) does not support rsa-sha2 signatures and will not be updated. To enable stronger SSH keys and provide GitHub compatibility, we have configured jgit to use an external SSH executable by setting the GIT_SSH environment variable. The /usr/bin/ssh executable must be present for Git updates.

Automatically Provisioned Python venv Will Only Use Binary Dependencies

All pip installs performed as part of the automatic upgrade of Python virtual environments will now pass the --only-binary=:all: flag, which will prevent pip from ever attempting to build dependencies on a customer machine.

As part of this change, we automatically upgrade pip and setuptools in all virtual environments, and have upgraded a number of dependencies for which pip refused to use prebuilt binaries:

For all virtual environments:
dill==0.3.1.1 is now dill==0.3.3
wrapt==1.11.2 is now wrapt==1.13.2

For jupyter virtual environments:
backcall==0.1.0 is now backcall==0.2.0
tornado==6.0.3 is now tornado==6.1

Switch to NFS v4 for Kubernetes RWX Persistent Volumes

NFS v3 Persistent Volume connections do not support locking. This manifests most obviously when attempting to work with user tables in Deephaven on Kubernetes. By default, user table activities will wait indefinitely to obtain a lock to read or write data. This can be bypassed by setting -DOnDiskDatabase.useTableLockFile=false; this work-around was provided by DH-15640.

This change (DH-15830) switches Deephaven Kubernetes RWX Persistent Volume definitions to use NFS v4 instead, which includes lock management as part of the NFS protocol itself. In order for this change to be made, the NFS server must be reconfigured to export the RWX paths relative to a shared root path (fsid=0), but the existing PVs must use the same path to connect, since PV paths are immutable.

To reconfigure the NFS server, manually run the upgrade-nfs-minimal.sh script in the NFS server's Pod. It is important to set the environment variable SETUP_NFS_EXPORTS to y before running the script.

  • To manually run the script against an NFS Pod:
    • Run kubectl get pods to get the name of your NFS server Pod and confirm that it is running.

    • Copy the setup script to the NFS pod by running this command, using your specific NFS pod name:

      # Run 'kubectl get pods' to find your specific nfs-server pod name and use that as the copy target host in this command.
      kubectl cp setupTools/upgrade-nfs-minimal.sh <nfs-server-name>:/upgrade-nfs-minimal.sh
      
    • Run this command to execute that script, once again substituting the name of your NFS Pod:

      kubectl exec <nfs-server-name> -- bash -c "export SETUP_NFS_EXPORTS=y && chmod 755 /upgrade-nfs-minimal.sh && /upgrade-nfs-minimal.sh"
      

The upgrade script:

  • replaces /etc/exports, and backs up the original file to /etc/exports_<epoch_timestamp>. The new file will have only one entry, which exports the /exports directory with fsid=0.
  • adds an exports sub-directory under /exports, and moves the dhsystem directory there. This is so clients will still find their NFS paths under /exports/dhsystem when connecting to the fsid=0 "root".

The existing PVs' spec sections are updated with:

mountOptions:
    - hard
    - nfsvers=4.1

After upgrading to a version of Deephaven that includes this change (DH-15830), you should remove the -DOnDiskDatabase.useTableLockFile=false work-around, so normal file locking behavior can be used when working with user tables.

Requiring ACLs on all exported objects

When exporting objects from a Persistent Query, there are now two modes of operation controlled by the property PersistentQuery.openSharingDefault.

In either mode, when an ACL is applied to any object (e.g., tables or plots) within the query, objects without an ACL are visible only to the query owner and admins (owners and admins never have ACLs applied).

When a viewer connects:

  • If PersistentQuery.openSharingDefault is set to true, persistent queries that are shared without specifying table ACLs allow all objects to be exported to viewers of the query without any additional filters supplied. This is the existing Deephaven behavior that makes it simple to share PQ work product with others.
  • If PersistentQuery.openSharingDefault is set to false, persistent queries that are shared without specifying table ACLs do not permit objects without an ACL applied to be exported to viewers. The owner of the persistent query must supply ACLs for each object that is to be exported.

Setting this property to false makes it less convenient to share queries, but reduces the risk of accidentally sharing data that the query writer did not intend. To enable this new behavior, you should update your iris-environment.prop property file.
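
For example:

# Objects without an explicit ACL are no longer exported to query viewers
PersistentQuery.openSharingDefault=false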

Tailer configuration changes to isolate user actions

The tailer allocates resources for each connection to a Data Import Server for each destination (namespace, table name, internal partition, and column partition). System table characteristics are predictable and fairly consistent, and can be used to configure the tailer with appropriate memory.

User tables are controlled by users of the system, so their characteristics are subject to unpredictable variation. It is possible for a user to cause the tailer to consume large amounts of resources, which can impact System data processing or crash the process.

This change adds more properties for configuration, and adds constraints on User table processing separate from System tables.

User table isolation

Resources for User table locations are taken from a new resource pool. The buffers are smaller by default, and the pool has a constrained size. This puts an upper limit on memory consumption when users flood the system with changed locations, which can happen with closeAndDeleteCentral or when backfilling data. The resources for this pool are pre-allocated at startup. The pool size should be large enough to handle expected concurrent user table writes.

Property | Default | Description
DataContent.userPoolCapacity | 128 | The maximum number of user table locations that will be processed concurrently. If more locations are created at the same time, the processing will be serialized.
DataContent.producerBufferSize.user | 256 * 1024 | The size in bytes of the buffers used to read data for User table locations.
DataContent.disableUserPool | false | If true, user table locations are processed using the same resources as system tables.
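
For example, a cluster expecting heavier concurrent user-table writes could raise the pool limits (values are illustrative):

# Allow 256 concurrent user table locations, each with 512 KiB read buffers
DataContent.userPoolCapacity=256
DataContent.producerBufferSize.user=524288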

Tailer/DIS configuration options

The following properties configure the memory consumption of the Tailer and Data Import Server processes.

Property | Default | Description
DataContent.producersUseDirectBuffers | true | If true, the Tailer will use direct memory for its data buffers.
DataContent.consumersUseDirectBuffers | true | Existing property. If true, the Data Import Server will use direct memory for its data buffers.
BinaryStoreMaxEntrySize | 1024 * 1024 | Existing property. Sets the maximum size in bytes for a single data row in a binary log file.
DataContent.producerBufferSize | 2 * BinaryStoreMaxEntrySize + 2 * Integer.BYTES | The size in bytes of buffers the tailer will allocate.
DataContent.consumerBufferSize | 2 * producerBufferSize | The size in bytes of buffers the Data Import Server will allocate. This must be large enough for a producer buffer plus a full binary row.

Revert to previous behavior

To disable the new behavior in the tailer, set the following property:

DataContent.disableUserPool = true

Added block flag to more dh_monit actions

This flag causes the script to block on the start, stop, and restart actions until they complete. If any action other than start, stop, restart, up, or down is passed with the blocking flag, an error is generated. No other behaviors of the script have changed.

The following options have been added:

/usr/illumon/latest/bin/dh_monit [ start | stop | restart ] [ process name | all ] [ -b | --block ]

These work as before:

/usr/illumon/latest/bin/dh_monit [ up | down ] [ -b | --block ]

Logging System Tables from Core+

Core+ workers can now log Table objects to a System table.

Many options are available using the Builder class returned by:

import io.deephaven.enterprise.database.SystemTableLogger
opts = SystemTableLogger.newOptionsBuilder().currentDateColumnPartition(true).build()

The only required option is the column partition to write to. You may specify a fixed column partition or use the current date (the date at the time each row is written; the data is not introspected for a Timestamp). The default behavior is to write via the Log Aggregator Service, but you can also write binary logs directly. No code generation or listener versioning is performed; you must write columns in the format that the listener expects. The complete set of options is documented in the Javadoc.

After creating an Options structure, you can then log the current table:

SystemTableLogger.logTable(db, "Namespace", "Tablename", tableToLog, opts)

When logging incrementally, a Closeable is returned. You must retain this object to ensure liveness. Call close() to stop logging and release resources.

lh=SystemTableLogger.logTableIncremental(db, "Namespace", "Tablename", tableToLog, opts)

The Python version does not use an options object; instead, it takes named arguments. If you specify None for the column partition, the current date is used.

system_table_logger.log_table("Namespace", "Tablename", table_to_log, columnPartition=None)

Similarly, if you call log_table_incremental from Python, you must close the returned object (or use it as a context manager in a with statement).
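
For example, a minimal Python sketch (the import path and argument names are assumptions based on the examples above):

from deephaven_enterprise import system_table_logger

# Using the returned object as a context manager: logging continues while
# the block is active and stops when the block exits.
with system_table_logger.log_table_incremental("Namespace", "Tablename", table_to_log, columnPartition=None):
    pass  # do other work while rows are logged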

Row-by-row logging is not yet supported in Core+ workers. Existing binary loggers cannot be executed in the context of a Core+ worker because they reference classes that are shadowed (renamed). If row-level logging is required, you must use io.deephaven.shadow.enterprise.com.illumon.iris.binarystore.BinaryStoreWriterV2 directly.

Only primitive types, Strings and Instants are supported. Complex data types cannot yet be logged.

Core+ support for multiple partitioning columns

Deephaven Core+ workers now support reading tables stored in the Apache Hive layout. Hive is a multi-level partitioned format where each directory is a Key=Value pair.

For example:

| Market                          -- A Directory for the Namespace
| -- EquityTrade                  -- A directory for the Table
|  | -- Region=US                 -- A Partition directory for the Region `US`
|  |  | -- Class=Equities         -- A Partition directory for the Class `Equities`
|  |  |  | -- Symbol=UVXY         -- A Partition directory for the Symbol `UVXY`
|  |  |  |  | -- table.parquet    -- A Parquet file containing data
|  |  |  | -- Symbol=VXX          -- A Partition directory for the Symbol `VXX`
|  |  |  |  | -- table.size       -- A set of files for a Deephaven format table
|  |  |  |  | -- TradeSize.dat
|  |  |  |  | -- ...
|  | -- Region=Asia
|  |  | -- Class=Special
|  |  |  | -- Symbol=ABCD
|  |  |  |  | -- table.parquet
|  |  |  | -- Symbol=EFGH
|  |  |  |  | -- table.parquet
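
With a matching schema in place, reading a Hive-layout table is a normal historical read. A minimal sketch from a Core+ Python worker, using the names from the tree above:

# Read the multi-level partitioned table, then filter on partition columns
t = db.historical_table("Market", "EquityTrade")
us_equities = t.where(["Region = `US`", "Class = `Equities`"])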

Core+ support for writing tables in Deephaven format

Deephaven Core+ workers now support writing tables in Deephaven format using the io.deephaven.enterprise.table.EnterpriseTableTools class in Groovy workers and the deephaven_enterprise.table_tools Python module.

For example, to read a table from disk:

Groovy:

import io.deephaven.enterprise.table.EnterpriseTableTools
t = EnterpriseTableTools.readTable("/path/to/the/table")

Python:

from deephaven_enterprise import table_tools
t = table_tools.read_table("/path/to/the/table")

And to write a table:

Groovy:

import io.deephaven.enterprise.table.EnterpriseTableTools
EnterpriseTableTools.writeTable(qq, new File("/path/to/the/table"))

Python:

from deephaven_enterprise import table_tools
table_tools.write_table(table=myTable, path="/path/to/the/table")

See the Core+ documentation for more details on how to use this feature.

Core+ C++ client and derived clients support additional CURL options

When configuring a Session Manager with a URL for downloading a connection.json file, the C++ client and derived clients (like Python ticking or R) use libcurl to download the file from the supplied URL. SSL connections in this context can fail for multiple reasons, so it is customary to support options that adjust SSL behavior and/or enable verbose output to aid debugging. We now support the following environment variables from the clients (see the sketch after this list):

  • CURL_CA_BUNDLE: like the variable of the same name for the curl(1) command-line utility. Points to a file containing a CA certificate chain to use instead of the system default.
  • CURL_INSECURE: if set to any non-empty value, disables validation of the server certificate.
  • CURL_VERBOSE: if set to any non-empty value, enables debug output.
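
For example, from Python, the variables can be set in-process before the client downloads connection.json (a minimal sketch; the path and values are illustrative):

import os

# libcurl reads these when connection.json is downloaded, so they must be
# set before the Session Manager is constructed.
os.environ["CURL_CA_BUNDLE"] = "/etc/ssl/certs/my-ca-chain.pem"
os.environ["CURL_VERBOSE"] = "1"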

New Worker Labels

The Deephaven Enterprise system supports two kinds of workers.

The first uses the legacy Enterprise engine that predates the release of Deephaven Community Core. These workers are now labeled "Legacy" in the Code Studio and Persistent Query "Engine" field. Previously, these workers were labeled "Enterprise".

The second kind uses the Deephaven Community Core engine with Enterprise extensions. These workers are now labeled "Core+" in the Code Studio and Persistent Query "Engine" field. Previously, these workers were labeled "Community".

Although these changes may create short-term confusion for current users, Deephaven believes they better represent the function of these workers and will quickly become familiar. Both Legacy and Core+ workers exist within the Deephaven Enterprise system. The Core+ workers additionally include significant Enterprise functionality that is not found within the Deephaven Community Core product.

To avoid breaking user code, we have not yet changed any package or class names that include either "Community" or "DnD" (an older abbreviation which stood for "Deephaven Community in Deephaven Enterprise").

Logger overhead

The default Logger creates a fixed pool of buffers. Certain processes function well with smaller pools.

The following properties can be used to override the default configuration of the standard process Logger. Every log message uses an entry from the entry pool, and at least one buffer from the buffer pool. Additional buffers are taken from the buffer pool as needed. Both pools will expand as needed, so the values below dictate the minimum memory that will be consumed.

Property | Default | Description
IrisLogCreator.initialBufferSize | 1024 | The initial size of each data buffer. Buffers may be reallocated to larger sizes as required.
IrisLogCreator.bufferPoolCapacity | 1024 | The starting (and minimum) number of buffers in the buffer pool.
IrisLogCreator.entryPoolCapacity | 32768 | The initial (and minimum) size of the LogEntry pool.
IrisLogCreator.timeZone | America/New_York | The timezone used in binary log file names.

The default value for IrisLogCreator.entryPoolCapacity has been reduced to 16384 for Tailer processes.
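
For example, a lightweight process could shrink both pools (values are illustrative):

IrisLogCreator.bufferPoolCapacity=256
IrisLogCreator.entryPoolCapacity=8192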

generate-iris-keys and generate-iris-rsa no longer overwrite output

The generate-iris-keys and generate-iris-rsa scripts use OpenSSL to generate public and private keys. If you have an existing key file, the scripts now exit with a failure and you must remove the existing file before regenerating the key.

Kubernetes Helm Chart Changes

Some settings have changed or are now set explicitly in place of whatever default your Kubernetes platform provider supplied. For example, terminationGracePeriodSeconds is set to a default of 10 in the management-shell. To avoid possible errors, delete the management-shell pod prior to running the helm upgrade if you have an older version already running. The pod can be deleted with this command: kubectl -n <your-namespace> delete pod management-shell --grace-period 1.

Note that any files you may have copied or created locally on that pod will be removed. However, in the course of normal operations such files would not be present.

Kafka Offset Column Name

The default Community name for storing offsets is KafkaOffset. The Core+ Kafka ingester previously assumed this name, rather than using the name from the deephaven.offset.column.name consumer property.

If the default column names KafkaOffset, KafkaPartition, and KafkaTimestamp are not in your Enterprise schema, the ingester ignores those columns. If you change the column name for timestamp, offset, or partition, you must also ensure that your schema contains a column of the correct type under that name.
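
For example, if your schema stores offsets in a column named MyKafkaOffset (an illustrative name), set the consumer property to match:

deephaven.offset.column.name=MyKafkaOffset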

Bypassing user table lock files

When a worker tries to write or read a User table, it will first try to lock a file in /db/Users/Metadata to avoid potential concurrency issues. If filesystem permissions are set up incorrectly, or if the underlying filesystem does not support file locking, this can cause issues.

The following property can be set to disable the use of these lock files:

OnDiskDatabase.useTableLockFile=false

Worker-to-worker table resolution configuration

Worker-to-worker table resolution now uses the Deephaven cluster's trust store by default. In some environments, there may be an SSL-related exception when trying to resolve a table defined in one persistent query from another (see sharing tables for more). The property uri.resolver.trustall may be set to true globally in a Deephaven configuration file, or as a JVM argument in a Code Studio session (e.g. -Duri.resolver.trustall=true). This will let the query worker sourcing the table trust a certificate that would otherwise be untrusted.

Added Envoy properties to allow proper operation in IPv6 or very dynamic routing environments

The new properties envoy.DnsType and envoy.DnsFamily allow configuration of Envoy DNS behaviors for xds routes added by the Configuration server.

  • envoy.DnsType configures the value to be set in dynamically added xds routes for type. The default if this property is not set is LOGICAL_DNS. If there is a scenario where DNS should be checked on each connection to an endpoint, this can be changed to STRICT_DNS. Refer to Envoy documentation for more details about possible settings.

  • envoy.DnsFamily configures the value to be set in dynamically added xds routes for dns_lookup_family. The default if this property is not set is AUTO. In environments where IPv6 is enabled, the AUTO setting may cause Envoy to resolve IPv6 addresses for Deephaven service endpoints; since these service endpoints listen only on IPv4 stacks, Envoy will return a 404 or 111 after getting "Connection refused" from the IPv6 stack. Refer to Envoy documentation for more details about possible settings.

Since Deephaven endpoint services listen only on IPv4 addresses, and Envoy, by default, prefers IPv6 addresses, it may be necessary to modify the configuration in environments where IPv6 is enabled. To do this:

  1. add envoy.DnsFamily=V4_ONLY to the iris-environment.prop properties file

  2. edit envoy3.yaml (or whichever configuration file Envoy is using) and add dns_lookup_family: V4_ONLY to the xds_service section:

    static_resources:
      clusters:
        - name: xds_service
          connect_timeout: 0.25s
          type: STRICT_DNS
          dns_lookup_family: V4_ONLY
    
  3. import the new configuration and restart the configuration server and the Envoy process for the changes to take effect.

Modified Bessel correction formula for weighted variance

The weighted variance computation formula has been changed to match that used in the Deephaven Community engine. We now use the standard formula for "reliability weights" instead of the previous "frequency weights" interpretation. This will affect statistics based on variance such as standard deviation.
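
For reference, with weights $w_i$, weighted mean $\bar{x}_w$, $V_1 = \sum_i w_i$, and $V_2 = \sum_i w_i^2$, the standard formulas for the two interpretations are (a sketch of the textbook definitions, not the engine's literal code):

$$s^2_{\text{frequency}} = \frac{\sum_i w_i (x_i - \bar{x}_w)^2}{V_1 - 1} \qquad\qquad s^2_{\text{reliability}} = \frac{\sum_i w_i (x_i - \bar{x}_w)^2}{V_1 - V_2/V_1}$$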

Managing Community Worker Python Packages

When starting a Deephaven Python worker, it executes in the context of a Python virtual environment (venv). This environment determines what packages are available to Python scripts. Packages that are important systemically or for multiple users should be added to the permanent virtual environment. With Community workers, the administrator may configure multiple worker kinds, each with a distinct virtual environment, making more than one environment available from a simple drop-down menu. For legacy Enterprise workers, users must manually set properties to select different virtual environments.

For experimentation, it can be convenient to install a Python package only in the context of the current worker. Community Python workers now have a deephaven_enterprise.venv module, which can be used to query the current path of the virtual environment and to install packages into it via pip with the install method. On Kubernetes, the container images now permit dbquery and dbmerge to write to the default virtual environment of /usr/illumon/dnd/venv/latest; this has no persistent effect on the system.

On a bare-Linux installation, /usr/illumon/dnd/venv/latest must not be writable by users, to ensure isolation between query workers. To allow users to install packages into the virtual environment, the administrator may configure a worker kind to create ephemeral environments on worker startup by setting the property WorkerKind.<name>.ephemeralVenv=true. This increases worker startup time, as it requires executing pip freeze and then pip install to create a clone of the original virtual environment. With an ephemeral virtual environment, the user can use deephaven_enterprise.venv.install to add additional packages to their worker. There is currently no interface to choose ephemeral environments at runtime.
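
A minimal sketch of per-worker experimentation in a Community Python worker (the install signature shown is an assumption based on the description above):

from deephaven_enterprise import venv

# Install an extra package into this worker's virtual environment via pip.
# With an ephemeral venv, the change affects only the current worker.
venv.install(["tabulate"])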

Kubernetes Image Customization

When building container images for Kubernetes, Deephaven uses a default set of requirements that provide a working environment. However, many installations require additional packages. To facilitate adding new packages to the default virtual environment, a customer_requirements.txt file can be added to the deephaven_python and db_query_worker_dnd subdirectories of the docker build. After installing the default packages into the worker's virtual environment, pip is called to install the packages listed in customer_requirements.txt. If these files do not exist, the Deephaven build script creates empty placeholder customer_requirements.txt files.

Make /db/Users mount writeable in Kubernetes

This changes both the YAML for worker templates and the permissions on the underlying volume that is mounted as /db/Users in pods. If you are installing a new cluster, no action is necessary. However, if you have an existing cluster installed, run this command to change the permissions: kubectl exec management-shell -- /usr/bin/chmod -vR 775 /db/Users

Helm improvements

A number of items have been added to the Deephaven helm chart, which allow for the following features:

  • Configuration options to use an existing persistent volume claim in Deephaven, to allow for use of historical data stored elsewhere.
  • Configuration options to mount existing secrets into worker pods.
  • Configurable storageClass options to allow for easier deployment in various Kubernetes providers.

Required action when upgrading from an earlier release

  1. Define global.storageClass: If you have installed an earlier version of Deephaven on Kubernetes then your my-values.yaml file used for the upgrade (not the Deephaven chart's values.yaml) should be updated to include a global.storageClass value, e.g.:

    global:
       storageClass: "standard-rwo"    # Use a value suitable for your Kubernetes provider
    

The value should be one that is suitable for your Kubernetes provider; standard-rwo is a GKE-specific storage class used as an example. To see storageClass values suitable for your cluster, consult your provider's documentation. You can view your cluster's configured storage classes by running kubectl get storageclasses.

  2. Delete management-shell pod prior to running helm upgrade: Run kubectl delete pod management-shell to delete the pod. Note that if you happen to have any information stored on that pod it would be removed, though in the normal course of operations that would not be the case. This pod mounts the shared volumes used elsewhere in the cluster, and so changes to the storageClass values might result in an error similar to the following if it is not deleted when the upgrade is performed:

    $ helm upgrade my-deephaven-release-name ./deephaven/ -f ./my-values.yaml --set image.tag=1.20230511.248 --debug

    Error: UPGRADE FAILED: cannot patch "aclwriter-binlogs" with kind PersistentVolumeClaim: PersistentVolumeClaim "aclwriter-binlogs"
    is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
      core.PersistentVolumeClaimSpec{
        ... // 2 identical fields
        Resources:        {Requests: {s"storage": {i: {...}, s: "2Gi", Format: "BinarySI"}}},
        VolumeName:       "pvc-80a518f6-1a24-4c27-93b5-c7e9bd25d824",
    -   StorageClassName: &"standard-rwo",
    +   StorageClassName: &"default",
        VolumeMode:       &"Filesystem",
        DataSource:       nil,
        DataSourceRef:    nil,
      }
    

Ingesting Kafka Data from DnD

The Deephaven Community Kafka ingestion framework provides several advantages over the existing Enterprise framework. Notably:

  • The Community Kafka ingester can read Kafka streams into memory and store them to disk.
  • Key and Value specifications are disjoint, which is an improvement over the io.deephaven.kafka.ingest.ConsumerRecordToTableWriterAdapter pattern found in Enterprise.
  • The Community KafkaIngester uses chunks for improved efficiency compared to row-oriented Enterprise adapters.

You can now use the Community Kafka ingester together with an in-worker ingestion server in a DnD worker. As with the existing Enterprise Kafka ingestion, you must create a schema and a data import server within your data routing configuration. After creating the schema and DIS configuration, create an ingestion script using a Community worker.

You must create a KafkaConsumer Properties object. Persistent ingestion requires that auto-commit be disabled to ensure exactly-once delivery. The next step is creating an Options builder object for the ingestion and passing it to the KafkaTableWriter.consumeToDis function. You can retrieve the table in the same query, or from any other query, according to your data routing configuration.

import io.deephaven.kafka.KafkaTools
import io.deephaven.enterprise.kafkawriter.KafkaTableWriter

final Properties props = new Properties()
props.put('bootstrap.servers', 'http://kafka-broker:9092')
props.put('schema.registry.url', 'http://kafka-broker:8081')
props.put("fetch.min.bytes", "65000")
props.put("fetch.max.wait.ms", "200")
props.put("deephaven.key.column.name", "Key")
props.put("deephaven.key.column.type", "long")
props.put("enable.auto.commit", "false")
props.put("group.id", "dis1")

final KafkaTableWriter.Options opts = new io.deephaven.enterprise.kafkawriter.KafkaTableWriter.Options()
opts.disName("KafkaCommunity")
opts.tableName("Table").namespace("Namespace").partitionValue(today())
opts.topic("demo-topic")
opts.kafkaProperties(props)
opts.keySpec(io.deephaven.kafka.KafkaTools.FROM_PROPERTIES)
opts.valueSpec(io.deephaven.kafka.KafkaTools.Consume.avroSpec("demo-value"))

KafkaTableWriter.consumeToDis(opts)

ingestedTable=db.liveTable("Namespace", "Table").where("Date=today()")

Customers can now provide their own JARs to Community in Enterprise (i.e. DnD) workers

Customers can now provide their own JARs into three locations that DnD workers can load from:

  1. Arbitrary locations specified by the "Extra Classpaths" field from e.g. a console or Persistent Query configuration
  2. A user-created location specific to a DnD Worker Kind configuration, specified by the WorkerKind.<Name>.customLib property (see the sketch after this list)
  3. A default directory found in every DnD installation, e.g. /usr/illumon/dnd/latest/custom_lib/
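
For example, a Worker Kind named DeephavenCommunity (the name and directory are illustrative) could be pointed at a dedicated JAR directory with:

WorkerKind.DeephavenCommunity.customLib=/etc/sysconfig/illumon.d/dnd_custom_lib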

Data routing file checks for duplicate keys

The data routing file is a YAML file. The YAML syntax includes name:value maps, and like most maps, they cannot contain duplicate keys. Data routing file validation now raises an error when duplicate map keys are detected. Previously, a duplicate key silently replaced the earlier value in the map.
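
For example, a fragment like the following (key names are illustrative) now fails validation instead of silently keeping only the second dis1 entry:

dataImportServers:
  dis1:
    host: host-a
  dis1:
    host: host-b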

Reading Hierarchical Parquet Data

Deephaven Community workers can now read more complex Parquet formats through the db.historical_table method (or db.historicalTable from Groovy). Three new types of Parquet layouts are supported:

  1. metadata: A hierarchical structure where a root table_metadata.parquet file contains the metadata and paths for each partition of the table.
  2. kv: A hierarchical directory with key=value pairs for partitioning columns.
  3. flat: A directory containing one or more Parquet files that are combined into a single table.

To read a Parquet table with historical_table, you must first create a schema that matches the underlying Parquet data. The Table element must have storageType="Extended" and a child ExtendedStorage element that specifies a type. The valid type values are parquet:metadata, parquet:kv, and parquet:flat, corresponding to the supported layouts.
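
A hypothetical schema fragment showing the required attributes (column definitions elided):

<Table name="Commodities" namespace="PQTest" storageType="Extended">
  <ExtendedStorage type="parquet:kv" />
  <!-- Column definitions matching the Parquet data go here -->
</Table>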

Legacy workers cannot read advanced Parquet layouts. If you call db.t with a table that defines Extended storage, an exception is raised.

com.illumon.iris.db.exceptions.ScriptEvaluationException: Error encountered at line 1: t=db.t("NAMESPACE", "TABLENAME")
...
caused by:
java.lang.UnsupportedOperationException: Tables with storage type Extended are only supported by Community workers.

Extended storage tables may have more than one partitioning column. The data import server can only ingest tables with a single partitioning column of type String. Attempts to tail binary files for tables that don't meet these criteria will raise an exception.

java.lang.RuntimeException: Could not create table listener
...
Caused by: com.illumon.iris.db.schema.SchemaValidationException: Tailing of schemas with multiple partitioning columns is not supported.

java.lang.RuntimeException: Could not create table listener
...
Caused by: com.illumon.iris.db.schema.SchemaValidationException: Tailing of schemas with a non-String partitioning column is not supported.

Discovering a Schema from an Existing Parquet Layout

You can read the Parquet directory using the standard Community readTable function, then create an Enterprise schema and table definition as follows:

import static io.deephaven.parquet.table.ParquetTools.readTable
import io.deephaven.enterprise.compatibility.TableDefinitionCompatibility
import static io.deephaven.shadow.enterprise.com.illumon.iris.db.tables.TableDefinition.STORAGETYPE_EXTENDED

result = readTable("/db/Systems/PQTest/Extended/commodities")
edef = TableDefinitionCompatibility.convertToEnterprise(result.getDefinition())
edef.setName("commodities")
edef.setNamespace("PQTest")
edef.setStorageType(STORAGETYPE_EXTENDED)
ss = io.deephaven.shadow.enterprise.com.illumon.iris.db.schema.SchemaServiceFactory.getDefault()
ss.authenticate()
schema = io.deephaven.shadow.enterprise.com.illumon.iris.db.schema.xml.SchemaXmlFactory.getXmlSchema(edef, io.deephaven.shadow.enterprise.com.illumon.iris.db.schema.NamespaceSet.SYSTEM)
// If this is a new namespace
ss.createNamespace(io.deephaven.shadow.enterprise.com.illumon.iris.db.schema.NamespaceSet.SYSTEM, "PQTest")

// insert the ExtendedStorage type
schema.setExtendedStorageType("parquet:kv")
ss.addSchema(schema)

Read the table with:

db.historicalTable("PQTest", "commodities")

Java Exception Logging

Deephaven logs now use the Java standard format for Exception stack traces, which includes suppressed exceptions and collapses repetitive stack trace elements, among other improvements.

ACLs for DbInternal CommunityIndex tables

Preexisting installs must manually add new ACLs for the new DbInternal tables.

First, create a text file (e.g. /tmp/new-acls.txt) with the following contents:

-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("EffectiveUser"), new UsernameFilterGenerator("AuthenticatedUser"))' -group allusers -namespace DbInternal -table ServerStateLogCommunityIndex -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))' -group allusers -namespace DbInternal -table UpdatePerformanceLogCommunityIndex -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))' -group allusers -namespace DbInternal -table QueryOperationPerformanceLogCommunityIndex -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))' -group allusers -namespace DbInternal -table QueryPerformanceLogCommunityIndex -overwrite_existing
exit

Then, run the following to add the new ACLs into the system:

sudo -u irisadmin /usr/illumon/latest/bin/iris iris_db_user_mod --file /tmp/new-acls.txt

Alternatively, the ACLs can be added manually one by one in the Swing ACL Editor:

allusers | DbInternal | ServerStateLogCommunityIndex | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("EffectiveUser"), new UsernameFilterGenerator("AuthenticatedUser"))
allusers | DbInternal | UpdatePerformanceLogCommunityIndex | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))
allusers | DbInternal | QueryOperationPerformanceLogCommunityIndex | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))
allusers | DbInternal | QueryPerformanceLogCommunityIndex | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))

Seamless integration of Community panels in Deephaven Enterprise

Deephaven Enterprise now supports opening plots and tables from Community queries via the Panels menu. Community panels can be linked and filtered in the same way as Enterprise panels.

Allow removal of "Help / Contact Support ..." via property

A new property, IrisConsole.contactSupportEnabled, has been added, which may be used to remove the "Help / Contact Support ..." button from the Swing front-end.

By default, this property is set to true to preserve current behavior. Setting it to false removes the menu option.
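
For example, to hide the menu item:

IrisConsole.contactSupportEnabled=false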

db available via import in Community Python workers

In Community Python workers, the Database object db can now be imported into user scripts and modules directly using import statements, for example:

from deephaven_enterprise.database import db

my_table = db.live_table(namespace="MyNamespace", table_name="MyTable").where("Date=today()")

The db object is still available as a global variable for Consoles and Persistent Query scripts.

OperationUser columns added to DnD DbInternal tables

The internal performance tables for Community workers now have columns for OperationAuthenticatedUser and OperationEffectiveUser. This updates the schema for QueryPerformanceLogCommunity, QueryOperationPerformanceLogCommunity, and UpdatePerformanceLogCommunity. The operation user reflects the user that initiated an operation over the network, which is especially important for analyzing the performance of shared persistent queries. For example, filtering, sorting, or rolling up a table can require significant server resources.

No manual changes are needed. The Deephaven installer will deploy the new DbInternal schemas and the new data is ingested into separate internal partitions.

ProcessMetrics logging is now disabled by default

ProcessMetrics logging is now disabled by default in both Enterprise (DHE) and Community in Enterprise (DnD). To enable ProcessMetrics logging, set IrisLogDefaults.writeDatabaseProcessMetrics to true. If desired, you can control DnD ProcessMetrics logging separately from DHE via statsLoggingEnabled.
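
For example, to re-enable ProcessMetrics logging, set the following in your properties file:

IrisLogDefaults.writeDatabaseProcessMetrics=true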

Kafka Version Upgrade

We have upgraded our Kafka code from version 2.4 to version 3.4.

Confluent Breaking Changes

Confluent code must be upgraded to version 7.4 to be compatible with Kafka version 3.4. See https://docs.confluent.io/platform/current/installation/versions-interoperability.html.

Clients using Avro or POJO for in-worker DISes must switch to the 7.4 versions of the required jars, as specified here: https://deephaven.io/enterprise/docs/importing-data/advanced/streaming/kafka/#generic-record-adapter

The following dependencies are now included in the Deephaven installation:

jackson-core-2.10.0.jar
jackson-databind-2.10.0.jar
jackson-annotations-2.10.0.jar

Users should remove these from their classpath (typically /etc/sysconfig/illumon.d/java_lib) to avoid conflicts with the included jars.

Flight can now resolve Live, Historical and Catalog tables from the database

DnD workers now support retrieving live, historical and catalog tables through Arrow Flight. DnD's Python client has been updated with DndSession.live_table(), DndSession.historical_table() and DndSession.catalog_table() to support this.

For example, to fetch the static FeedOS.EquityQuoteL1 table:

from deephaven_enterprise.client.session_manager import SessionManager

connection_info = "https://my-deephaven-host.com:8000/iris/connection.json"
session_mgr: SessionManager = SessionManager(connection_info)
session_mgr.password("iris", "iris")

session = session_mgr.connect_to_persistent_query("CommunityQuery")
Quotes = session.historical_table("FeedOS", "EquityQuoteL1").where("Date=`2023-06-15`")

Flight ticket structure

Database flight tickets start with the prefix d, followed by a path consisting of three parts: the first part selects the type, the second is the namespace, and the third is the table name. Available types are catalog for the catalog table, live for live tables, and hist for historical tables.

For example, d/live/Market/EquityQuote fetches the live Market.EquityQuote table. Note that the catalog type does not use a namespace or table name; d/catalog fetches the catalog table.

Reduce default max table display size

The maximum number of rows that the Swing front-end will display before showing the red "warning bar" is now configurable. The new default maximum is 67,108,864 (64 x 1024 x 1024); technical limitations cause rows beyond this limit to not update properly. When necessary, the Web UI can display much larger tables than Swing.

The previous default maximum may be restored with the following property:

DBTableModel.defaultMaxRows=100000000

Note that the property-defined maximum may be programmatically reduced based on technical limits.

Improved Metadata Indexer tool

The Metadata Indexer tool has been improved so that it can now validate and list table metadata indexes on disk.
The tool is invoked using the dhctl script with the metadata command.
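
For example, an illustrative invocation (subcommand arguments vary; consult the tool's usage output):

sudo /usr/illumon/latest/bin/dhctl metadata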

Deephaven now supports subplotting in the Web UI

Users now have the ability to view multiple charts subplotted in one figure using the Web UI. Create subplots using the newChart, colSpan, and rowSpan functions available on a Figure. Details are available in the plotting guide.

Example Groovy code of subplots

tt = timeTable("00:00:00.01").update("X=0.01*ii", "Y=ii*ii", "S=sin(X)", "C=cos(X)", "T=tan(X)").tail(1000)

// Figure with single plot
f1 = figure().plot("Y", tt, "X", "Y").show()

// Figure with two plots, one on top of the other
f2 = figure(2, 1)
    .newChart(0,0).plot("S", tt, "X", "S")
    .newChart(1,0).plot("C", tt, "X", "C")
    .show()

// Figure with 3 plots, one that takes up the full width and then two smaller ones
f3_c = figure(2, 2)
    .newChart(0,0).plot("T", tt, "X", "T").colSpan(2)
    .newChart(1,0).plot("S", tt, "X", "S")
    .newChart(1,1).plot("C", tt, "X", "C")
    .show()

// Figure with 3 plots, one that takes up the full height and then two smaller ones
f3_r = figure(2, 2)
    .newChart(0,0).plot("T", tt, "X", "T")
    .newChart(1,0).plot("S", tt, "X", "S")
    .newChart(0,1).plot("C", tt, "X", "C").rowSpan(2)
    .show()
    
// Figure with 4 plots arranged in a grid
f4 = figure(2, 2)
    .newChart(0,0).plot("Y", tt, "X", "Y")
    .newChart(1,0).plot("S", tt, "X", "S")
    .newChart(0,1).plot("C", tt, "X", "C")
    .newChart(1,1).plot("T", tt, "X", "T")
    .show()

// Re-ordered operations from f4; should render the same
f5 = figure(2, 2)
    .newChart(1,1).plot("T", tt, "X", "T")
    .newChart(0,1).plot("C", tt, "X", "C")
    .newChart(1,0).plot("S", tt, "X", "S")
    .newChart(0,0).plot("Y", tt, "X", "Y")
    .show()

Improved validation of data routing configuration can cause errors in existing configurations

This Deephaven release includes new data routing features, and additional validation checks to detect possible configuration errors. Because of the additional validation, it is possible that an existing data routing configuration that was previously valid is now illegal and will cause parsing errors when the configuration server reads it.

If this occurs, the data routing configuration must be corrected using the dhconfig tool in --etcd mode, which bypasses the configuration server (the configuration server fails to start when the routing configuration is invalid).

Export the configuration:

sudo -u irisadmin /usr/illumon/latest/bin/dhconfig routing export --file /tmp/routing.yml --etcd

Edit the exported file to correct errors, and import it:

sudo -u irisadmin /usr/illumon/latest/bin/dhconfig routing import --file /tmp/routing.yml --etcd

Additional details: when the data routing configuration is incorrect, the configuration_server process fails with an error like this:

Initiating shutdown due to: Uncaught exception in thread ConfigurationServer.main io.deephaven.UncheckedDeephavenException: java.util.concurrent.ExecutionException: com.illumon.iris.db.v2.routing.DataRoutingConfigurationException:

In the rare case where this happens in a previous version of Deephaven, or if the solution above does not work, the following direct commands can be used to correct the situation:

Export:

sudo DH_ETCD_DIR=/etc/sysconfig/illumon.d/etcd/client/datarouting-rw /usr/illumon/latest/bin/etcdctl.sh get /main/config/routing-file/file > /tmp/r.yml

Import:

sudo DH_ETCD_DIR=/etc/sysconfig/illumon.d/etcd/client/datarouting-rw /usr/illumon/latest/bin/etcdctl.sh put /main/config/routing-file/file </tmp/r.yml

Python Integral Widening

In the 1.20211129 release, the jpy module that Deephaven's Python integration depends on converted all Python integral results into a Java integer. This truncated results whose values exceeded Integer.MAX_VALUE. In 1.20221001, Deephaven uses an updated jpy integration that returns values in the narrowest possible type, so results that previously were an Integer could be returned as a Byte or a Short. Moreover, a formula may produce a different type for each row. This prevented casting the result into a primitive type, as a boxed object may not be cast to a different primitive.

In 1.20221001.196, Python calls in a formula now widen Byte and Short results to an Integer. If the returned value exceeds Integer.MAX_VALUE, the result is a Long. Existing formulas that would not have been truncated by conversion to an int in 1.20211129 behave as they would have in that release.

As casting from an arbitrary integral type to a primitive may be required, we have introduced a utility class com.illumon.iris.db.util.NumericCast that provides objectToByte, objectToShort, objectToInt, and objectToLong methods that convert any Byte, Short, Integer, Long, or BigInteger into the specified type. If an overflow would occur, an exception is thrown.
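
A minimal Groovy sketch of the utility (values illustrative):

import com.illumon.iris.db.util.NumericCast

// Converts boxed integral values to the requested primitive type
longValue = NumericCast.objectToLong(Short.valueOf((short) 42))   // 42L
intValue = NumericCast.objectToInt(BigInteger.valueOf(7))         // 7
// NumericCast.objectToInt(BigInteger.valueOf(10_000_000_000L)) throws, because the value overflows an int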

Numba formulas (those wrapped in the nb function) retain the narrowing behavior of prior versions of 1.20221001.

Changed to use DHC Fast CSV parser for readCsv

TableTools.readCsv calls now use the new DHC High-Performance CSV Parser, which uses a column-oriented approach to parse CSV files.

The change to the DHC parser includes the following visible enhancements:

  1. Any column populated only with integers surrounded by white space will be identified as an integer column. The previous parser identified such columns as double.

  2. Only 7-bit ASCII characters are supported as delimiters. This means characters such as € (the euro symbol) are not valid. In these cases, an error like the following is thrown: delimiter is set to '€' but is required to be 7-bit ASCII.

  3. Columns populated entirely with single characters will be identified as Character columns instead of String columns.

  4. Additional date time formats are automatically converted to DBDateTime columns. Previously, these formats were imported as String columns. All other date time behavior remains unchanged.


    | Format | Displayed Value in 1.20211129 | Data Type in 1.20211129 | Displayed Value in 1.20221001 | Data Type in 1.20221001 |
    | DateTimeISO_UTC_1 | 2017-08-30 11:59:59.000Z | java.lang.String | 2017-08-30T07:59:59.000000000 NY | com.illumon.iris.db.tables.utils.DBDateTime |
    | DateTimeISO_UTC_2 | 2017-08-30T11:59:59.000Z | java.lang.String | 2017-08-30T07:59:59.000000000 NY | com.illumon.iris.db.tables.utils.DBDateTime |
    | DateTimeISO_MillisOffset_2 | 2017-08-30T11:59:59.000-04 | java.lang.String | 2017-08-30T11:59:59.000000000 NY | com.illumon.iris.db.tables.utils.DBDateTime |
    | DateTimeISO_MicrosOffset_2 | 2017-08-30T11:59:59.000000-04 | java.lang.String | 2017-08-30T11:59:59.000000000 NY | com.illumon.iris.db.tables.utils.DBDateTime |

To use the legacy CSV parser, set the configuration property com.illumon.iris.db.tables.utils.CsvHelpers.useLegacyCsv to true.

Support Barrage subscriptions between DnD workers

DnD workers can now subscribe to tables in other DnD workers using Barrage.

This can be done using ResolveTools and a new URI scheme: pq://<Query Identifier>/scope/<Table name>[?snapshot=true]. The Query Identifier can be either the query name or the query serial. The Table Name is the name of the table in the server query's scope. The optional snapshot=true parameter indicates that a snapshot should be fetched instead of a live subscription.

// Groovy
import io.deephaven.uri.ResolveTools
TickingTable = ResolveTools.resolve("pq://CommunityQuery/scope/TickingTable?snapshot=true")

# Python
from deephaven_enterprise.uri import resolve
TickingTable = resolve("pq://CommunityQuery/scope/TickingTable?snapshot=true")

Improvements to command line scripts

Deephaven provides many maintenance and utility scripts in /usr/illumon/latest/bin. This release changes many of these tools to handle configuration files, the Java path and classpath, error handling, and logging more consistently.

Classpaths now include customer plugins and custom jars. This is important for features that can include custom data types, including table definitions and schemas.

For the tools included in this update, there is now a consistent way to handle invalid configuration and other unforeseen errors.

Override the configuration (properties) file

If the default properties file is invalid for some reason, override it by setting DHCONFIG_ROOTFILE. For example:

DHCONFIG_ROOTFILE=iris-defaults.prop /usr/illumon/latest/bin/dhconfig properties list

Add custom JVM arguments

Add java arguments to be passed into the java program invoked by these scripts by setting EXTRA_JAVA_ARGS. For example:

EXTRA_JAVA_ARGS="-DConfiguration.rootFile=foo.prop" /usr/illumon/latest/bin/dhconfig properties list

Scripts included in this update

The following scripts have been updated:

  • crcat
  • data_routing
  • defcat
  • delete_schema
  • dhconfig
  • dhctl
  • export_schema
  • iriscat
  • iristail
  • migrate_acls
  • migrate_controller_cache
  • validate_routing_yml

Controller Tool "Status" Option

The new --status subcommand for the persistent query controller tool generates a report to standard output with details of selected persistent queries.

With --verbose, more details are included. If a query has a failure recorded and only one query is selected, the stack trace is printed after the regular report. Use the --serial option to directly select a specific query.

With --jsonOutput, a JSON block detailing the selected query states is emitted instead of the formatted report. Use --jsonFile to specify an output location other than standard output.
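
For example, an illustrative invocation that prints a verbose report for a single query selected by serial (the serial value is hypothetical):

sudo -u irisadmin /usr/illumon/latest/bin/controller_tool --status --serial 1234567890 --verbose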

Possible breaking changes were introduced with this feature:

  • Previously (before Silverheels), the flag options --continueAfterError, --includeTemporary, and --includeNonDisplayable required but ignored a parameter. For example, --includeTemporary=false and --continueAfterError=never were both accepted as "true" conditions. In Silverheels, the argument is still required, but only true and 1 are accepted as true, false and 0 are accepted as false, and anything else is treated as a command line error.
  • Details of informational log entries generated by command_tool have changed. Important functionality was previously deferred until after the starting/finished log entries for the corresponding items had been emitted. Those actions are now bracketed by the log marker entries to better inform troubleshooting.
  • A warning message is emitted to the console when no queries are processed due to selection (filtering) criteria. An informational console message summarizing the filter actions has also been added.

Code Studio Engine Display Order

When selecting the engine (Enterprise or Community) in a Code Studio, existing Deephaven installations show the Enterprise engine first for backwards compatibility. New installations show the Community engine first. This is controlled by a display order property defined for each worker kind. Lower values are displayed first by the Code Studio drop down.

By default, the Enterprise engine has a display order of 100 and Community engine has a display order of 200. For a new installation, the iris-environment.prop file sets the priority of the Community engine to 50 as follows:

WorkerKind.DeephavenCommunity.displayOrder=50

You may adjust the ordering of other worker kinds by changing their display order properties as desired.

etcd ownership

In previous releases, if the Deephaven installer installed etcd, the etcd and etcdctl executables in /usr/bin were created with the ownership of the user who ran the installation. They should be owned by root. Check the current ownership with:

ls -l /usr/bin/etcd*

If the ownership isn't root:

sudo chown root:root /usr/bin/etcd*

ACLs for DbInternal Index and Community tables

Preexisting installs must manually add new ACLs for the new DbInternal tables.

First, create a text file (e.g. /tmp/new-acls.txt) with the following contents:

-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("EffectiveUser"), new UsernameFilterGenerator("AuthenticatedUser"))' -group allusers -namespace DbInternal -table ProcessEventLogIndex -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("EffectiveUser"), new UsernameFilterGenerator("AuthenticatedUser"))' -group allusers -namespace DbInternal -table ProcessTelemetryIndex -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))' -group allusers -namespace DbInternal -table UpdatePerformanceLogIndex -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))' -group allusers -namespace DbInternal -table QueryOperationPerformanceLogIndex -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))' -group allusers -namespace DbInternal -table QueryPerformanceLogIndex -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("EffectiveUser"), new UsernameFilterGenerator("AuthenticatedUser"))' -group allusers -namespace DbInternal -table ProcessInfoLogCommunity -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("EffectiveUser"), new UsernameFilterGenerator("AuthenticatedUser"))' -group allusers -namespace DbInternal -table ProcessMetricsLogCommunity -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("EffectiveUser"), new UsernameFilterGenerator("AuthenticatedUser"))' -group allusers -namespace DbInternal -table ServerStateLogCommunity -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))' -group allusers -namespace DbInternal -table UpdatePerformanceLogCommunity -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))' -group allusers -namespace DbInternal -table QueryOperationPerformanceLogCommunity -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))' -group allusers -namespace DbInternal -table QueryPerformanceLogCommunity -overwrite_existing
exit

Then, run the following to add the new ACLs into the system:

sudo -u irisadmin /usr/illumon/latest/bin/iris iris_db_user_mod --file /tmp/new-acls.txt

Alternatively, the ACLs can be added manually one by one in the Swing ACL Editor:

allusers | DbInternal | ProcessEventLogIndex | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("EffectiveUser"), new UsernameFilterGenerator("AuthenticatedUser"))
allusers | DbInternal | ProcessTelemetryIndex | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("EffectiveUser"), new UsernameFilterGenerator("AuthenticatedUser"))
allusers | DbInternal | UpdatePerformanceLogIndex | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))
allusers | DbInternal | QueryOperationPerformanceLogIndex | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))
allusers | DbInternal | QueryPerformanceLogIndex | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))
allusers | DbInternal | ProcessInfoLogCommunity | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("EffectiveUser"), new UsernameFilterGenerator("AuthenticatedUser"))
allusers | DbInternal | ProcessMetricsLogCommunity | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("EffectiveUser"), new UsernameFilterGenerator("AuthenticatedUser"))
allusers | DbInternal | ServerStateLogCommunity | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("EffectiveUser"), new UsernameFilterGenerator("AuthenticatedUser"))
allusers | DbInternal | UpdatePerformanceLogCommunity | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))
allusers | DbInternal | QueryOperationPerformanceLogCommunity | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))
allusers | DbInternal | QueryPerformanceLogCommunity | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))

DnD Now supports Edge ACLs

Query writers can now specify ACLs on derived tables. These ACLs will be applied when tables or plots are fetched by a client based upon the client's groups.

Edge ACLs are created using the EdgeAclProvider class in the io.deephaven.enterprise.acl package. Additionally, the io.deephaven.enterprise.acl.AclFilterGenerator interface contains some helpful factory methods for commonly used ACL types.

The following example assumes that a table "TickingTable" has already been created. Edge ACLs are created using a builder that contains a few simple methods for building up ACL sets. Once build() is called, you have an ACL object that can be used to transform one or more tables using the applyTo() method. Note that you must overwrite the scope variable with the result of the application, since Table objects are immutable.

// Groovy
import io.deephaven.enterprise.acl.EdgeAclProvider
import io.deephaven.enterprise.acl.AclFilterGenerator

def ACL = EdgeAclProvider.builder()
        .rowAcl("NYSE", AclFilterGenerator.where("Exchange in `NYSE`"))
        .columnAcl("LimitPrice", "*", AclFilterGenerator.fullAccess())
        .columnAcl("LimitPrice", ["Price", "TradeVal"], AclFilterGenerator.group("USym"))
        .build()

TickingTable = ACL.applyTo(TickingTable)

# Python
from deephaven_enterprise.edge_acl import EdgeAclProvider
import deephaven_enterprise.acl_generator as acl_generator

ACL = EdgeAclProvider.builder() \
    .row_acl("NYSE", acl_generator.where("Exchange in `NYSE`")) \
    .column_acl("LimitPrice", "*", acl_generator.full_access()) \
    .column_acl("LimitPrice", ["Price", "TradeVal"], acl_generator.group("USym")) \
    .build()
    
TickingTable = ACL.apply_to(TickingTable)

See the DnD documentation for details on the AclFilterGenerator and EdgeAclProvider interfaces.

Remote R Groovy Sessions

The idb.init method now has an optional remote parameter. When set to TRUE, Groovy script code is not executed locally, but rather in a remote Groovy session, as is done in the Swing console or Web Code Studio. This eliminates a class of serialization problems that could otherwise occur when a local Groovy session serializes classes to the remote server. To use the old local Groovy session, pass the remote parameter as follows:

idb.init(devroot=devroot, workspace, propfile, keyfile=keyfile, jvmArgs=jvmLocalArgs, remote=FALSE)

Additionally, you may now call idb.close() to terminate the remote worker and release the associated server resources.

LogEntry Interface Change

In the com.fishlib.io.log.LogEntry class, the end() and endl() methods have been changed: instead of returning the LogEntry instance on which they are operating, they now return nothing. After these methods have been called, the LogEntry instance should not be operated on; further operations on that LogEntry can introduce run-time issues.

Because of this change, any code that uses the Deephaven logging classes will need to be recompiled. If the logging calls rely on the returned LogEntry they will need to be updated.

The Client Update Service has been removed as a stand-alone service

The Client Update Service (CUS) that serves Deephaven jars, properties, etc. to clients has been integrated into the same web server as the Web API Service. This eliminates the need to use two separate web ports on the infrastructure server and simplifies updating TLS certificates.

How to configure

The host, port, and protocol of the old CUS could be configured using the files getdown.host, getdown.port, and getdown.protocol. This is no longer supported. The new CUS uses the same protocol as the Web API Server. If Envoy configuration is not enabled, the host and port can be configured with the client_update_service.host and client_update_service.port properties.
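
For example, with hypothetical values:

client_update_service.host=deephaven-infra.example.com
client_update_service.port=8123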

What has changed

The old CUS was a stand-alone process with its own web server and was accessed from a separate port from the Web API Server. The old CUS served files from the /var/www/lighttpd/iris/iris/ directory, which it built on startup. This meant that it needed to be restarted to make new files or configuration available to a user using the legacy Swing Launcher.

The new CUS uses the same web server as the Web API Service and is accessed on the same host and port (by default, 8123). The new CUS will still be available on the old default port, 8443. Instead of restarting the Web API Service to make new files available to Swing clients, you can navigate to the URL https://<WEB_HOST:WEB_PORT>/reload. This will build a new directory inside a temporary location from which the CUS will serve files.

The webpage for downloading the legacy Swing launcher has moved to https://<WEB_HOST:WEB_PORT>/launcher.

If you are upgrading without using the installer

Delete the standalone client update service process configuration from monit: sudo -u irisadmin rm /etc/monit.d/deephaven/service/cus.conf

You will need to reload monit afterwards to see changes (note that this will not restart any processes): /usr/illumon/latest/bin/dh_monit reload

Inside iris-environment.prop, set the property client_update_service.host to the fully-qualified domain name of the host on which the Web API Service will run. If this is not set, the Web API Service will fail with:

com.illumon.iris.web.server.DHFileDigester$DigestException: Unknown host for client update service, client_update_service.host must be set

Replace PersistentQueryController communication with gRPC

The PersistentQueryController now uses gRPC as a communication transport for clients. All data that was previously serialized and sent using Java serialization is now packaged and sent as protobufs. This will enable cross-language clients to access the Deephaven backend using standard tools.

In order to facilitate this, a few polymorphic classes contained in messages exchanged by the Controller and Client have been explicitly converted to JSON.

ScriptPathLoaderState

The ScriptPathLoaderState contains information about the git repository, branch and commit to use when fetching script details. This should be transparent unless you have implemented your own ScriptPathLoader, or are directly using it in a custom PersistentQueryConfiguration type. A method has been added to ScriptPathLoaderState to encode into JSON:

/**
 * Encode this state object as a JSON string.
 * @return a JSON encoded version of this object.
 */
String encodeJSON();

A new method has been added to the ScriptPathLoader to decode a JSON object back into a ScriptPathLoaderState instance:

/**
 * Create a state object from the serialized JSON String provided by {@link ScriptPathLoaderState#encodeJSON()}
 * 
 * @param stateJSON the JSON state
 * @return the deserialized object from the JSON state
 */
ScriptPathLoaderState makeState(@NotNull final String stateJSON)

Special Persistent Query types

If you have implemented your own PersistentQueryConfigurationTypes you will need to implement JSON encoding for each of the following classes.

TypeSpecificWireConfig

The type specific configuration is still sent to the SetupQuery that runs within the worker as an object using Java serialization. However, the UI Components for configuring the query in Swing must be updated to expect this value as a JSON string. A new method has been added to the interface for you to serialize the object to JSON.

public interface TypeSpecificWireConfig {
    /**
     * Encode this object as JSON for serialization.
     * @return a JSON string representing this object
     */
    String encodeJSON();
}

TypeSpecificState

The TypeSpecific state for custom query types is set during the SetupQuery phase of initialization. Previously, you would pass the TypeSpecificState object directly into the PersistentQueryState object. You must now encode it into a JSON string. This is reflected in the signature change of the PersistentQuery.getSessionState() method. The last parameter is now a String instead of a TypeSpecificState.

public static PersistentQueryState getSessionState(final PersistentQueryConfiguration config,
                                                   final ScriptSession session,
                                                   @Nullable final String typeSpecificStateJson)

TypeSpecificFields

Type specific fields encode information about the query configuration. For example, Merge queries have a TableName and Namespace field. In the backend, type specific fields are stored as a Map<String, Object>. When communicating with the front end, they must now be encoded as JSON using the following methods:

PersistentQueryConfigurationMarshaller.encodeTypeSpecificFieldsJson(Map<String, Object> values)
PersistentQueryConfigurationMarshaller.decodeTypeSpecificFieldsJson(String typeSpecificFieldsJson)

The types allowed in the map are now restricted to primitive types (e.g., byte, int, long), String, and arrays of those types. More complex types must be pre-encoded into a string.
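
As a minimal Groovy sketch (the map contents follow the Merge query example above and are illustrative; the marshaller's package import is omitted):

Map<String, Object> fields = new HashMap<>()
fields.put("Namespace", "Market")
fields.put("TableName", "EquityQuote")

// Encode for transport to the front end, then decode back into a map
String json = PersistentQueryConfigurationMarshaller.encodeTypeSpecificFieldsJson(fields)
Map<String, Object> decoded = PersistentQueryConfigurationMarshaller.decodeTypeSpecificFieldsJson(json)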

Envoy Configuration for ACL Write Server

The ACL write server now uses a REST API instead of a Java serialization protocol. In prior versions, when using Envoy, setting the property envoy.hasCommRoutes=false would disable the ACL write server route. To disable the ACL write server route, you must now explicitly set envoy.hasAclSvc=false.

Simplified data routing

Deephaven supports multiple data import servers (DIS), including in-worker implementations that support importing specific data sources. The existing data routing configuration file can become very complicated because the tables handled by one of these DISes must be excluded from the others.

This change creates a framework wherein specific namespaces and tables can be assigned globally to specific DIS instances.

Claiming a table

In the YML data routing configuration file, claim a table or namespace for a DIS by adding claims to the configuration section for that DIS.

  dataImportServers:
    dis_claims_kafka:
      ...
      claims:
        - {namespace: Kafka, tableName: table1}
        - {namespace: Kafka2, tableName: "*"}
        - {namespace: Kafka3}

The tableName is optional in the claim. If omitted, or if the value is *, then all tables in the namespace are claimed. If given, only that single table is claimed.

Each DIS may claim multiple tables or namespaces. Any number of DISes may claim distinct tables or namespaces. A DIS that uses filters may not make claims and vice versa.

A DIS with claims handles only data for those claims. No other DIS (except in the same failover group) handles data for those claimed tables.

Multiple claims on the same table or namespace are not allowed (but see failover groups below).

Overriding claims

A claim may be made for a single table, or for an entire namespace. A single table claim overrides a namespace claim. All claims in the file are evaluated together, so file order is not meaningful.

In this example, dis_claims_kafka_t1 will be the source for Kafka.table1, and dis_claims_kafka will be the source for all other tables in the Kafka namespace.

  dataImportServers:
    dis_claims_kafka:
      ...
      claims:
        - {namespace: Kafka}
    dis_claims_kafka_t1:
      ...
      claims:
        - {namespace: Kafka, tableName: table1}

Failover groups

A failover group is a set of DISes that handle equivalent data. All members of a failover group must have identical filters and claims. If one DIS in this list is unavailable, the other will be automatically used.

This release adds a failoverGroup keyword to make failover groups more explicit.

DISes in the same failover group are permitted to make identical claims.

Create a failover group by adding the same failoverGroup tag to all member dataImportServers sections, and then use the group name in the tableDataServices section:

  dataImportServers:
    dis_kafka1:
      ...
      failoverGroup: kafkaGroup
      claims:
        - {namespace: Kafka}
    dis_kafka2:
      ...
      failoverGroup: kafkaGroup
      claims:
        - {namespace: Kafka}

  tableDataServices:
    db_tdcp:
      sources:
        - name: kafkaGroup

Note: failover groups were supported in previous releases with this syntax in the tableDataServices section:

  tableDataServices:
    db_tdcp:
      sources:
        - name: [dis1, dis2]

Implications

The notion of tables being claimed by specific DISes is enforced at a global level. This means that the claimed tables do not need additional exclusions in the configuration file, for purposes of determining tailer targets (to which DISes should this data be sent) and table data services (TDS) (from which DIS or service should this data be requested).

Important: Data Import Servers only supply online data. Claims and related filtering only apply to online data.

This new concept has the following implications (note that a DIS is implicitly also a TDS):

Data Import Servers:

  • If a claim is made, the implicit DIS filter is "have I claimed this table?"
  • If no claims are made, the DIS filter is "is the table unclaimed?" plus any filter specified

When composing Table Data Services (sources tag):

  • A failover group ([dis1, dis2]) is filtered with "is table unclaimed" plus any filter given. Note that this ignores filters on the DISes themselves.
  • If a filter is provided, that filter will be used for the named source.
  • If a filter is not provided:
    • dataImportServers causes the section to be repeated for all applicable DISes.
    • A named failover group is filtered with "is claimed by this group".
    • A named DIS is filtered with that DIS's filter, which will be an is-claimed or is-not-claimed filter.
    • Anything else delegates to another TDS and will be filtered with "is table unclaimed".

All the Data Import Servers

This release introduces a shortcut for referring to the Data Import Servers as a group. This can be useful for complex routing and filtering. It also means it is possible now to add new DIS sections without updating the TDS section.

The syntax below will direct the db_tdcp to include all DISes, except for db_rta and db_also_special.

  tableDataServices:
    db_tdcp:
      sources:
        - name: dataImportServers
          except:
            - db_rta
            - db_also_special

Note on "legacy format"

New features sometimes require new syntax. Deephaven strives to continue supporting data routing configuration files in "legacy" format, but new features are often unavailable until updates are made. This is generally painless, because if you want to use a new feature, you must ask for it.

However, if you change a section of the YML file to use a new feature, you may have to make additional changes to bring the section into the current file format.

claims is a new feature, and it requires the current format. Specifically here, if you add claims, you will likely also need to add an endpoint section. See the "Data Import Servers" section of "Data routing service configuration via YAML" in the documentation for more details.

For example, change this:

    dis1:
      host: host-value
      tailerPort: *default-tailerPort
      tableDataPort: *default-tableDataPort

to this:

    dis1:
      endpoint:
        serviceRegistry: none
        host: host-value
        tailerPort: *default-tailerPort
        tableDataPort: *default-tableDataPort

New YML tags

This release creates the following new tags in the data routing configuration file:

Tag | Description
claims | List of namespaces and/or tables to be exclusively handled by the current DIS
failoverGroup | Indicates the current DIS is part of the named failover group
dataImportServers | In the tableDataServices section, shorthand for "each known DIS"
except | With dataImportServers, indicates a list of DISes to exclude from the set of all DISes

LiveDBPlot Support Removed

The LiveDBPlot and DBPlot classes have been removed from Deephaven 1.20230131. Users should migrate to new plotting methods introduced in 2017 which provide improved functionality and web support.

The default Groovy session previously inherited a static import for min that returned a double. The min function is no longer referenced from StaticGroovyImports, allowing other min overloads to be accepted; thus, min(1, 2) now returns an int rather than a double in the default Groovy session.

Parse-once builder-options added to JSON/Kafka

There are cases where a non-JSON message is encapsulated within a JSON message. A pair of new builder options have been added to the JSON/Kafka adapter to let the user avoid parsing the embedded message multiple times.

For example, the following parses the embedded FIX message each time a field is read from the message:

builder
    .addColumnToValueField("SomeTag", "someTag")
    .addColumnToValueFunction("MsgType", jsonRecord -> {
        // parse the FIX message, and get MsgType<35> from the header
        return FixMessage.parse(jsonRecord.getRecord().get("fixMsg")).getHeader().getField(35);
    })
    .addColumnToValueFunction("FixVersion", jsonRecord -> {
        // parse the FIX message again, and get FixVersion<8> from the header
        return FixMessage.parse(jsonRecord.getRecord().get("fixMsg")).getHeader().getField(8);
    })

The builder now provides the ability to parse the JSON message and store an arbitrary Object, which may be accessed multiple times without parsing multiple times:

builder
    .setParseOnce(jsonRecord -> {
        // parse the FIX message once, and store the value for later access
        return FixMessage.parse(jsonRecord.getRecord().get("fixMsg"));
    })
    .addColumnToValueField("SomeTag", "someTag")
    .addColumnToValueFunction("MsgType", jsonRecord -> {
        // access the pre-parsed object, and get MsgType<35> from the header
        return ((FixMessage) jsonRecord.getParsedObject()).getHeader().getField(35);
    })
    .addColumnToValueFunction("FixVersion", jsonRecord -> {
        // access the pre-parsed object, and get FixVersion<8> from the header
        return ((FixMessage) jsonRecord.getParsedObject()).getHeader().getField(8);
    })

C++ BinaryStoreWriter Produces Corrupt Application Version Records

If the BinaryStoreHeader setApplicationVersion method was called, a corrupt application version record was added to the header of the resulting binary log file. This meant that a C++ logger was unable to generate anything but the default "0" application version log format. This fix applies to all Deephaven versions, as the C++ binary logging library is independent of the Deephaven version.

Workspace Data Key Frames

The process that serves as persistent state storage for the Deephaven web console is backed by Deephaven tables in the DbInternal namespace. These tables grow a little every time a user updates their workspace, and the data is re-scanned each time the WebClientData query restarts. In the aggregate, this can become a significant resource drain, so we have implemented a "snapshot" facility specific to this table.

New snapshots should be recorded periodically. The tool is designed to be run daily and will only create a new snapshot if sufficient changes have been made, or sufficient time has passed since the last snapshot. The following console command will attempt to do so and should be run in a merge server session or persistent query. The updateSnapshot() function returns true if a new snapshot was made, or false if there were not enough changes to justify a new snapshot.

wws = new com.illumon.iris.utils.WriteableWorkspaceSnapshot(log, db)
wws.updateSnapshot()

The internal logic steps are logged to the ProcessEventLog with LogEntry strings beginning with loadLatestSnapshot and updateSnapshot.

The following configuration parameters control the frequency of snapshot recording. A new snapshot is recorded only if at least one of the criteria are met.

WorkspaceDataSnapshot.daysSinceLastSnapshotThreshold  # default 7
WorkspaceDataSnapshot.changesToSnapshotSizeRatioThreshold # default 0.2 (20% new)

Added new DbInternal.ProcessTelemetry system table and TelemetryHelperQuery

The new ProcessTelemetry table enables monitoring UI performance within the Swing front-end. For each user-initiated action, the console logs the duration between the start of the operation and when the table is ready for use. By aggregating this telemetry, an overall picture of the system's health - as perceived by users - is available. Detailed information can then be used to investigate potential performance problems.

To write these events, the Swing console internally buffers the data and then sends it to the new TelemetryHelperQuery query. If the TelemetryHelperQuery is not running, data is buffered up to a configurable limit, at which point the oldest telemetry data is discarded.

Several new startup parameters define the Telemetry behavior for the console:

[service.name=iris_console] {
    # Identifies if telemetry metrics should be logged in the DbInternal.ProcessTelemetry table. To enable logging to
    # the remote table, set the following property to `true`. Note that telemetry will still be logged to the local
    # client-logfile unless disabled with the `enableFor` options described below
    Telemetry.remoteTelemetry=false
    
    
    # Defines the frequency for messages to be sent to the server. Events will be batched and sent periodically. The
    # default frequency is 15s
    Telemetry.sendFrequencyMs=15_000
    
    # Defines the initial size of the buffer which stores messages to be sent in a batch, defaulting to 1,024
    Telemetry.initialSendBuffer=1_024
    
    # Defines the maximum number of messages to store and send in a single batch. New messages appended to the buffer
    # after it is full will cause "older" events to be removed. The default maximum value is 10,000
    Telemetry.maxSendBuffer=10_000
    
    
    # A number of classes will attempt to log telemetry locally and to the remote table. Individual classes may be
    # prevented from being logged by setting the following to `false`
    Telemetry.enableFor.DBTableModel=true
    Telemetry.enableFor.IrisTreeTableModel=true
    Telemetry.enableFor.DbOneClickPanel=true
}

A new method, telemetrySummary(), which accepts an optional "Date" parameter, has been added to the default Groovy session. The method provides summary tables derived from the raw Telemetry data.
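
For example, in a Groovy console (the argument format is assumed to be a date string):

summaryToday = telemetrySummary()
summaryForDay = telemetrySummary("2023-06-15")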

For new installations, an ACL is automatically applied to the DbInternal.ProcessTelemetry table. For an upgrade, an ACL editor must create a new "Table ACL" for the raw DbInternal.ProcessTelemetry data to be seen by unprivileged users. The ACL should be similar to the "allusers/ProcessEventLog" ACL, but for the ProcessTelemetry table:

allusers | DbInternal | ProcessTelemetry | new UsernameFilterGenerator("EffectiveUser")

Parameterized Query Lock changes

Parameterized Queries now use the shared LTM lock instead of the exclusive LTM lock. Query writers may also now instruct the query not to use any LTM lock at all via the requireComputeLock option on the ParameterizedQueryBuilder.

When using no locks, the query writer must ensure that the Parameterized Query Action does not use any methods that require the LTM lock. If this parameter is set to false incorrectly, then results are undefined.

Add support for worker-scope plugin classpaths

Server processes now search in /etc/sysconfig/illumon.d/plugins/*/worker for server-only plugin jars and classpath entries in addition to searching for path items from /etc/sysconfig/illumon.d/plugins/*/global.

While global dependencies are included on both server and client classpaths, worker dependencies are only added to server processes (any process using monit or the iris launch script, as well as any JVM started from a server Python session). In particular, the client update service does not make JARs in the worker directory available to the Swing console.
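
For example, given a hypothetical plugin named my-plugin (jar names illustrative):

/etc/sysconfig/illumon.d/plugins/my-plugin/global/shared-types.jar   # server and client classpaths
/etc/sysconfig/illumon.d/plugins/my-plugin/worker/server-only.jar    # server processes only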

Exporting Dashboards from the Web UI

Dashboards and their associated queries can now be exported from the Web UI. This allows you to easily archive, backup, or transfer dashboards between systems.

To export dashboards:

  1. Go to the New tab screen.
  2. Select the dashboards you want to export.
  3. Click the Export button above the dashboard list.
  4. Select whether to Export related queries along with the dashboards.
  5. Click Export.

The browser then downloads a zip or archive file containing the dashboards (and queries, if specified).

Importing Dashboards from the Web UI

Dashboards and their associated queries can also be imported with the Web UI.

  1. Go to the New tab screen.
  2. Click the Import button above the dashboard list.
  3. Select the zip or archive file containing the dashboards and queries you wish to import.
  4. Make any modifications to the import data, if desired.
  5. Click Import.

DnD Workers as PQs

DnD workers can now be started as first-class PQs in Live Query and Run and Done modes. The Community IDE can be launched from the Query Monitor tab of the Web UI.

Changes were made to the following classes that may affect customer code

PersistentQuery

The PersistentQuery class used to be where the Controller's worker control code was located. That code has been extracted and moved into smaller, more self-contained classes. The PersistentQuery class remains as a non-constructable class that contains only static methods that can be used in PQ scripts or ContextAwareRemoteQueries.

Custom Persistent Query Types

There is a new tag CommunityInitializer that defines the class that will be invoked once a ConsoleSession to a community worker has been established. If this tag is not present on a type, the Community engine will not be selectable when configuring those PQ types.

Due to the splitting up of the PersistentQuery class, the expected type for the ShutdownProcedure class has changed to Consumer<PersistentQueryHandle>. Any customers using custom PQ configurations must update their code for this new signature.

In the PersistentQueryState class, details related to worker connections such as worker name, host, ports, and processInfoID have moved into the QueryProcessorConnectionDetails class, accessible through PersistentQueryState.getConnectionDetails().

Deephaven Enterprise supports Downstream Barrage

Deephaven Enterprise now supports subscribing to Barrage tables from Deephaven Community workers: anonymously for regular stock Community workers, and authenticated via a three-way handshake with an authentication server token for DnD workers.

How to use

There are two different ways to subscribe to tables. The first, and simplest, is the URI method. To use this method, add the following code to your query:

import io.deephaven.uri.UriConfig
import io.deephaven.uri.ResolveTools
UriConfig.initDefault()

This initializes the integration and prepares the system for connecting to Deephaven Community instances. Next, use ResolveTools.resolve(String uri) to subscribe to a table. The supported URIs can be found in the Community Reference.

For example, to subscribe to the table 'MarketData' in the Scope of a Community worker you might use:

MarketData = ResolveTools.resolve("dh://my.server.com:8888/scope/MarketData")

The first part of the URI selects SSL (dh:) or plaintext (dh+plain:); the second is the server host and port. The next part selects either the query scope (/scope/) or application scope (/app/my_app/). Finally, the last part of the URI is the name of the table to subscribe to.

Finer Grained Control

The next method is more complicated, but provides finer-grained control over the resulting subscription. You must create the individual components of the subscription, as well as the gRPC channel and session for the Barrage exchange.

When using this method, take care to re-use the BarrageSession when making further subscriptions to tables within the same worker.

dndWorkerHost = 'myserver.com'
dndWorkerPort = 24003

import io.deephaven.client.impl.BarrageSession
import io.deephaven.client.impl.ClientConfig
import io.deephaven.client.impl.ChannelHelper
import io.deephaven.client.impl.SessionImplConfig
import io.deephaven.client.impl.SessionImpl
import io.deephaven.proto.DeephavenChannelImpl
import io.deephaven.qst.table.TicketTable
import io.deephaven.uri.DeephavenTarget
import io.deephaven.shadow.client.flight.io.grpc.ManagedChannel
import io.deephaven.shadow.client.flight.org.apache.arrow.memory.BufferAllocator
import io.deephaven.shadow.client.flight.org.apache.arrow.memory.RootAllocator
import io.deephaven.barrage.BarrageSubscriptionOptions
import io.deephaven.barrage.util.SessionUtil
import io.deephaven.enterprise.auth.AuthenticationClientManager
import io.deephaven.enterprise.auth.DhService

import java.util.concurrent.Executors
import java.util.concurrent.ScheduledExecutorService

bufferAllocator = new RootAllocator()
scheduler = Executors.newScheduledThreadPool(4)

deephavenTarget = DeephavenTarget.builder()
    .host(dndWorkerHost)
    .port(dndWorkerPort)
    .isSecure(true)
    .build()

clientConfig = ClientConfig.builder()
    .target(deephavenTarget)
    .build()

managedChannel = ChannelHelper.channel(clientConfig)

authToken = AuthenticationClientManager.getDefault().createToken(DhService.QUERY_PROCESSOR.serviceName())

authStr = SessionUtil.authenticateAndObtainAuthenticationInfo(managedChannel, authToken)

sessionConfig = SessionImplConfig.builder()
    .executor(scheduler)
    .channel(new DeephavenChannelImpl(managedChannel))
    .authenticationTypeAndValue(authStr)
    .build()

sessionImpl = SessionImpl.create(sessionConfig)

session = BarrageSession.of(sessionImpl, bufferAllocator, managedChannel)

// Use a prefix of "s/" before the variable name for the table in the remote worker
MarketData = session.subscribe(TicketTable.of("s/MarketData"), BarrageSubscriptionOptions.builder().build()).entireTable()

Column Rename Support from UI

The Schema Editor now supports the ability to rename a column between the application log file and the table schema. The UI changes include how the data type is handled for new columns in the Logger/Listener Column Details section. The default data type for new columns is not set; it instead inherits the Intraday Type.

Below are example schemas: a LoggerListener schema and a Listener-only schema for a table with three columns.

The table has three columns (Date, Destination, and SameName), while the logger has two columns (SameName and Source).

  • The Date column is not present in the log file; it is determined by the logging process.
  • The SameName column is in both the log file and the table schema, and does not need to be transformed.
  • The Source column in the logger is renamed to Destination in the table.

To rename Source column as Destination, the Listener class should include both Source and Destination columns and their attributes should be:

  • Source: A value of none for dbSetter attribute. This indicates that the column is not present in the table. Additionally, the attribute intradayType should be set to the appropriate dataType.
  • Destination: A value of Source for dbSetter to identify its input source. A value none for intradayType means it is not present in the log file, and cannot be used as part of a dbSetter.

Schema with only Listener class

If the table has an externally generated log file (e.g., one written by a C++ logger), you only need to define a Listener block to interpret the log file.

<Table namespace="ExampleNamespace" name="RenameColumn" storageType="NestedPartitionedOnDisk" >
  <Partitions keyFormula="${autobalance_single}"/>
  <Column name="Date" dataType="String" columnType="Partitioning" />
  <Column name="Destination" dataType="int" columnType="Normal" />
  <Column name="SameName" dataType="int" columnType="Normal" />

  <Listener logFormat="1" listenerPackage="com.illumon.iris.test.gen">
    <Column name="Destination" intradayType="none" dbSetter="Source" />
    <Column name="SameName" dataType="int" />
    <Column name="Source" intradayType="int" dbSetter="none" />
  </Listener>
</Table>

Schema with a LoggerListener

If you are generating a Java logger, then you should include a LoggerListener block in your schema.

<Table name="RenameColumn1001" namespace="ExampleNamespace" defaultMergeFormat="DeephavenV1" storageType="NestedPartitionedOnDisk">
  <Partitions keyFormula="${autobalance_single}" />
  
  <Column name="Date" dataType="String" columnType="Partitioning" />
  <Column name="Destination" dataType="Integer" />
  <Column name="SameName" dataType="Integer" />
  
  <LoggerListener logFormat="1" loggerClass="RenameColumn1001Logger" loggerPackage="com.illumon.iris.test.gen" 
                  rethrowLoggerExceptionsAsIOExceptions="false" tableLogger="false" generateLogCalls="true" 
                  verifyChecksum="true" listenerClass="RenameColumn1001Listener" listenerPackage="com.illumon.iris.test.gen">
    
    <SystemInput name="Source" type="int" />
    <SystemInput name="SameName" type="int" />
    
    <Column name="Destination" intradayType="none" dataType="int" dbSetter="Source" />
    <Column name="SameName" dataType="int" />
    <Column name="Source" dataType="int" dbSetter="none" />
  </LoggerListener>
</Table>

Renaming columns of Blob and String data types

The above pattern can be followed for all data types except Blob and String; the differences are detailed below.

Renaming a column of a Blob data type

To rename a column of a Blob data type, users need to provide the actual type of the data stored in the Blob. The Edit Logger/Listener Column UI now provides the ability to edit the data type field when the Intraday Type field is Blob. Note that the data type field can only be changed if the currently displayed value is none. In addition to setting the data type for the application log file column, the Listener column's dbSetter attribute must include an explicit cast, as shown in the example schema below.

The example below shows the Destination, SameName, and Source columns, each with data type java.util.List.

<Table name="RenameColumn101" namespace="ExampleNamespace" defaultMergeFormat="DeephavenV1" storageType="NestedPartitionedOnDisk">
  <Partitions keyFormula="${autobalance_single}" />
  
  <Column name="Date" dataType="String" columnType="Partitioning" />
  <Column name="Destination" dataType="java.util.List" />
  <Column name="SameName" dataType="java.util.List" />
  
  <LoggerListener logFormat="1" loggerClass="RenameColumn101Logger" loggerPackage="com.illumon.iris.test.gen" 
                  rethrowLoggerExceptionsAsIOExceptions="false" tableLogger="false" generateLogCalls="true" verifyChecksum="true" 
                  listenerClass="RenameColumn101Listener" listenerPackage="com.illumon.iris.test.gen">
    
    <SystemInput name="Source" type="java.util.List" />
    <SystemInput name="SameName" type="java.util.List" />
    
    <Column name="Destination" intradayType="none" dataType="java.util.List" dbSetter="(java.util.List)blobToObject(Source)" />
    <Column name="SameName" intradayType="Blob" dataType="java.util.List" autoBlobInitSize="32000" />
    <Column name="Source" intradayType="Blob" dataType="java.util.List" dbSetter="none" autoBlobInitSize="256" autoBlobMaxSize="32000" />
  </LoggerListener>
</Table>

Renaming a column of a String data type

The basic steps are similar to the previous examples, except that the Intraday Type for a String column is EnhancedString. This is reflected in the options available for Intraday Type: the list of valid options no longer includes String.

The dbSetter value for the Destination column should include a toString() call on the setter value, as shown in the example below.

<Table name="RenameColumn02" namespace="ExampleNamespace" defaultMergeFormat="DeephavenV1" storageType="NestedPartitionedOnDisk">
    <Partitions keyFormula="${autobalance_single}"/>

    <Column name="Date" dataType="String" columnType="Partitioning"/>
    <Column name="Destination" dataType="String"/>
    <Column name="SameName" dataType="String"/>

    <LoggerListener logFormat="1" loggerClass="RenameColumn102Logger" loggerPackage="com.illumon.iris.test.gen"
                    rethrowLoggerExceptionsAsIOExceptions="false" tableLogger="false" generateLogCalls="true"
                    verifyChecksum="true" listenerClass="RenameColumn102Listener"
                    listenerPackage="com.illumon.iris.test.gen">

        <SystemInput name="Source" type="java.lang.String"/>
        <SystemInput name="SameName" type="java.lang.String"/>

        <Column name="Destination" intradayType="none" dataType="java.lang.String" dbSetter="Source.toString()"/>
        <Column name="SameName" dataType="java.lang.String"/>
        <Column name="Source" intradayType="EnhancedString" dataType="java.lang.String" dbSetter="none"/>
    </LoggerListener>
</Table>

New parameter for Replay Queries

The option to replay data at variable speeds has been added to Replay Queries. This new parameter can be used in stress testing to simulate heavier data loads.

New builder-methods for JSON/Kafka

A number of new builder options have been added to the JSON/Kafka builder; they are documented in the javadoc for JsonConsumerRecordToTableWriterAdapter#Builder. The new builder fields allow a JSON array to be unpacked and expanded into a number of rows, and allow easy builder-defined parsing of nested JSON messages.

Additionally, multiple JSON messages can now be parsed in parallel to improve overall throughput while still guaranteeing proper row ordering.
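
As a rough sketch of the builder pattern involved (the adapter class is named in the javadoc above, but every builder method and name below is a hypothetical placeholder standing in for the options just described; consult the JsonConsumerRecordToTableWriterAdapter#Builder javadoc for the actual API):

// Hypothetical sketch: the method names below are placeholders, not the real
// builder API; see the JsonConsumerRecordToTableWriterAdapter#Builder javadoc.
adapter = JsonConsumerRecordToTableWriterAdapter.builder()
    .addColumnFromField("Symbol", "sym")     // placeholder: map a top-level JSON field to a column
    .expandArrayField("fills")               // placeholder: unpack a JSON array into one row per element
    .addNestedParser("order", nestedParser)  // placeholder: nestedParser is a hypothetical helper for a nested message
    .parallelParsingThreads(4)               // placeholder: parse multiple messages in parallel
    .build()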