Detailed Version Log: Deephaven 1.20231218

Note

For information on changes to Deephaven Community, see the GitHub release page.

Certified versions

1.20231218.534
1.20231218.532
1.20231218.528
1.20231218.523
1.20231218.491
1.20231218.478
  • Certification on the following tickets is incomplete: DH-17886, DH-17885, DH-17873, DH-17835, and DH-17824.
1.20231218.446
1.20231218.432
  • DH-17557 (pre-built Kubernetes images) is not included in this certification.
1.20231218.385
1.20231218.345
  • Dashboard Export exports all the dashboards you own rather than only the selected dashboards.
1.20231218.289
1.20231218.260
  • Using dh_helm from this version is not recommended (fixed in 1.20231218.274).
  • Sharing and using deephaven.ui from PQs remains in development.
  • Web UI CSV Import Queries do not refresh schemas.
1.20231218.219
  • The deephaven_enterprise.notebook module in Core+ is not working in this version. A fix is expected in version .229.
1.20231218.202
  • plotly-express is recommended for plotting.
  • The backup_deephaven script does not currently work when creating a full etcd backup on a system with multiple etcd servers. This affects the --all and --etcd options.
1.20231218.153
  • DH-15771 (the dh_helm script) is not yet functional in this version.
  • Automated server selection does not function in this version.
1.20231218.115

Detailed Version Log: Deephaven v1.20231218

Patch Details
534Merge updates from 1.20221001.417
  • DH-19767: Fix link in release notes file
  • DH-19728: Add inotifywait to rocky8 jenkins images
  • DH-17440: Migrate all Jenkins builds off gcr.io images
  • DH-19455: Update bard VM images for Ubuntu 20.04 EOL
  • DH-12106: Correct locking around ScriptRepository script lookups
  • DH-12106: Improve locking around controller git actions
533DH-19692: PQ-backed dashboards shared with multiple users not always visible
532DH-19360: remove unused property
531DH-19360: convert LAS audit logging to use current features
530Merge updates from 1.20221001.415
  • DH-19505: Fix release notes
  • DH-19144: correction to in-process LAS for unit tests
  • DH-19457: JS API usage of GetAttributesQuery should be concurrent
  • DH-19365: Added release notes
  • DH-19365: Add printf-style formatting for log-format suffix
  • DH-19144: correction to removeTable method
  • DH-19144: add audit logging to user table operations in legacy workers
529DH-19304: Release notes fixes
528Merge updates from 1.20221001.408
  • DH-19216: CART clearOnDisconnect is not reliably honored.
  • DH-19219: Remove Jupyter notebook integrations
  • DH-19208: Add __DbConfig.Tables as a default irisInternal schema to import
  • DH-19129, DH-17574: correct release note, add bounds checking to properties
  • DH-19129: server side limit on size of atomic table append via LAS
  • DH-17574: client side limit on size of atomic table append via LAS
  • DH-14383: make result of IntradayControlImpl commands easier to use
527DH-19179: Fix NPE in Kafka ingestion with transformation
526DH-19135: Disable nightly podman tests (for vplus only)
525DH-18984: Connected Legacy CodeStudios cause an unexpectedly large support log file
524DH-19122: Release note documentation fixes
523Merge updates from 1.20221001.402
  • DH-18792: Config server deadlock (fishlib push)
  • DH-18792: Config server deadlock
  • DH-18723: Input Tables cannot paste more rows than number of visible rows
  • DH-18622: Fix controller issue for started-then-deleted PQs
522DH-17824: Fix docs for restart
521DH-18954: updateBy ArrayIndexOutOfBoundsException
520DH-18967: Do not use local -n in locally-run installer scripts
519DH-17419: Show dashboard modifications with deephaven.ui changes
DH-17418: Fix dashboard major/minor notifications
518DH-18442: Fix Export Logs Fails with Large Number of Queries
517DH-18830: Update internal VM images to version 7
516DH-18645: Fix XSS issue in file list drag and drop
515Update UI packages to v0.78.9
  • DH-18798: Fix token cache growing indefinitely
514DH-18101: Adding keepalive seconds for win boxes
513DH-18125: close the LAS logger in DatabaseImpl.appendLiveTable
512DH-18708: Change gRPC logging exclusion list separator from semicolon to comma
511DH-18701: Update web packages to v0.78.8
DH-18645: Fix panel titles using html instead of just text
DH-18346: Fix partial holiday range breaks
DH-16016: Fix xBusinessTime throwing errors
510DH-18176: Suppress scary "non-fatal" warnings; only upload missing files
DH-15878: Automatically upload etcd*tar.gz files to remote machines
509DH-18422: Update generation of iris-endpoints.prop for Podman so Web ACL Editor will work correctly
508Merge updates from 1.20221001.398
  • DH-18345: Update USNYSE calendar with national day of mourning for Jimmy Carter
  • DH-18166: Avoid lock inversion in OneClickUtils (swing)
  • DH-17927: Backport DH-16434, automatic node being at the top of the list, to jackson
  • DH-18028: Fix ConnectorWrapper race conditions
507DH-18468: Wire up kubernetes flag to jenkins for eggplant
506DH-18519: Allow adding GrpcLogging exclusions via properties and env vars
505DH-18510: Ensure the exclusions list class names for gRPC logging match inner classes as well
504DH-18153: Fix bad substitution in installer script error handling function
503DH-18426: Expose DHLog in global context to allow changing the log level via the browser console
502DH-16191: Core+ Python Auth Context Methods
501DH-16872: Fix Web not displaying PQ error restart count >10 correctly
500DH-18329: Allow user calendars to override system calendars in Core+
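
For illustration, a hedged sketch of how a Core+ Python worker resolves a calendar by name via the Core calendar API (deephaven.calendar); with this change, a user-supplied calendar with the same name as a system calendar takes precedence. The calendar name below is illustrative:

```python
# Sketch only: fetch a business calendar by name in a Core+ Python worker.
# After DH-18329, a user-installed calendar named "USNYSE" would override
# the system-provided calendar of the same name.
from deephaven.calendar import calendar

cal = calendar("USNYSE")  # resolves to the user calendar if one shadows the system one
```
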
499DH-18187: Fix console history not sticking to bottom
498DH-18071: Add test to support DeephavenUI dashboards from a code studio.
497DH-18175: Modified Podman start_command.sh to support Podman on MacOS and to fix --nohup always being applied
DH-17696: Added the "A" record type to the start_command.sh dig call to ensure an IPv4 address is retrieved.
496DH-17932: Change array handling and add label searches to dh_helm uninstall functions
495DH-16189: Fix deephaven.ui panels when permissions change
494DH-17936: Warn when DH_JAVA is set to an invalid path
493DH-17798: Pin deephaven.ui version to 0.15.4
DH-16150: deephaven.ui in Enterprise
DH-17292: Fix tables opened with deephaven.ui throw error when disconnected
492DH-17880: Change Podman start_command.sh default behavior to preserve existing properties
DH-17977: Add volume options to Podman start_command.sh for illumon.d/java_lib and illumon.d/calendars volumes
DH-17999: Fix coreplus_test_query.py nightly test
491DH-18025: Add missing gradle inputs for web dependencies
490DH-18075: Disable certificate-validation script
489DH-18054: Improve validate_certificates.sh script for older OSes
488DH-18035: Remove local - from installer scripts
487DH-17852: Add validation that truststore contains desired certs; ensure all web cert intermediates in new truststore
486Merge updates from 1.20221001.394
  • DH-17822: Use ubuntu instead of centos for installer tests
  • DH-17890: Fix issue where PQ crashing outside schedule skips next start
  • DH-17822: Update iris-defaults.prop to use python3.8 by default
  • DH-17921: Update GWT-RPC to avoid websocket reuse bug
  • DH-17757: make csv import sensitive to CopyTable schemas
  • DH-17822: fix python setuptools, remove python 3.6 and 3.7
  • DH-17995: Pull back filesystem validation in etcdctl.sh
  • DH-17990: Lock inversion deadlock in WorkerLeaseHandler
  • DH-17974: WouldMatch memo key is incorrect
  • DH-17952: Improve Merge DataIndexer consumeTable performance
485DH-18002: Move QA SAML Instructions into repo to sync with releases
484DH-18004: Add explicit dependency from coreplus client to numpy to track upstream dependencies
483DH-18003: Pull back username-as-group fix from DH-17754
482Changelog fix.
481DH-17093: Discard failed promises in CompilerTools.
480DH-17951: Make InternalDeployer stop using username for group in chown
479DH-17949: Backport DH-17481 Core+ Python SystemTableLogger codec support
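
As context, a hedged sketch of the Core+ Python System-table logging this backport concerns. The module path and log_table signature follow the Core+ system_table_logger documentation as best recalled; treat all names and arguments here as assumptions and verify against your release:

```python
# Assumed sketch: log a table to a System table from a Core+ Python worker.
# Columns requiring a codec would be declared in the target schema; DH-17481
# adds codec support to this path. Names and arguments are assumptions.
from deephaven import empty_table
import deephaven_enterprise.system_table_logger as stl

t = empty_table(10).update(["Id = i", "Label = `row_` + i"])

stl.log_table("ExampleNamespace", "ExampleTable", t, columnPartition="2024-06-01")
```
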
478DH-17933: Fix java 8 compilation issues in JpyInit
477DH-17873: add --nohup option to Podman start_command.sh
DH-17886: add option to Podman start_command.sh to mount /db/IntradayUser volume
DH-17885: add option to Podman start_command.sh to mount /db/Users volume
DH-17835: remove writability check of volume directories in Podman start_command.sh
476DH-17903: ClassCastException reading parquet file in Legacy
475DH-17928: Fix QueryScheduler token warning
474DH-17929: Fix extra character in TestDefinition
473DH-17915: Fix legacy barrage subscriptions for rows with empty object arrays of non-Object type
472DH-17824: Fix podman redeployments when logs are stored on a volume
471DH-17902: QA DNS name utility enhancement
470DH-17791: Modified configurations lose creation time
469DH-17909: Increase performance overview test wait time
468DH-17901: Enable legacy python to lookup location of libpython.so
467DH-17499: Fix several dh_helm problems and improve usability when used with values.yaml
466DH-17894: Updates to QA DNS name utility
465DH-17887: deephaven_enterprise.remote_table should return a python deephaven.table.Table object
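
A short sketch of what this fix means for callers: the returned object is a native deephaven.table.Table, so standard table operations apply directly. The in_local_cluster constructor and its arguments are assumptions drawn from the Core+ docs; verify for your release:

```python
# Assumed sketch: fetch a table via Core+ remote_table and use it as a
# regular deephaven.table.Table (the point of DH-17887).
from deephaven.table import Table
from deephaven_enterprise import remote_table

t = remote_table.in_local_cluster("DbInternal", "ProcessEventLog")
assert isinstance(t, Table)          # holds once DH-17887 is in place
errors = t.where("Level = `ERROR`")  # ordinary table ops now work directly
```
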
464DH-17883: Relocate QA DNS Utility to more appropriate location
463DH-17811: Eggplant SUT setup - cleanup final cmds.
462DH-17864: Fix missing tests on integration runs
461DH-17589: Fix summary table on qa-results
460DH-17811: Setup scripts for new SUT boxes to use for Eggplant tests
459DH-17849: Set eggplant VM size in correct location
458DH-17635: Create utility to manage virtual names for QA Test results servers
457DH-17601: Setup auditable dashboards for junit tests on qa-results
456DH-17830: Stop pip from attempting to check PyPI during container initialization
455Merge updates from 1.20221001.389
  • DH-17707: CART does not schedule reconnection on some failures.
  • DH-17717: Check for Connection in com.illumon.iris.db.util.config.TableInputHandler#getTableRaw
  • DH-17744: Remove setuptools.extern from legacy python
454DH-17111: Add better handling for known error case
453DH-17626: Add Eggplant nightly jenkins job
452DH-17718: Add atexit handler to shutdown workers rapidly
DH-17430: Handle trailing metadata to produce better error messages in python client
DH-16939: More error message improvements
451DH-17770: Installer jar needs to be republished to io.deephaven.enterprise
450DH-17795: Fixed passing script text as script name to classloader in Core+
449DH-17697: Support volume for /var/log/deephaven and custom volumes in podman deployment
448DH-17687: Allow incremental include filters in TestAutomation runs
447DH-17589: Fix summary table on qa-results
446DH-17688: Fix PQ imports for eggplant
445DH-17664: Disable some inconsistent controller tests
444DH-17657: Fix default DH_ETCD_USER value in dh_users script
443DH-17654: PersistentQueryConfigTableFactory per-client tables must override satisfied.
442DH-17030: Add single-server, non-root accounts, and Envoy support to podman deployments
441DH-17638: Fixed WebClientData query Reconnects to Controller using incorrect UserContext
440DH-17630: Update Core+ to 0.35.2
DH-17622: DeferredACLTable must copy filters (cherry-pick)
439DH-17609: pushAll.sh should allow a source tag
DH-17557: Include db_query and db_merge images.
438Release note formatting fix.
437DH-17634: Fixed Web API Server Reconnects to Controller using Incorrect Context
436DH-17623: Core+ Performance Overview has Bad Error Message on V+
435DH-17604: Allow int-tests to setup gwt tests
434DH-17559: Republish should capture Installation Media (tar files)
433DH-17608: Tighten permissions for java plugins
432DH-15896: Update build instructions for a qa-results system based on testing of Junit ticket
431DH-17443: prohibit --password from being given more than once
430DH-17496: Additional fixes to writing vectors for Core+ support
429DH-17496: Fix writing Vectors to User tables and reading parquet arrays in legacy workers
428Merge updates from 1.20221001.386
  • DH-17518: Fix dependent scheduling stop-time restart issue
  • DH-17322: Restrict appendCentral by ACL group membership
  • DH-17408: pause tailer connections
  • DH-17467: Missed path on handling *-OLD directories
427DH-17583: Replace a stray jcenter() with mavenCentral() in gradle
426DH-17557: Build and upload container images to GS Buckets
425DH-15624: Correct tolerations applied to Envoy.
424DH-17505: Allow data managers to command DIS truncate
423DH-17568: Fix typos in cluster monitoring queries
422DH-17550: EggplantIntTestSetup should not pass --prodTests flag
421DH-17539: Do not use sudo with -g flag to invoke chgrp
420DH-17551: Qa results metrics add new release
419DH-17540: Update merge/validate queries for Test Automation
418DH-17435: Improve installer test robustness/feedback
417DH-17542: Fix test results to handle non-zero exit status
416DH-17541: Update test results server build instructions
415DH-15896: Track unit tests more accurately
414DH-15624: Add support for tolerations, selectors, and affinity in Helm chart
413DH-17483: Fix run counter logic on qa-results
412DH-17343: Make installing from infra node as irisadmin work (plus test)
411DH-14499: Containerized deployment with podman
410DH-17120: Add qualified references to etcdctl in installer scripts
409DH-16353: Ability to disable password authentication in front-end (swing)
408DH-17498: Fix for dhconfig NPE introduced in 406.
407Merge updates from 1.20221001.382
  • DH-16827: Parameterized Queries listen for OneClick events (swing)
  • DH-17541: Check diskspace before unpacking large tar files
406DH-17055: dhctl checks for disabled tailer ports when scanning
DH-17443: remove auth options from dhconfig checkpoint
DH-17498: remove duplicate status and garbage logging from dhctl
405DH-17506: Do not treat a default INSTALLER_ROUTE value as a user override value
404DH-17504: Fix disabled context menu items for superusers in Query Monitor
403DH-17493: Fix controller_tool test 11 for all supported java
402DH-17485: Fix Web Temp Schedule
401DH-17373: Add DH_NODE_N_INSTALLER_ROUTE for installs from bastion
400DH-16001: Enforce logDirectory with zoneId in Core+ SystemTableLogger builder
399DH-17120: Add DH_DIR_ETCD_BIN to control where etcd binaries are found
398DH-17232: Do not call require_owner if DH_SSH_USER not set
397DH-17272: Make V+ Core+ Python Client Compatible with non-Envoy Grizzly
396DH-17463: Update Core+ to 0.33.6
395DH-17445: Allow config Property to override ServiceRegistry hostname
DH-17056: Allow Endpoint config to override ServiceRegistry hostname
394DH-17279: Add options to disable WebGL
393DH-17420: Fix error with context menu filter on TreeTables
392DH-17414: Dispatcher should log cancellation reason
391DH-17395: Fixed an issue reading old parquet files with improper dictionary offsets. Fixed an issue reading nulls in INT96 encoding
390DH-17400: Use --verbose flags when installation scripts invoke dhconfig
389DH-17353: Remove centos test coverage
388DH-17413: Fix bad string substitution when ssh keys have -v in them
387DH-17288: Fix Exception When Importing a Jackson Query
386Merge updates from 1.20221001.380
  • DH-17353: Deprecate centos7, remove centos nightly tests
  • DH-17378: Fix monit log file location on rocky/rhel OS
  • DH-17372: Fix a bug in internal capacities of UpdateBy
  • DH-17377: avoid location subscriptions in closeAndDeleteCentral
  • DH-16495: handle reference counts while processing pending request snapshots
  • DH-17317: Updates to jackson gen loggers test
  • DH-17291: initialize BasicTableEventHandlerFactory earlier
385DH-17164: Fix JsTreeTable Fails when Same Filter is Applied Twice
384DH-17407: Fix Temurin repo setup for RHEL/Rocky
383DH-16346: Fix Validate Settings Tab for View Only Mode
382DH-17369: Convert qa-results scripts to corePlus and python
381DH-17359: Fixed random Test failures noticed for csv Custom Setters
380DH-17356: Core+ Logger not handling parameters appropriately
379DH-16737: Fix package-lock.json file that was erroneously generated
378DH-16346: Fix Query Monitor Right Click Menu for View Only Query
377DH-17357: Core+ workers should listen on all interfaces in Bare Metal
376DH-17347: Core+ kafka ingester NPE with transformation
375DH-17334: Cherry pick CART improvements from 1.20211129.422
374DH-17303: Add primitive and String array support to Core+ SystemTableLogger
373DH-17327: 'dhconfig dis export' handles empty set better
372DH-17238: combine nested table filters in table data services
371DH-17332: Fix for QA meta results bug in .362
370DH-16987: Fix client-only etcd update scripts
369Merge updates from 1.20221001.374
  • DH-17295: Make rhel8 nightly tests less flaky
  • DH-17273: Manual changes after forward-merge
  • DH-17289: Put back testSerial and testParallel in main jdk8 build
  • DH-17188: CART does not detect reconnection if source is empty
  • Version log typos.
  • DH-17212: Remove PULL_CNF from jenkins menu
  • DH-17172: Controller Connection Memory Leak
  • DH-17221: AggDistinct previous values error
  • DH-17172: Controller Connection Memory Leak
  • DH-17162: Use dh-mirror for internal VM images
  • DH-14625: add release note
  • DH-14625: create an optional lenient IOJobImpl to avoid write queue overflow (improvement)
  • DH-14625: create an optional lenient IOJobImpl to avoid write queue overflow
  • DH-16604: Add controller memory stats to performance metrics
368DH-16737: Reconnect deephaven.ui widgets upon PQ restart
DH-16738: Report errors in deephaven.ui correctly to user
DH-17311: Update Core+ to 0.33.5
367DH-17318: Update VPlus gen loggers test for env
366DH-17314: QA Documentation only update path corrections
365DH-16987: Prefer etcd config files from /etc/sysconfig/deephaven/etcd/client over etcd tar
364DH-17081: Fix Pandas widgets from Core+ workers in dashboards
363DH-17299: allow configuration server to start without routing file
362DH-17170: Qa results - move testEvalStats to a 2col, 3 row table
361DH-16394: Fix Query Summary Out of Sync
360DH-16172: Show Engine, Community Port, and k8s information in Safe Mode
359DH-17168: qa-results refactoring of audit metrics
358DH-16504: ConstructSnapshot and PUT do not consistently handle Instant
357DH-17154: QA meta results refactoring
356DH-17281: Fix Padding for Dashboard Shortcut Titles
355DH-16854: If Login Cancelled after Auth then Log Out
354DH-17280: Make eggplant-api.sh properly update existing test case, fix installer tests
353DH-16876: Fixed csv_import utility not respecting proper default or explicit SAFE flag
352DH-16346: For View Only Query, hide the Save, Copy, and Delete Buttons
351DH-17268: Correctly pad zeroes for JS datetime format
350DH-17262: Better support for input and output cluster.cnf as separate files
349DH-16747: Add eggplant gradle task and jenkinsfile
348DH-16129: Update Instances of community language to Core+ in UI
347DH-17264: Ensure cron on qa-results does not repeat unnecessary elements
346DH-17098: Fix package-lock.json for Jupyter-Grid
345DH-17236: Backport DH-16790 Controller test improvements
344DH-17243: Script for rebumping changelog.
343DH-17162: Use dh-mirror for internal VM images
342DH-17219: Fix how installer handles comments in cluster.cnf.
DH-17240: Reduce cluster.cnf parser warnings
341DH-16766: Capture segmented results from nightly tests
340DH-17229: Fix inaccuracies in filtering test cases
339DH-17063: quiet dhconfig output when configuration server is down (logging npe)
338DH-17228: Use mysql acls in some nightly tests
337DH-16425: Vplus Feb-June 2024 test case updates for QA
336Merge updates from 1.20221001.364
  • DH-17138: fix pseudo subscription errors
  • Changelog typo corrections.
335DH-17211: Fix erroneous Core+ hist part table data discovery
334DH-17197: Fix failing DeploymentGeneratorTest
333DH-17076: Update Web to 0.78.1, fix LayoutHint groups on TreeTables
332DH-17174: Update Core+ to 0.33.4
331DH-17145: Remove unused CUS and RTA installer roles and stop tracking ROLE_COUNT
330DH-17157: Code Studio cannot set Kubernetes Container Image
329DH-17160: Auth server must set authenticatedContext after successful external auth
328DH-17137: Authentication Server not cleaning up all client state when client sessions expire
327DH-17131: Dependencies must be built to Java 8 API, not just bytecode
326DH-17104: Ensure worker overhead properties are applied by default for kubernetes
325Merge updates from 1.20230511.506
  • DH-17077: Make DeploymentGeneratorTest pass again
324DH-17118: Improve cluster.cnf parsing logic
Merge updates from 1.20230511.505
  • DH-16829: Update worker overhead properties
  • DH-17072: Do not write temporary-state DH_ properties to cluster.cnf
  • DH-17026: Publish EngineTestUtils (backport of DH-15687)
  • DH-17058: Make pull_cnf disregard java version
  • DH-16884: Add configuration for default user table format to be parquet
  • DH-17048: Fix controller crash and shutdown issues
  • DH-17014: Make cluster.cnf preserve original source and prefer environment variable
  • DH-17045: Address Test Merge issue in Vermilion
  • DH-17011: Forward Merge of promoted tests from Jackson and promotions in Vermilion
  • Backport DH-16948: Always use same grpcio version for Core+ Python proto stub building
  • DH-17031: Minor corrections and formatting for QA automation How-to
  • DH-16936: make recreating schemas watch more efficient
  • DH-16717: Add heap usage logging to web api, TDCP, DIS, LAS, controller, and configuration server
  • DH-17004: change closeAndDeleteCentral to clean up tdcp subscriptions
  • DH-17000: Correct improper test promotion in Jackson
  • DH-16888: Preserve original cluster.cnf when regenerating cluster.cnf with defaults
  • DH-16599: Bard Mar 2024 test case updates for qa
  • DH-16986: Update for flaky results from merge test starting at Bard
  • DH-16887: Fix test for DH-11284 starting at Bard
  • DH-16797: Change git location on QA testing systems
  • DH-16996: Forward merge of tests fixed in Bard to Jackson
  • DH-16992: Promoting Jackson level tests to RELEASED
  • DH-16979: Fix for CSV tests Jackson and later
  • DH-16663: remove cached data when there are no active subscriptions
  • DH-16934: Fix permissions check for writing workspace data
  • DH-16908: Fix dry run in iris_keygen.sh
  • DH-16851: Improve qa results setup docs
  • DH-16826: Select/Deselect All for OneClick Lists in Export dialog (swing)
  • DH-15247: Set DH_ETCD_IMPORT_NODE default value to the first config server
  • DH-16675: Account for worker overhead in dispatcher memory utilization
  • DH-16702: Vermilion April2024 test case updates for qa
  • DH-16958: Backport DH-16868 - Check if shadow package already added before adding again
  • DH-16875: Fix CSV import tests
  • DH-16873: Update and correct "testScript" section of automated QA tests
  • DH-16716: Parameterized logging broken in vermilion
  • DH-16847: Update and correct Dnd testing scripts
  • DH-16836: Fix forward merge anomaly
  • DH-16813: QA testing git update to Jackson
  • DH-16818: QA Testing System file relocation and documentation updates
  • DH-16072: Jackson Dec2023 test case updates for qa
  • DH-16480: Documentation and support for QA_Results system build
  • DH-16794: better handle export of nonexistent routing file
  • DH-16762: Fix C# docfx task (need to pin older version)
  • DH-16584: Make internal installer use correct sudo when invoking iris_db_user_mod
  • DH-16586: Improve qa cluster cleanup script
  • DH-16640: fixes for tests failing on bard and later revisions
  • DH-16708: Improve import script on qa results
  • DH-16698: Update BHS images to fix a broken rhel8 test
  • DH-16752: Fix installer tests getting null clustername
  • DH-16605: Use grep before sudo sed to avoid root when migrating monit
  • DH-16406: Improve jackson nightly installer test debugability
  • DH-16718: Fix test cases based on CommonTestFunctions refactor
  • DH-16706: ColumnsToRowsTransform.columnsToRows fillChunk does not set output size
  • DH-16700: Ensure QA results setup is maintainable
  • DH-16750: Fix temporary and auto-delete scheduling checks
  • DH-16542: CUS should trim client_update_service.host - fix for Envoy
  • DH-15013: Fix upload CSV from Web UI for timestamp columns
323DH-17066: Apply Kubernetes Control to Legacy workers in ConsoleCreator
322DH-17113: Fix permissions on test support files
321DH-17063: Fix integration tests from quieter error output
320DH-17070: AuthenticateByPublicKey misses state when different servers are involved
319DH-17101: Update protobuf gradle plugin
318DH-16557: Fixed DHC CSV Import not working with gzip files
317DH-16955: Fix rollup rows and moved columns hydration
316DH-16983: Test Automation - push git scripts to all controller nodes
315DH-17098: Update package-lock in jupyter-grid
314DH-17087: Minor test system documentation update for V+
313DH-17086: Fix Test Automation README
312DH-17074: Controller Tool Status Should Use a Static Table
311DH-14265: Make new PR check cancel any still-running PR check
310DH-16255: Fix incorrect log message in python setup script
309DH-17063: quiet dhconfig output when configuration server is down
308DH-17057: add support for remote DataImportServiceConfig.asTableDataServiceConfig
307DH-17049: Allow disabling password authentication
306Update web version 0.78.0
  • DH-17051: Fix partition selector not showing more than 1000 options
  • DH-17052: Do not show "Delete Selected Rows" for input tables without columns
305Java 8 build fix and changelog fixes.
304DH-17035: Ensure BUILD_URL from jenkins is populated in Test Automation results
303DH-17032: Deep linking can cause the wrong dashboard to open after logout
302DH-17042: Forward-merge Test Automation
301DH-17033: Combine JS API table ops on login to improve speed
300DH-16978: Additional fixes for multiple auth servers
299DH-17029: handle removed locations in the LTDS
298DH-16866: Improve Test Automation to target cluster
297DH-17023: Added "target version" parameter to update-dh-packages script
296DH-17017: Skip staging tests on Feature Branch runs in jenkins
295DH-16143: Update GWT-RPC to avoid websocket reuse bug
DH-16642: Web UI should allow a second QM
294DH-16164: dhconfig schema import -d does not handle symlinks properly
293DH-16933, DH-16778: Fix dashboard export saving extra dashboards and queries
292DH-16995: Plotly express does not work in Deephaven UI
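
For orientation, the kind of figure affected by this fix: a deephaven plotly-express plot built server-side and rendered as a widget in the Web UI. A minimal example using the deephaven.plot.express API:

```python
# Minimal plotly-express figure of the kind DH-16995 makes work in the
# Deephaven Web UI: built from a Deephaven table, rendered as a widget.
import deephaven.plot.express as dx
from deephaven import empty_table

t = empty_table(100).update(["X = i", "Y = Math.sin(0.1 * i)"])
fig = dx.line(t, x="X", y="Y")
```
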
291DH-16997: Make internal installer detach install scripts from java process group to avoid getting killed on failures
290DH-16950: Prevent ChunkerCompleter.resolveScopeType from getting into an infinite recursive loop and crashing
289DH-16988: Ensure nightly test VM names are unique, and other test stability improvements
288DH-16658: Hive layouts should return an empty table if the table base location does not exist
287DH-16941: MergeParametersBuilder should have a default value for threadPoolSize
286DH-16926: Fix test generation error on multi-PQs
285DH-16914: Update DHC packages to ^0.77.0
DH-16914: ACL Editor crashes with error: No item found matching key: 0
284DH-16976: Fix java 11 compile from 283
283DH-16976: Fixed Core+ out of bounds errors when trying to unbounded fill
282DH-16916: Pin Spectrum Dependencies for @adobe/react-spectrum 3.33.1
DH-16916: ACL Editor: unable to scroll the Namespace dropdown
281DH-16978: Multiple auth server private-key validation failures
280DH-16970: Ensure EXCLUDE filter in Test Automation is honoured on kafka
279DH-16969: Allow RemoteTableBuilder to work with clusters behind Envoy
278DH-16971: Make internal installer clear failed systemd units so systemctl is-system-running works
277DH-16907: Allow test automation with no FeedOS schemas
276DH-16913, DH-16962: Make all nightly tests pass, and run stably
275DH-16965: correct error message when LAS is not available
274DH-16544: Bug fixes for dh_helm
273DH-16890, DH-16779: fix java version on nightly tests, use internal java repositories
272DH-16953: Put the version back into rpm package names
271DH-16921: Fix DashboardOverride rewriting without changes
270Update Web Version 0.76.0
  • DH-16924: Fix laggy notebooks in Web UI
  • DH-16595: Fix null partition filter
  • DH-16230: Fix child selection in Query Monitor Query Type dropdown
269DH-14825: Java 8 compilation fix.
268DH-16948: Always use same grpcio version for Core+ Python proto stub building
267DH-16944: Add Cross Cluster test to Grizzly QA
266DH-16925: Snapshot locations break multi-level pages for parquet regions
265DH-16910: Adjust Kubernetes heap overhead parameters
264DH-16923: make claims filter consistently accept user tables
263DH-15984: better handle export of nonexistent routing file
262DH-16907: Update FeedOS schemas for ticking source
261DH-14825: CUS should ensure served files are accessed
260DH-15824: Fix cluster.cnf backup commands
259DH-16862: Core+ does not properly convert between Legacy and Core NULL_CHAR
258DH-16883: Upgrade should import the new status-dashboard-defaults.json file
257DH-16889: Fixed an NPE in ungroup with nulls in native array columns
256DH-16898: Fix configuration for high cpu tests
255DH-16890: Fix imports in AbstractDeploymentTest.groovy
254DH-16811: Config to support nightlies in test automation
253DH-16868: Check if shadow package already added before adding again
252DH-16189: Update deephaven.ui and plotly plugins
DH-16189: Fix re-hydration of deephaven.ui plugins in dashboards
plotly-express v0.7.0: https://github.com/deephaven/deephaven-plugins/releases/tag/plotly-express-v0.7.0
deephaven.ui v0.13.1: https://github.com/deephaven/deephaven-plugins/compare/ui-v0.8.0...ui-v0.13.1
251DH-16720: Support deephaven.ui dashboards from PQs
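
For illustration, a minimal deephaven.ui dashboard of the sort this enables from a PQ script, assuming deephaven.ui 0.13+ (where ui.dashboard, ui.row, and ui.panel are available); the table and panel title are illustrative:

```python
# Sketch: a deephaven.ui dashboard declared in a Persistent Query script.
# With DH-16720, the resulting dashboard widget can be opened from the PQ.
from deephaven import empty_table, ui

t = empty_table(100).update(["X = i", "Y = X * X"])

dash = ui.dashboard(
    ui.row(
        ui.panel(t, title="Squares"),
    )
)
```
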
250DH-16852: Do not permit scheduling a worker with a heap larger than available memory.
249DH-14975: Make DH_JAVA the ultimate source of truth for "where to find java"
DH-15824: Backup previous /etc/sysconfig/deephaven/cluster.cnf whenever upgrading
248DH-16420: More versatile configuration of status dashboard query monitoring
DH-16850: Fix Kubernetes installation issues
247DH-16832: Test built in Community Code should not be run in Java8
246DH-16842: Use Parameter instead of QueryTracker Config in Dispatcher Usage Update
245DH-16821: Pull qa-results improvements forward to vplus
244DH-16838: DELETEs handled incorrectly in Presence KV Monitor
243DH-16822: ReplicatedTable doesn't handle all possible long-backed time sources
242DH-16544: dh_helm fixes and enhancements
241DH-16835: Expose WorkerHandle through PersistentQueryHandle as well as connections.
240DH-16224: Refresh ACL data when switching to Import, Merge, or Validate tabs
239DH-16823: Controller client should not print scary error on graceful shutdowns
238DH-16773: Web version bump to v0.72.0
237DH-16816: Failure to Cancel PQ Can Result in Controller Crash
236DH-16783: Fix ChartBuilder in Web UI
235DH-16804: Update deephaven.io version log generation script.
234DH-14610: Use domain names to send files to etcd server machines
DH-15749: Etcd server IP address should be configurable to support multiple network interfaces
DH-14859: Never leave world-readable etcd config tars on disk
233DH-16776: Fix errors when sorting symbol tables with mixed nulls
232DH-16791: SystemTableLogger Checker is Timing Out
231DH-16787: PresenceWatcher is started under lock
230DH-16693: Run core+ integration tests during nightly installer testing
229DH-16767: Core+ exec_notebook broken in .213
228DH-16633: Rebuild VM images with etcd 3.5.12 instead of 3.5.5
227DH-16805: Fix C# docfx task (need to pin older version)
226DH-16655: Make internal installer replace certs that expire in 2 months or less
225DH-16605: Use grep before sudo sed to avoid root when migrating monit
224DH-16721: Core+ Python Client Should Reauthenticate to Controller
223DH-16740: Share JS API cache between deferred loader and app
222DH-15994: Fixed Core+ DictionaryRegionHelper incorrectly accounting null values
221DH-16689: Core+ worker cannot read direct DbArray Columns
220DH-14774: correct syntax error in update_workspace.py, update installer version
219DH-16731: Republish coreplus java jars, and always use jdk11 for republishing
218DH-16719: SAML Login From Core+ Python Client.
DH-16695: Support io.StringIO as a private key in Core+ Python Client.
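
A hedged sketch of what DH-16695 allows in the Core+ Python client: supplying the private key as an in-memory io.StringIO rather than a file path. SessionManager and private_key follow the Core+ client docs as best recalled; the URL and key file are placeholders:

```python
# Assumed sketch: key-based login to a Deephaven cluster from the Core+
# Python client, passing the key via io.StringIO (DH-16695).
import io
from deephaven_enterprise.client.session_manager import SessionManager

session_mgr = SessionManager("https://deephaven-host:8000/iris/connection.json")

with open("/path/to/priv-exampleuser.txt") as f:  # placeholder key file
    key_text = f.read()

session_mgr.private_key(io.StringIO(key_text))  # a file path is no longer required
```
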
217DH-16729: unbox primitive types even when specified as java.lang.Type in schema
216DH-16728: correct error message diagnosing invalid listener
215Release note fixes.
214Merge updates from 1.20230511.488
  • Fix Unit test failure due to expanded assertTableEquals checks
  • Fix forward merge conflict in Core+.
  • DH-16469: Bard Feb 2024 test case updates for qa
  • DH-16569: Backport DH-15882 to fix Pandas data frame view bug
  • DH-16149: Improve npm build caching in CI
  • DH-11512: handle '*-OLD' directories better
  • DH-16672: EmptyToNullStringRegionedColumnSource bypasses index narrowing in grouping
  • DH-16623: Unit test fix from .321
  • DH-16623: Index and GroupingBuilder .hasGrouping() should only look at locations relevant to the desired index
  • DH-16624: ShiftedColumns Interacts with Time Parsing
  • DH-16628: whereIn/whereNotIn with Empty Set Tables can Fail
  • DH-16597: check for routing to export before opening output file
  • DH-16591: Fix reading Parquet files with Mixed dictionaries and Offset Indices
  • DH-16443: Add sudo -u DH_MONIT_USER for installer when checking if monitrc needs migration
  • DH-16408: Do not use yum on systems with dnf
  • DH-15523: Allow config_packager to run as irisadmin when irisadmin is monit user
  • DH-14156: improve merge query and dhctl feedback when tailer ports are disabled
  • DH-14169: Fix message when purge fails
  • DH-16363: Remove kubectl from VM base images
  • DH-16442: Make ubuntu monit de-rooting use DH_MONIT_USER instead of DH_ADMIN_USER
  • DH-16113: Bard Jan 2024 test case updates for qa
  • DH-16451: upgrade npm to latest lts/fermium version
  • DH-16450: avoid a deadlock due to lock inversion
  • DH-16053: correct minor errors in DataImportChannel
  • DH-16367: Make INTERNAL_PKI=true work correctly on mac
  • DH-16354: Make INTERNAL_PKI=true cert expiry limits configurable
  • DH-15467: Change superfluous gitlab url into github url
  • DH-16107: NPE in whereIn Error Handling
  • DH-16347: add synchronization to getGroup... methods in AbstractDeferredGroupingColumnSource
  • DH-16499: improve feedback in 'dhconfig routing export' when there is no routing file in etcd
  • DH-15729: Allow resources to be skipped in Test Automation
  • DH-16443: Make ubuntu de-rooting grep on monitrc before trying to sed the file
  • DH-16468: Vermilion Feb 2024 test case updates for qa
  • DH-16669: Schema with functionPartitionInput=true generates a broken logger
  • DH-16622: Address inconsistencies in automated tests for DDR
  • DH-16632: Updated controller_tool tests support file locations and stability for vermilion and following
  • DH-16592: Find healthy etcd node for etcd snapshot
  • DH-16534: Importing Jackson ACLs to Vermilion or later fails because SystemACLs are not recognized
  • DH-16612: Avro Kafka Ingestor error with extra consumer fields
  • DH-16580: Bad Origin Causes NPE in Auth Server
  • DH-15070: Make proto re-builds check for "use shadow package" before altering source
  • DH-16542: CUS should trim client_update_service.host
213DH-16678: Add vermin check to Core+.
DH-16705: Add meta import machinery for controller.
DH-16709: Provide Mechanism to Refresh Controller Scripts without Git Configured
DH-16710: git repository state is incorrectly serialized
212DH-16703: Update Vermilion+ to 0.33.3
211DH-16687: Add etcd ACL encoding tool
210DH-16670: FeedOS test support from Bard to VPlus
209DH-16668: Refactor controller_tool tests to wait for logging to be done.
208DH-16686: Update Vermilion+ to 0.33.2
207DH-16634: Fix dashboards migration issue
206DH-16626: Support deephaven.ui dashboards from a code studio
205DH-16621: Expose available query objects as a table to users
204DH-16664: Fix Core+ cpp-client dockerized build after incompatible changes on DHC 0.33
203DH-16656: Fix listener reachability in TableMapTest, added integration test for DH-16656
202DH-16656: ResolveTools sets empty columns on snapshot
201DH-16659: tailer handles data routing impl that does not support change notification
200DH-16652: Update automated tests on controller_tool for VPlus
199DH-16644: Update copyright year in web launcher page
198DH-16637: Fixed Core+ .toUri() stat'ng directories during discovery
197DH-16462: Add profile JIT CPU options
196DH-16582: upgrade etcd from 3.5.5 to 3.5.12
195DH-16616: Fix Safe Mode in Web UI
194DH-16617: Fix line plots in Web UI
193DH-16589: automated validation test for import driven kafka lastBy DIS
192DH-16593: Fixed Legacy CART trying to reconnect even after good data was received.
191DH-16189: Enable deephaven.ui widgets from PQs
190DH-16575: Core+ Python Client Wheel Should be Usable in Worker VEnv
DH-16530: Loosen Core+ Client Version Requirements
189DH-16492: Fix javadoc
188DH-16564: Package jupyter logged_session in iris repo
187DH-16500: Update deephaven-plotly-express plugin to 0.5.0, update Web UI to v0.67.0
DH-16427: Web plotting should not ignore xRange for histPlots
DH-16490: Fix deephaven.plot.express data
186DH-16596: Reapply fix for DH-16221 (Controller allows resubscriptions)
185Jdk8 Compilation Fix.
184DH-16290: correct initial install condition
183DH-16290: 'dhconfig routing' validate and import must consider existing extra dises
DH-15984: improve 'dhconfig routing export' feedback when there is no routing file
182DH-16492: create local cached DataRoutingService, use it in the tailer
181DH-16249: Use correct API for widgets
180DH-16364: If etcd is setup but not working correctly, fail the install instead of generating a new etcd cluster
179DH-16579: URL encode groups in removeMembership for MySQL ACLs
178DH-16537: Fix partition_by failing to render the table
177DH-15918: correct unit test
176DH-15918: tailer restarts on routing change
DH-16148: create listener framework for data routing service
175DH-16144: add writers group for data routing service writers
174DH-16543: Add missing WorkspaceData data types, and all-types file, to backup_deephaven script
173DH-16554: Update Web UI to v0.66.1
DH-16554: Upgrade React Spectrum to ^3.34.1
DH-16554: Removed some ACL Editor css classes
172DH-16383: Remove all passwords from logs, automated test of no passwords in logs
171DH-16483: Fix WindowCheck entry combination bug.
170DH-16370: Update Core to version 0.33.1
169DH-16533: Fix dispatcher error response failure conditions
168DH-16551: Link to Enterprise Javadoc from Core+ Javadoc
167DH-16563: dhconfig dis add should mention --force when the dis already exists
166DH-16556: allow export of core dises
165DH-16535: Fix Persistence KVs not being cleaned up properly
164DH-16220: Add DBNameValidator to namespace and tablename ACL input fields
163DH-16529: SBOM coreplus artifact shouldn't use dnd in its name
162DH-16524: Fix WorkerKind JSON generation from controller request
161DH-16337: Update delete intraday data label to match swing
160DH-16483: Fix javadoc build failure
159DH-15771: Fixes for dh_helm script
158DH-16508: Integration test update rocky compatibility
157DH-16493: Make core+ builds leverage gradle task caching
156DH-16483: Reduce WindowCheck memory usage. #1398
155DH-16417: Make manifest.json visible in k8s environments
154DH-16282: Fixed CI build to fail on jest / junit errors not just failures
DH-16282: ACL Editor - Table ACLs error when clicking "Update ACL" that will become "Add ACL"
153DH-16494: Fix Swing ACL Editor requesting ACLs for null NS or TN
152Merge updates from 1.20230511.475
  • Release note updates.
151DH-16479: Integration tests added for core+ kafka transformations
150DH-16489: Integration test for Python Core+ table groups.
149DH-16488: Update Core to 0.32.1
148DH-16489: Core+ Python ACL Transformer not unwrapping Tables
147DH-16438: Add time to installer dependencies / rocky VM images
146DH-16475: Integration test fixes
145Merge updates from 1.20230511.474
  • DH-16015: Vermilion Dec 2023 test case updates for qa
  • DH-15598: Additional schema validation fixes
  • DH-15598: Add merge validate pqs for new tables
  • DH-16387: Fix R setup in test automation from forward-merge
  • DH-16275: Fix test automation anomalies
  • DH-16418: Fix DiskBackedDeferredGroupingProvider changing post-mutator "No groups found" to "No grouping exists"
  • DH-16382: Perform monit migration using systemd override.conf
  • DH-16206: Remove duplicated gen-keys.sh script in jackson
  • DH-16401: Fix Groovy script defined classes imported with db.importClass() break internal formulas
  • DH-14283: DeephavenNullLoggerImpl should use dynamic pool
  • DH-16237: change user buffer caching to restore backpressure
  • DH-14938: Properly cache downloadDocFx task, to reduce build flakiness
  • DH-16291: Add tags to test with no data and address one breaking test for Bard
  • DH-16273: backport DH-14452 to fix logging error
  • DH-15740: Test certificate fingerprints so we always update certs when they change
  • DH-16262: Wrap calls from groovy to gsutil inside bash -ic
  • DH-16252: Update USNYSE Business Calendar to Include 2026
  • DH-16242: CART Leaks Connections when Snapshots are Slow, Exception can escape in refresh()
  • DH-16309: EmptyToNullStringRegionedColumnSource should copy and wrap underlying provider by default
  • DH-16309: Fixed loss of grouping when SourceTable.MAP_EMPTY_STRINGS_TO_NULL == true
  • DH-16300: Test Automation: have minorversion flow to results summary
  • DH-16279: Add MessageListener example implementation to SBEStandAlone jar
  • DH-16262: Wrap calls from groovy to gsutil inside bash -ic
  • Revert squashed forward-merge
  • DH-16415: Fix a race in GrpcLogging initialization.
  • DH-16041: Move installer tests to jdk17
  • DH-16205: Remove nightly core+ tests (vermilion only)
  • DH-16328: Add release notes for DH-11713
144DH-16472: NPE in PQWorkerServiceGrpcImpl
143DH-16463: Update Web UI to v0.63.0
DH-16463: Fix false positives when detecting layout changes
142DH-16471: Added shortcut for copy version info
141DH-16460: fix poor contrast color of notice message in share modal
140DH-16455: Fix Download CSV in Web UI
139DH-16458: Fix Swing ACL Editor requesting ACLs for null NS and TN
138DH-16452: Disable table name dropdown when * ns is selected
137DH-14914: Test core+ auto install
136DH-16127: Fix readme for dnd version
135DH-16373: ACL write server should enforce system user limitations
134DH-16446: Legacy Parquet does not interpret LocalDate stored as int in Parquet format
133DH-16315: ACL Write Server Should Prohibit Namespace=* without Tablename=*
132DH-16326: io.deephaven.kv.acl.AclJetcdProvider Needs to Escape Data
131DH-16336: Consistent handling of whitespace typing / pasting
130DH-16411: Integration test had duplicated serial number.
129DH-16440: Fix Kubernetes restartAll script errors
128DH-16025: Legacy BarrageTableResolver should return a table
127DH-16437: Make rocky9 require rsync-3.2, same as RHEL 9
126DH-16426: Update Web UI to v0.61.0
DH-16426: Allow themes to use any srgb color for definitions
125Release note updates.
124DH-16413: Non-superusers should have access to WebClientData tables
DH-16416: UserGroupArrayFilterGenerator should escape groups
123DH-16277: When using Rollup Rows, ungrouped columns become sorted alphabetically and should not
122DH-16302: Fix Merge/Validate queries adding an extra field to the PQ
DH-16371: Fix PQ Start/Stop actions inconsistently enabled/disabled
121DH-16385: Envoy Does not Have Cluster/Route for Multiple Auth Servers
120DH-16411: Dispatcher crashes when invalid WorkerKind is requested
119DH-15771: Create Kubernetes Deephaven install/uninstall/upgrade wrapper script
DH-16217: Update buildAllForK8s.sh to use coreplus instead of dnd
118DH-15955: Official installer support for rocky8/9
117DH-16327: Fix Java 8 incompatibility
116DH-16327: Properly URL encode ACL requests
115DH-16362: allow dises+routing for complete import of routing config
114DH-16362: revert allow dises+routing for complete import of routing config
113DH-16362: allow dises+routing for complete import of routing config
112DH-16350: fix installer keygen script for controller and acl write server
111DH-16321: Duplicated values in a cluster.cnf file should cause a validation error
110DH-15803: Improve error messaging around partitioned user table location overlap
109DH-16368: Add Support for remote clusters with RemoteTableBuilder
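
A hedged sketch of subscribing to a table on another cluster, which this change enables. The for_remote_cluster/password/subscribe chain mirrors the RemoteTableBuilder-backed Python API as best recalled; every name and URL here is a placeholder or assumption, so verify against your release's documentation:

```python
# Assumed sketch: Core+ Python access to a table on a remote cluster via
# the RemoteTableBuilder-backed remote_table API (DH-16368).
from deephaven_enterprise import remote_table

t = (remote_table.for_remote_cluster("https://other-cluster:8000/iris/connection.json")
     .password("exampleuser", "examplepass")  # or key-based auth
     .subscribe(namespace="ExampleNs", table_name="ExampleTable"))
```
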
108DH-16332: Ensure worker to controller notifications (eg table errors) are not lost if controller restarts
107DH-16369: Make internal installer overwrite versions when using pull_cnf
106DH-15934: Routing config change for RemoteTableAppender in k8s
105DH-16288: Hide k8s-related fields in query monitor when not deployed in k8s
104DH-16251: Allow Core+ workers to load calendars from disk
103DH-16082: Don't show RunAndDone queries in the Panels menu
102DH-16355: Kafka Community Test Fails After .079
101DH-16219: Disallow namespaces and table names containing spaces at ACL API endpoint
100DH-16287: Web API Server Reconnections Preserve Code Studio
099DH-16331: Make PULL_CNF work for jenkins and local vm deploys
098Revert DH-16251: Allow Core+ workers to load calendars from disk
097DH-15353: Add client IP address to audit log for authentication events in web_api_server
096DH-16251: Allow Core+ workers to load calendars from disk
095DH-16320: ACL Editor - url encoding
094DH-16333: db.livePartitionedTable error message misspelled
093DH-15415: Fix jdk8 javadoc task
092DH-16269: Add support for Core+ queries in irisapi examples
091DH-16312: ACL Editor - Close selectors on select
DH-16314: ACL Editor - Only allow * table name when * ns is selected
090DH-16139: Add cert expiry times to status dashboard
089DH-15415: Improve ACL exceptions
088DH-15521: Add official installer support for ubuntu 22.04
087DH-16324: Fix DbAclEditorTableQuery canedit logic
086Merge updates from 1.20230511.464:
  • DH-16313: Fixed NPE on legacy metadata overflow file access
085DH-16235: Fix QM Summary out of sync with the Queries Grid
084Merge updates from 1.20230511.463:
  • DH-15665: Remove internal installer workarounds for jackson+rhel9
083DH-15864: Fix undefined partitions in IrisGridPanel state
082DH-16318: Make iris_keygen.sh avoid adding to truststore when --skip-* flags are used
081DH-16305: Fixes to get Deephaven working with IAP in Kubernetes
080DH-16121: ACL Editor - Action tooltips
079DH-16296: MySQL publickey table fails on Jackson to Vermilion+ Upgrade
DH-16298: New Installations Should default to etcd ACLs
DH-16307: DH_DND_VERSIONS should write "auto" not automatically selected version to cluster.cnf
DH-16297: Jackson to Vermilion Upgrade does not Create Python 3.10 Virtual Environment
078DH-16058: Add Memory Printing to Tailer
077DH-16286: MultiViewBuilder Test Must not Depend on Static Inheritance
076DH-16150: Add widget plugins to handle widgets in Web
075DH-14646: improvements after testing
074Merge updates from 1.20230511.462:
  • DH-16265: Make LocalMetadataIndexer methods public
  • DH-16278: Automation Should Detect "Stuck" PQ Tests
  • DH-16243: Configure high-cpu integration test box on j17 CI
  • DH-16128: Fix grouping propagation when inputs are filtered
  • DH-16130: Ensure blank line in changelog is handled consistently.
  • DH-16202: QA cluster maintenance script usability
  • DH-15913: Segment parquet tests to an isolated high-CPU box
  • DH-16200: Fix Automation/src/test/resources/testScript/engine/updateby directory duplication
  • DH-16131: update DH revision name map for QA results analysis query
  • DH-16087: Add HTTP security headers to Envoy configuration
  • DH-16192: Always set DH_ETCD_IMPORT_NODE to a single machine
  • DH-16181: Fixed MapCodec ignoring offset and length params
  • DH-16176: Backport of DH-15469 (Use external SSH executable for git)
  • DH-15157: CART Error Propagation and Reconnect Counting Fixes
  • DH-16128: Fix grouping propagation when inputs are filtered
  • DH-15876: Add Test Automation support for configuring java tests
  • DH-15493: Enable version suffixes for DbInternal tables
  • DH-16114: Test Automation: revert bad test case that was released
  • DH-16055: Fix sed substitution when numbers and wildcards overlap in vm-tools README
  • DH-16103: Remove etcd passwords from log output
  • DH-16014: Test Automation: add test case updates for December
  • DH-16108: Test Automation: fix NPE on template lookup
  • DH-16090: Test Automation: pull back integration logs even on fatal condition
  • DH-16078: Test Automation: run locally via installer
  • DH-15875: Allow disabled tests to run in testAutomation - control by config only
  • DH-15653: Add tagging to Test Automation
  • DH-15157: CART skipping reconnection attempts
  • DH-16098: update to test analysis query to remove duplicate data and add MinorVersion field
  • DH-16096: Better check for anonymous mysql users before we attempt to fix them
  • DH-16039: Reenable rhel9 installer test
  • DH-16096: Fix nightly installer test mysql error (anonymous user problem)
  • DH-14113: Use irisrw instead of root when possible in dbacl_init.sh
  • DH-16074: update controller tool tests to sudo use consistent with client env
  • DH-15988: fix logging error
  • DH-15275: Add release-focused testcases to Jackson July-Dec 2023
  • DH-16061: update controller_tool test for null pointer message
  • DH-16234: Publish PQ details into session scope
073DH-16247: Update Core+ to Core 0.32.0
DH-16270: Fix update_by liveness
072DH-16280: ACL Editor - Reset tablename selection when namespace changes
DH-16281: ACL Editor - Input table ACLs should not have "Columns" column in table view
071DH-16258: iris-querymanagers should still see special queries in web
070Update Web UI to v0.59.0
  • DH-16225: Fix TimeInput not triggering onChange
  • DH-16267: Light theme
  • DH-16056: "Query Types" filter doesn't show when it has been modified
069DH-15857: Handle async due to gRPC internal state after Controller client subscription shutdown
068DH-16261: Extra DIS routing backups
DH-16274: Add DIS routing integration tests
067DH-16264: Fix unthemed legacy worker plots
066DH-16258, DH-16259: Frontends display non-displayable config types for non-admin users.
065DH-15794: Add status dashboard helm chart
064DH-16157: Core+ Cart should maintain a reference and manage the lifecycle of the ManagedChannel
063DH-16266: Add Javadoc for Protobufs
062DH-16209: Add dedicated volume for git repo in k8s envs
061DH-16189: Pass all session objects to the Web UI
060DH-16248: etcd/admin_init.sh should retry user existence check
059DH-16250: Add deephaven.ui 0.1.0 to Core+ Workers
058DH-16023: Allow enable.auto.commit check on Boolean.
057DH-16211: Fix Controller shutdown held up
056DH-16155: Support ACLs for non-existing namespaces and table names
055DH-16246: Make DbAclCorsFeature use standard cors props as backup
054DH-16003: Make management-shell a deployment, worker label value safeguards
053DH-15840: Error in CART after controller restart
DH-16156: Inaccurate Error Message when using PQ+ resolver
DH-16157: Error message being logged when using RemoteTableBuilder
052DH-16222: Add KafkaTableWriter.disNameWithStorage
051Merge updates from 1.20230511.451:
  • DH-16212: Add CORS filter to workers / web api server
050DH-16122: Refresh Query Monitor user and group lists on ACL Editor changes
049DH-16236: Prevent possible improper labels in k8s metadata
048DH-16147: Update DHE C++ and R client for DHC 0.31.0/0.32.0
047Merge updates from 1.20230511.450:
  • DH-16204: Refactor Core+ locations to better support Hive format
  • DH-15857: Controller now allows clients to resubscribe.
  • DH-16201: Fix intellij-only error in buildSrc/build.gradle
  • DH-16141: Sort ProcessEventLog in performance queries
  • DH-16138: Backport relevant csv import fixes from grizzly to vermilion
046DH-16226: Fix Grid panel state persistence
045DH-16223: Don't wrap query summary lines if there is enough space
044DH-15591: Fix QueryMonitor recovery from web api service/controller restart
043DH-16231: Fix scheduling issues in restored PQs after controller restart
042DH-16150: Support for loading module plugins from workers, deephaven.ui from Code Studio
041DH-16227: Fix an attempt to log a null Throwable from PresenceLeaseHandlerEtcd.abort
040DH-16203: Improper Global State in Core+ Python Client
DH-16215: Rename Python Core+ Client Wheel
DH-16057: Core+ python client exception when closing session after manager
039DH-16218: Set WorkerProtocolRegistry host and ports for Core+ workers
038DH-16100: Fix getObject failing after web api service/controller restart
037DH-16210: Fix usage of paste command with explicit /dev/stdin
036DH-10941: Make DH_FORCE_NEW_CERTS work correctly
035DH-16196: KafkaTableWriter Transformation should take UpdateGraph lock (fix duplicate graph names)
034DH-16085: Add new fields to the Query Summary screen
033DH-16196: KafkaTableWriter Transformation should take UpdateGraph lock
032DH-16168: Update routing.yml for kubernetes installations
031DH-16185: UpdatePerformanceLogCoreV2 is missing UpdateGraph
030DH-16179: Rename ProcessUniqueId to ProcessInfoId in Core Performance Tables
029DH-16177: Controller PQ ensureShutdown avoids trying to cancel processing requests never sent
028DH-16182: Fix Python wrapper bypassing liveness defaults
027DH-16163: SystemTableLogger Error in V+
DH-16169: PerformanceOverview Fails on Core+ Workers without Updates
026DH-15912: Improve worker startup consistency in Kubernetes when cert-manager is enabled
025DH-14646: dynamic dis management
024DH-16158: Fix scheduled jobs loop on scheduled stop spamming the controller log file
023DH-16154: Web ACL Editor is Failing Over Envoy
022DH-16151: Fix stopping PQ after controller restart doesn't work.
021DH-15890: Ensure persistent query pod labels are always populated in k8s environments
020DH-16137: Fix RemoteQueryDispatcher.workerServerPorts port range conflict with Linux ephemeral ports
019Merge updates from 1.20230511.445
  • DH-16111: Allow Flight Put requests for exports to support input tables an
018DH-16094: Core+ workers survive controller restart step 3
017DH-15800: Automatic Allocation of Kafka Resources in Kubernetes
DH-15695: Automatic Allocation of Kafka (In-Worker DIS) Resources in Kubernetes
016Merge updates from 1.20230511.444
  • DH-16135: Core+ workers should report update errors
  • DH-16136: Core+ performanceOverviewByPqName Timestamp Filter is Broken
  • DH-16120: Allow core+ pip to leverage pipcache
  • DH-16009: Fix auto-capitalization of field names in ProtobufDiscovery
  • DH-16123: Allow queryviewonly users to restart queries they have permissions to
  • DH-16054: Fix HierarchicalTables from Core+ workers not opening
015DH-15890: Add persistent query info to worker pod labels in k8s
014DH-16132: Upload installer next to tar/rpm in jfrog
013DH-16110: ServiceRegistry.writers should include iris-dataimporters and iris-datamanagers by default
012DH-16132: Delete obsolete installer upload task
011DH-16116: Fix query monitor theme
010DH-14599: Build launchers externally and download into iris
009DH-16125: Add deephaven.remote_table to sphinx output
008DH-16126: Fix config_packager to use -s instead of -d on web key files
007DH-16105: Fix core+ nightly tests
006Changelog format fix for deephaven.io
005Javadoc fix.
004Javadoc fix.
003Fix Python patch versions starting with "0"
002DH-16037: CART needs to maintain AuthContext for internal Barrage subs
001Initial release creation

Detailed Release Candidate Version Log: Deephaven v1.20230512beta

Patch Details
225Merge updates from 1.20230511.438
  • Spotless application.
  • Correct Javadoc error differently.
  • Correct Javadoc error.
  • DH-16106: Index for coreplus:hive Formatted Partitions
  • DH-16004: Fixed Csv import error that happened when value in last row, last column is empty
224DH-14968: Fix typo in python, add missing live_table parameter.
223DH-15708: Function Transformations on Core+ Kafka Ingestion
222DH-15189: Allow annotations for Envoy service in values.yaml.
221DH-15883: Update web UI packages to 0.57.1
DH-15883: Wired up theme providers
DH-15883: Added theme selector if more than 1 theme
DH-15883: Updated references to renamed saveSettings redux action
DH-15883: Updated all rgba css references to be hsla + some additional css variable mapping
DH-15864: Scroll position StuckToBottom shouldn't trigger sharing dot
DH-16020: Added theme selector if more than 1 theme
220DH-15989: Update Performance Schema to Handle Core 0.31.0
DH-15690: Core+ Performance Overview should use Index Tables
219DH-16037: Add a CART for core+ workers
218DH-15935: Use worker node name as internal partition value in k8s
217DH-16102: Fix CME in ArrayParser that resulted in Csv Import failure
216DH-16095: Derive Worker Name from ProcessInfoId
215DH-13179: Add "PQ Creation Date" as a column in the Query Config/Query Monitor
214DH-16068: Core+ workers survive controller restart step 2 (and done)
213Merge updates from 1.20230511.433
  • DH-15598: Fix DataQualityTestCaseTest failure
  • DH-16049: InteractiveConsole cannot start Merge worker
  • DH-15598: Fix integration test failures from validation fixes
  • DH-16086: Update Core+ to 0.30.4
  • DH-15993, DH-15997, DH-15950: Fix dhcVersion handling and dnd publishing
  • DH-15598: Schema validation fixes from QA cluster monitoring
  • DH-16066: update generate_loggers for consistent user
  • DH-15871: Etcd upgrade release note clarifications
  • DH-16057: Core+ Python Client Fixes
  • DH-16059: Pin Core+ Python Client Requirements
  • DH-16067: Add ping() method to Python session manager
  • DH-16064: Core+ R and C++ clients README improvements
  • DH-16064: Core+ R and C++ clients README improvements
  • DH-16047: Allow Arbitrary pydeephaven.Session Arguments
  • DH-16048: Add Frozen Core+ Requirements to Build
212Javadoc fix; correct merge of Grizzly image.
211DH-16079: Update logging in CustomSetter tests
210DH-15262: Improve the new Controller unit test timeouts
209DH-15353: audit log entries for authentication service events
208DH-12597: Support for CustomSetters in DHC CsvImport
207DH-15781: Status dashboard follow-on work after community fixes
206DH-15980: Core+ workers survive controller restart step 1
205DH-16062: Fix crcat exit code in test.
204DH-16060: Port configuration for local plugin dev
203DH-15814: Direct server process configuration and startup in Kubernetes environments.
202DH-16040: EKS Helm Chart Problems
201DH-15727: Make audit event logs code-driven
200Merge updates from 1.20230511.421
  • DH-16046: Use latest iris-plugin-tools version
199DH-16043: Minimum tornado version supported for python3.10 is 6.2
198DH-16042: Minimum wrapt version supported for python3.10 is 1.13.3
197Merge updates from 1.20230511.420
  • DH-16038: Disable flaky rhel9 installer test
  • DH-15964: Fix python3.6 in centos 7 base image
  • DH-15964: Additional tweaks for base image creation
  • DH-15964: Improve base image creation process
  • DH-16005: Test Automation: improve readme and env var passthrough
  • DH-15933: Test Automation: add Nov testcases to Bard
  • DH-15964: Build and consume per-release base images
  • DH-15964: Add rhel8/9 base images
196Merge updates from 1.20230511.419
  • DH-16036: Fix Core+ only able to read table.parquet files when using Extended layouts
195Merge updates from 1.20230511.418
  • DH-16031: Update Core+ to 0.30.3.
194DH-15262: New ETCD layout for Controller, added resync capability for inconsistent storage
193DH-16029: Fixes for K8s Image Build with New Filenames
192Merge updates from 1.20230511.417
  • DH-15825: Reject auth server and controller reload requests in Kubernetes
  • DH-16028: Fix logic bug in decision for etcd lease recreation in Dispatcher
  • DH-15276: Test Automation: add Sept-Nov testcases to Vermilion
  • DH-16017: Fix Kubernetes setup-nfs-minimal script directories after NFS4
191DH-15594: Add import-driven lastBy capability to Kafka DIS
DH-16023: Turn off enable.auto.commit for Core+ Kafka Ingester
190DH-16024: Avoid duplicate etcd config handling code
189DH-468: Put jdk version into rpm and tar filenames
188Merge updates from 1.20230511.413
  • DH-16006: Allow ERROR with UNAVAILABLE to pass dispatcher liveness.
  • DH-16002: Do not show "Preserve Owner" option for non-superusers
  • DH-15920: Add overview to Core+ javadocs with links to DHE/DHC docs and javadocs
187DH-15848: Fix field value for empty password in JDBC Import
186DH-16013: Fix Forward merge of Dispatcher Liveness Test to Grizzly
185DH-13377: refactor MergeData builder/constructor ecosystem
DH-10253: Merge builder should have an option to specify TDS mode
184DH-15978: Add import-driven lastBy capability to Core+
183Merge updates from 1.20230511.410
  • DH-16006: Write Integration Test for Dispatcher Liveness
  • Release note updates.
  • DH-16008: Dispatcher allows workers to miss TTL
182DH-15999: DispatcherClient resiliency
181Formatting fix from merge.
180Merge updates from 1.20230511.407
  • DH-15976: Formatting Rule Doesn't use default set by user
  • DH-15830: Fixed upgrade-nfs-minimal.sh and DbPackagingTools.groovy
179Merge updates from 1.20230511.405
  • DH-15939: Option to restrict exported objects by default
  • DH-15951: Permit multiple PQs per TestCase
178Formatting fix from merge.
177Merge updates from 1.20230511.404
  • DH-15998: Core+ Kafka Ignore Columns Test should not display all queries
  • DH-15991: Fix integration test issues from forward merge
  • DH-15830: Changed Helm chart to use NFS 4.1 for RWX PVCs
  • DH-13351: correct default value in release note
  • DH-15610: Allow Staging test results to segment exit code
  • DH-15940: Integration Test Logs have Wrong Paths
  • DH-15936: fix bash3 + PS4-subshell bug for mac installer
  • DH-15763: Test case updates for Oct 2023
  • DH-15866: Set republishing to use jdk8 by default
  • DH-14989: Use self-signed "internal" PKI for nightly installer tests
  • DH-15897: Fix JDBC testcases
  • DH-15886: Fix controller stop scheduling issue
  • DH-15475: Segment test automation for more timely FB run completion
  • DH-15854: Test Automation: logging usability tweaks
  • DH-15641: Add Reset, close disconnected child panels
  • DH-15882: Pandas data frame view breaks when data frame is empty
  • DH-15949: Fix bug introduced by .396. Fallback to plain encoding breaks with arrays.
  • DH-15762: Improve matplotlib and seaborn testcase queries
  • DH-15985: Fix export not respecting superuser mode
  • DH-15957: updated release notes
  • DH-15949: Fix ParquetTableWriter writing empty dictionaries resulting in unreadable parquet files.
176DH-15948: Typo in CPU share denial message
175DH-15848: Web support for JDBC Import query type
174Merge updates from 1.20230511.395
  • DH-15977: Update Vermilion to Community Core 0.30.1
  • DH-15728: Rework DDR tests for flakiness
  • DH-15957: Make flag available in up/down actions also available in start, stop, restart actions
173Merge updates from 1.20230511.393
  • DH-15788: Fix Java 11 only logback dependency.
  • DH-15788: Unicode Characters Crash DnD Worker
172Merge updates from 1.20230511.391
  • DH-15132: Clear selection in Query Monitor before creating a new draft
  • DH-15792: Unclear IncompatibleTableDefinitionException with Core+ Kafka ingestion
  • DH-15942: Fix etcd config clobber with helm upgrade in k8s
  • DH-15937: Update web UI packages to 0.41.4
171DH-15961: Initializing Groovy Session creates Table without Auth Context
170DH-15936: Fix mac+bash3 bug in locally-run installer code
169DH-15273: Add DnD Groovy worker support for loading other groovy scripts
168DH-15531: Update performanceOverview Messages to say "Core+" and "Legacy"
167DH-15932: Change JsFigure.getErrors to a property
166DH-10076, DH-14452: ensure resources are released for removed table locations, fix logging error
165DH-15712: Fix Console Creator crash when clearing the heap size input
164Merge updates from 1.20230511.387
  • DH-15931: Update Core+ to Community Core 0.30.0
163Javadoc correction.
162Merge updates from 1.20230511.386
  • DH-15869: Tool to log system tables from Core+ workers
  • DH-15929: Fix bug with user-provided etcd password in k8s environments
  • DH-15928: R dockerized build tries to download from wrong github URL
  • DH-15911: Logging improvements in k8s workers.
  • DH-15354: Fix usage of outdated Java flags like 'PrintGCTimeStamps'
  • DH-15789: Backport DH-15840 and handle failed reconnect properly.
  • DH-15910: Load balance across etcd endpoints for etcd clients
  • DH-15894: Add helm chart toggle for envoy admin port in k8s deployments
  • DH-15908: Linux/MacOS launcher tar not available for download
  • DH-15903: update generate_loggers tests for Vermilion inconsistency
  • DH-15902: Update controller tools test for Vermilion inconsistencies
161DH-15873: Mac only has -f flag for rm, not --force
160DH-15919: Remove Old AccessController and other Deprecated APIs
159DH-15738: Fix Java 8 compilation
158DH-15738: Allow restricting WorkerKinds by ACL group
157Merge updates from 1.20230511.374
  • DH-15862: Include sbom when republishing to libs-customer
  • DH-11431, DH-15855: Equate parquet:kv with coreplus:hive
  • DH-15874: Support writing Deephaven format tables from Core+ workers
  • DH-15893: DH-15884 release notes in wrong file
  • DH-15880: Relabel workers as "Legacy" and "Core+"
  • DH-15884: Support CURL_CA_BUNDLE env var for curl cmdline in Core+ C++ client Session Manager
  • DH-15855: Java8 Fix
  • DH-15855: Add support for multi level partitioned DH format tables in Core+
  • DH-15877: Fix core+ table-donor/getter test for Envoy
  • DH-15861: Fix double-start of persistent queries
  • DH-15860: Arrays.sort in Predicate Pushdown is Not Thread Safe
  • DH-15856: cleanupAfterYourself Task is Too Aggressive for Core+ Javadoc/Pydoc
156DH-15770: Refactor WebClientData query to use per-user controller connections.
155DH-15895: Float column statistics should have correct stddev, random test data should follow specified bounds
154DH-15813: Add fsGroup to k8s worker pod security context
153DH-15858: Vite config for local SSL
152DH-15813: Easier in-worker DIS configuration in k8s deployments
151DH-15686: Updated DH packages to 0.52.0
DH-15686: Wired up theming and aligned small loading spinners
150DH-15840: Fixed another double notify in CART. Fixed CART resource leaks on close
149DH-15899: Convert Rest of .jsx files to .tsx in main
148DH-15847: Web support for CSV Import query type
147Revert .145
146DH-15857: ControllerHashtableClient should clear its hashmap when connection is lost and notify listeners
145DH-15686: Update Grizzly to Community 0.51.0
DH-15686: Wired up theming and aligned small loading spinners
144Merge updates from 1.20230511.361
  • DH-15838: remove obsolete jvm args from csharp open-api client
  • DH-15141: Save and apply on next restart banner not showing
  • DH-15815: Java 8 javadoc fix.
  • DH-15815: Java 8 compilation fix.
  • DH-15815: Update Vermilion to Community 0.29.1
143DH-15852: Fix DBAclServiceProviderTest
142DH-15849: Fix unit test failure from .120 merge.
141DH-14961: Allow legacy client to double-subscribe.
140DH-15698: clearly prohibit user tables in intraday truncate/delete operations
138Fix broken Javadoc.
137DH-15783: improve table data service log messages
136Merge updates from 1.20230511.356
  • DH-13351: corrections to readme and default value
  • DH-15827: ReadOnlyIndex should return refcount for tests.
  • DH-15827: UpdateBy incorrectly copies Index without clone
  • DH-15755: Re-enable simplified input table test case
  • DH-15809: Avoid duplicating contents of etcd configuration files
  • DH-13351, DH-11285, DH-15821: make Tailer more resilient to user data storms
  • DH-15808: ShiftedColumnSource Context Reuse
  • DH-15812: TableUpdateValidator result should be composable
  • DH-15806: ReplicatedTable RedirectionIndex shift uses updates linear in table size not viewport size
  • DH-15703: Test Automation: use REPLACE mode for serials to ensure updated test scripts
  • DH-15474: Ensure stderr and stdout are populated in jenkins and binary log for command line tests
  • DH-15614: Test Automation: test case improvements for Sept 2023
  • DH-15761: Backport excludedFilters in test automation
  • DH-15772: Improve Error Messages in PropertyRetriever
  • DH-15819: Fix ETCD ACL provider using shared message builders
  • DH-15586: Teach iris_keygen to pass -legacy flag when openssl version > 3.0
  • DH-15672: Deephaven Launcher 9.07 - DeephavenUpdater honors command line URL over appbase in existing getdown.txt
  • DH-15737: More resilient etcd lease and kv error handling in the Dispatcher
  • DH-15652: Refactor legacy remote client test cases
  • DH-15635: Ensure test automation cluster scripts configured consistently.
  • DH-15684: Developer readme: allow Dnd version to be auto-calculated during upgrade.
  • DH-15600: Fixed Table leak when filtering Pivot widget
  • DH-15739: Re-enable forward-merged unit tests
  • DH-15607: create tests to validate controller_tool
  • DH-15719: added tests for dhconfig:properties
  • DH-15697: Update Jackson jetcd to 0.7.5. Configure waitForReady and deadlines for etcd RPCs
  • DH-15586: Official support for RHEL9 in installer
  • DH-15677: generate-iris-keys and generate-iris-rsa should not overwrite existing files.
  • DH-15660: ShiftedColumns must end in Underscore
135DH-15837: Status dashboard shouldn't double-subscribe to persistent queries
134DH-15786: Add a required test configuration property
133DH-15693: Demote onResolved warning to info.
132Merge updates from 1.20230511.355
  • DH-15777: Configurable github URL prefix for DnD client builds
  • DH-15811: Add release note
  • DH-15789: Fix CART double notification on disconnect
  • DH-15132: Fix new draft selection reset when viewport contains a single row
  • DH-15683: Bump plugin tools to 1.20221001.008
  • DH-15683: Support DH_DND_VERSIONS=auto|none in installer, to automatically use iris' version of DnD
  • DH-15787: Upgrade seaborn from 0.12.2 => 0.13.0
  • DH-15785: DnD workers break with -verbose:class
  • DH-15746: Tokenize values to helm chart for Kubernetes deployments
  • DH-15441: CUS reload may not show success message
  • DH-15779: Hooks for SAML group synchronization.
  • DH-15776: Speed up digest generation for CUS via doing digests in a thread pool
  • DH-15735: Add kafka dnd manual test steps
  • DH-15743: Fix error propagation of source()
131DH-15675: Remove Controller and Console from DnD shadow jar
DH-14961: Separate Controller Client into a gRPC base and expose that separately
130DH-15786: Prep work for simpler creation of data ingestion workers in Kubernetes environments
129DH-15764, DH-15765, DH-15766: Web support for DataValidate, DataMerge, ReplayScript
128DH-15782: Fixed Controller client reauth during resubscription attempts
127DH-15807: Update rc/grizzly dependencies.
126DH-14057: Add status dashboard process
125DH-15787: Upgrade seaborn from 0.12.2 => 0.13.0
124DH-15537: Create Python wrappers for DnD user table API
123Fix Unit test failures from previous forward merge
122DH-14914: Automated DnD Python venv Installation
121DH-15750: Update Kubernetes Images to Ubuntu 22.04 and Python 3.10
DH-14473: Update Python to 3.10, drop 3.7
120Merge updates from 1.20230511.342
  • DH-15743: Fix error propagation of source()
  • DH-15751: Revert DH-15141, fix query draft switching to Settings on update
  • DH-15713: Test uplifts
  • DH-15734: BatchQuery hangs when creating input tables
  • DH-15667: Improve Table Location Creation Speed
  • DH-15742: Add very verbose TableDataExporter and RemoteTableLocation logging
  • DH-15736: Add missing wait_for_ready for auth ping in python DnD client
  • DH-15718: Allow KafkaTableWriter to ignore committed offsets and seek based on partitionToInitialOffsetFallback
119DH-14413: Web server should use separate PQC clients per user
118DH-15741: Fix db.live_table
117DH-14837: Improve int tests
116Merge updates from 1.20230511.335
  • DH-15732: Always run publishToMavenLocal before invoking any DnD GradleBuild tasks
  • DH-15733: Set gRPC calls to use wait_for_ready in the DnD python client
  • DH-15725: Input Table editors are broken in Swing
  • DH-15141: Show "Save and apply on next restart" banner immediate after picking "Save and apply on next restart"
  • DH-15716: Fixed a race condition in the controller server to client gRPC
115DH-14837: Move static method outside of inner class
114DH-14837: Add DnD centrally appended user table writing
113DH-15478, DH-15694, DH-15669, DH-15479, DH-15670: tailer shutdown fixes
112DH-15715: Fix grizzly installer tests
111Merge updates from 1.20230511.329
  • DH-15705: Automatically clean old-versioned artifacts out of development environments.
110Merge updates from 1.20230511.328
  • DH-15704: read value of javax.net.ssl.trustStore as a file when possible
  • DH-15696: Fix DnD shadowJar + intellij IDE breakage
  • DH-15691: Integration test for Kafka offset column name.
  • DH-15394: Remove overeager reauth code from controller client
  • DH-15674: ArrayBackedPositionTable index coalescer misuse and index errors.
  • DH-15638: Include Barrage Clients in DnD Worker
  • DH-15639: DndSessionFactory should allow authentication using a token
  • DH-3139: Add capability for tailers to clean up processed files
  • DH-15691: Allow changing KafkaOffset column name in DnD Ingester
  • DH-15499: Add automation test cases for matplot lib and other tests
  • DH-15640: allow user table lock files to be bypassed
  • DH-15663: DnD AuditEventLog fixes including for KafkaTableWriter.
109DH-15619: Fix type of NamespaceSet column in catalog table
108DH-15666: Wire TypeSpecificFields and SupportsCommunity through the API
107DH-15699: Add developer notes on shadow versioning
106DH-15619: Add system namespaces and JDBC drivers to Web API
105DH-15633: ACL API: Allow null password, support overwrite, propagate all errors to authenticated acl editor
DH-15661: ACL API: Validate all input for non-printable characters
104Javadoc fix for .102 merge.
103Compile fix for .102 merge.
102Merge updates from 1.20230511.315
  • DH-15654: Fix for worker-to-worker table resolution
  • DH-15644: Allow testcases to auto-select engine.
  • DH-15681: Fix bundle script on MacOS.
  • DH-15687: Publish EngineTestUtils so customers/plugins can write better tests
  • DH-15577: Publish DnD jars whenever we publish iris jars
  • DH-15673: Use RPC timeouts in the DnD python client
  • DH-15681: Upload R and C++ Bundles to GCloud
  • DH-15542: C++ Client should propagate errors from server when contained in trailers
  • DH-15395: Improve documentation of ControllerClientGrpc
  • DH-15628: Break up large audit event log messages into multiple log calls
  • DH-15625: Fix link to config file when upgrading in k8s
  • DH-15469: Update jgit SshSessionFactory to a more modern/supported version (changing iris_admin docker file for k8s to include ssh)
  • DH-15627: Promote stable QA tests to released
  • DH-15606: Envoy integration fails in environments where IPv6 DNS is enabled
  • DH-15626: Improve qa-results dashboard query
  • DH-15649: Provide a dockerized DnD R client build for RHEL 8
  • DH-15643: Creating source bundles for R and cpp should force a well defined filename
  • DH-15637: Fix C++ client terminating process if AuthClient fails initial Ping
  • DH-15636: Update Fix DnD historicalPartitionedTable fetches intraday data
  • DH-15563: Enterprise R client including SessionManager and DndClient
  • DH-15488: Test Automation: add option to run scripts from community docs
  • DH-15546: Add testcase for nightly snapshot monitoring
  • DH-15596: DeephavenCombined needs to merge service files
  • DH-15636: Fix DnD historicalPartitionedTable fetches intraday data
101DH-15570: ACL Editor - Trim whitespace in inputs that create data
100DH-15621: Fix queries not appearing correctly if one has failed to start
099Merge updates from 1.20230511.293
  • DH-15616: Fix a race condition in RegionedPageStore
  • DH-15609: Fix JsTable leaking table handles
  • DH-15540: better support for loggers with generics
  • DH-15505: Only close DnD Worker channels on worker shutdown.
  • DH-15629: Fix race conditions with DnD Mark / Sweep
  • DH-15617: Disable Transactions for DnD Kafka Table Writer
  • DH-15469: Update jgit SshSessionFactory to a more modern/supported version
  • DH-15587: Fix broken README link in cluster setup
  • DH-15274: July 2023 TestCase updates for qa
  • DH-15451: Fixed Wrong Parenthesis on Console Attachment Option
  • DH-15501: Fixed whereDynamicNotIn forwards to wrong method
  • Back-porting DH-15246: Allow commas in ticket list for github PR title
  • DH-15584: Create tests to validate generate_loggers script
098DH-15259: ACL API: Should validate white spaces in column names for ColumnAcl
DH-15569: ACL API: Replace SQLException for ACL operations with more appropriate Exception that is agnostic to backing store
DH-15615: ACL API: Add validations for null, empty, and trim whitespace where applicable
DH-15213: ACL API: Protect groupname matching user
097DH-15630: TrackedFileHandleFactory Should Warn When Files are Cycling Quickly
096DH-14968: Add TableOptions to DnD Database fetches for live, blink, and internal partition columns
095DH-15527: Updated dh packages to ^0.48.0
DH-15527: Removed code that was moved to Community
094Merge updates from 1.20230511.287
  • DH-15605: Avro Kafka ingestion hasField performance improvement
  • Fix typo in DnD relocation string
  • DH-15473: Implement PartitionedTable fetches for DnD Database. Handle Location addition and removal
  • DH-15519: Removed Create Console and Attach Console option from Swing for DnD Workers
  • DH-15589: Fixed Help About Dialog display
  • DH-15451: Wrong Parenthesis on Console Attachment Option
  • DH-15592: Type of ShiftedColumn results in view are incorrect
093DH-14143: Add Kubernetes control fields to Web UI
092DH-15482: Add Csv Parser Formats to Web API
091Merge fix.
090Merge updates from 1.20230511.279
  • DH-12084: officially support rhel8
  • DH-14983: Add DH_USE_EPEL flag to allow disabling epel repo
  • DH-15541: Percolate integration test exit codes back to jenkins
  • DH-15352: add release notes for .331 change
  • DH-15562: Make internal deployer use apt update before apt install
  • DH-15545: Don't use symbol tables when rollups and constituents
  • DH-15414: Only use fully qualified /usr/bin/systemctl to control monit, never use service monit
  • DH-15544: make NullLoggerImpl pool sizes configurable
  • DH-15556: increase robustness and diagnostics in db.replaceTablePartition
  • DH-15574: Fix creation JSON field parsing.
  • DH-15581: Dictionary MatchFilter with invertMatch Returns no Results when Keys not Found
  • DH-15510: Allow customers to provide supplemental requirements during image build
  • DH-15574: Option to Create Ephemeral Venvs for DnD Workers
  • DH-15561: Cannot Create DnD Kubernetes Merge Worker
  • DH-15560: Fix DND Ability to read enterprise DbArray columns
  • DH-14479: Add specific instructions for auth client manager migration
  • DH-15458: Move all cert-manager conditional configs to iris-endpoints.prop in K8S envs
  • DH-15171: Fix issue with CSV Import using DHC parser failing to recognize Import sourceColumn attribute
  • DH-14660: CSV importer ignores/mishandles ImportColumn sourceName attributes
  • DH-15265: Fix issue with use of SinglePartition when Partition column is in source
  • DH-14489: Fix issue with SchemaEditors Preview Table functionality
  • DH-15559: Truststore population fix for certain K8S environments
  • DH-15530: Add a SessionManager to the C++ client
089DH-15411: Fixed interaction issues with SystemUserMapSelector
088DH-15387: ensure initialization end time is set on failures.
087DH-15231: Remove with JDK Installer, Build using 8 Toolchain
086DH-15438: Updated vite community alias for @deephaven/icons
085DH-15558: Correct Version of Grizzly DnD Client Wheel Build
084DH-15518: DIS.createSimpleProcess rejects stream keys it does not handle
083Merge updates from 1.20230511.267
  • DH-15513: print less of QueryScope in MergeData
  • DH-15160: Avoid calling sudo in prepare_filesystem if we can test files without it
  • DH-15524: add code path to lenient schema import
  • DH-14639: Automatically fix jars which lack an embedded pom, for sbom completeness
  • DH-15552: Publish DnD Pydoc
  • DH-15516: Publish Javadocs on DnD Java Client
  • Make release note edited on deephaven.io consistent.
  • DH-15491: Dynamic Kafka Partition Rollover
  • DH-15301: Fix error upon closing DnD Python client session manager
  • DH-15528: DnD Python Client Integration Test
  • DH-15428: Cannot log in to Swing client on a cluster with a private PKI certificate and Envoy
  • DH-15384: In a Kubernetes cluster created with a PKI cert, iris_db_user_mod will time out and fail
  • DH-15385: After switching a Kubernetes install from a public cert to a private PKI cert, the launcher fails with a PKIX error
  • DH-15463: JdkInternals getUnsafe() doesn't work with ManifestClassPath jar (Windows IntelliJ) and Java 8
  • DH-15517: Fix DnD Python Client
  • DH-13736: update digest algorithm to sha256 during private key generation
  • DH-15506: Subscription test for cpp controller client; fix for ControllerHashTableServer SE_PUT msg
  • DH-15507: Make /db/Users mount writeable in K8S
082DH-15527: Split out common util code from ACL Editor code
081DH-15471: SearchableCombobox UX improvements
080DH-15452: Fix broken RunDataQualityTests
079DH-12433: Support multiple config / auth servers, and increase installer security
078DH-15452: Data Merge query type JS API support
DH-15453: Data Validation query type API support
077DH-15464: Case insensitive name checks
076DH-15391: Dropdowns now scroll to selection on open
075DH-15411: The system users panel is now dynamically added / removed based on server config.
074Merge updates from 1.20230511.255
  • DH-15497: Test Automation README improvements
  • DH-12216: Use new QA sql server for JDBC import test
  • DH-15425: Improve automation test README for developer workflows
  • DH-13869: Enable more test cases in automation
  • DH-15454: Do not let npm write bin-links (preventing jenkins build instability)
  • DH-14671: Write shell test stdErr stdOut to file
  • DH-15399: Ensure test case metadata is not overwritten by default.
  • DH-15352: port Bessel correction from community to Enterprise
  • DH-15413: Add Logging for newInputTable Fails Silently
  • DH-15160: Allow installing as irisadmin if irisadmin is also DH_MONIT_USER
  • DH-15326: Missing lzma from Python 3.9 built by installer on CentOS
  • More release note updates.
  • DH-15457: K8S pod startup contingent on dependent service availability
  • Correct release note heading.
  • Changelog update.
  • DH-15426: Initial R wrapping for Auth and Controller clients.
  • DH-15367: BYO PVC, allow configs to mount secrets in k8s workers, other containerization improvements
  • DH-15477: Add javadoc to DnD build.
  • DH-15416: StackOverflow in CatalogTable where()
073DH-15402: Cleaned up unit test console errors and re-enabled skipped test
072DH-15438: Updated dh packages to ^0.46.0
071DH-15411: ACL Editor: Run as system user tab
070Merge updates from 1.20230511.245
  • DH-15446: close DIS index binlog files on last release
  • DH-14982: DnD Kafka Ingestion
  • Merge updates from 1.20230131.193
  • DH-15470: Fix superusers unable to create some query types from Web UI
  • DH-15322: Allow customer provided JARs for DnD Workers
  • DH-15468: Switch out deprecated community class in DnD initialization.
  • Add release notes link to DHC.
  • DH-15424: Can not download Swing launcher on kubernetes installation
  • DH-15461: Fix dispatcher registration race
  • DH-15460: enable strict yaml parsing to avoid duplicate map keys in data routing file
  • Changelog update.
  • Update release note text.
  • DH-11431: Add DnD support for Parquet hierarchical and fragmented file sets
  • DH-15444: Update DHC version to 0.27.1
069DH-14538: Filter table name selector by namespace selection
068DH-15064: Display Temporary Schedule Details for InteractiveConsole Queries
067Merge updates from 1.20230511.232
  • DH-15440: Use temurin (adoptium) jdk repos for ubuntu installs
  • DH-15419: Use packages.adoptium.net instead of adoptopenjdk.jfrog.io
  • DH-15432: Fix broken syntax in installer's new TarDeployer()
  • DH-15369: Fix MultiSourceFunctionalColumn Prev issue
  • DH-15403: Reenable BinaryStoreWriter C# publishing
  • DH-15433: Fix republishing job for sbom extension
  • DH-15434: Update deephaven-csv-fast-double-parser-8 dependency
  • DH-15397: Controller client should clone PQ before returning it
  • DH-15404: Fix related broken integration tests
  • DH-15388: Initial DnD C++ client: Controller client
066DH-15437: Add START_WORKERS_AS_SYSTEM_USER to ServerConfigValues
065DH-14538: Check for existing group before creating
064DH-14537: Added .git-blame-ignore-revs
063Merge updates from 1.20230511.227
  • DH-15422: Prevent admin_init from being executed twice
  • DH-15404: Use Java library for Throwable logging
  • DH-15234: Controller duplicate PQ exception improvements
062DH-15347: Run Prettier
061DH-14538: Updated searchTextFilter to use containsIgnoreCase
060DH-15347: Upgrade Jest to ^29.6.2
059DH-14847: Update data routing template files to use new features
058DH-15347: Upgrade Prettier to 3.0.0
057Merge updates from 1.20230511.224
  • DH-15389: Dictionary Columns need to have unbounded FillContexts
  • DH-15348: Fix issue with forward merge for web
  • DH-15271: Test Automation: allow skip-dependencies mode
  • DH-14688: re-enable csharp with updated dockerfile / dotnet version
  • DH-15280: One click ranges cause illegal argument range exception
  • DH-15348: Allow admins to view script of query types they can't edit
  • DH-15294: Do not overwrite user configuration files when reinstalling deephaven
  • DH-15383: Fix controller crash during community worker shutdown
  • DH-15325: Add ACLs for DnD Index tables
  • DH-15344: Update integration tests to use DIS claims
  • DH-15155: Fix issue with console settings being undefined sometimes
056DH-14538: Updated dh packages to ^0.45.0
055DH-14538: Table ACLs Comboboxes
054Merge updates from 1.20230511.217
  • DH-15328: Add a simple-to-use Java DnD client.
  • DH-15376: Connected to the web UI in a second tab results in losing authentication for the original tab
  • DH-15373: Ensure running dnd tests skips java8
  • DH-15165: Add initial set of dnd test scripts
  • DH-15346: Fix extra JVM arg propagation to DnD workers, configure SNI workaround for DnD workers in k8s env
  • DH-15321: Use Index tables for process info id extraction and error messaging
  • DH-15355: Additional log entries for login and logout in WebApiServerImpl
  • DH-15298: Add filter by partition to metadata tool
  • DH-14787: Add release notes
  • DH-15314: Fix failing automation test for addManySchemas
  • DH-14167: Plots sometimes do not draw when they have ranges set with OneClick
  • DH-15202: ACL Editor Namespace/Table ComboBoxes are aware of additions and removals (swing)
  • DH-15252: Add instrumentation to Input Tables
  • DH-15318: Do not use swing-components to calculate max viewport in non-swing processes
  • DH-15309: Allow removal of "Help / Contact Support ..." via property (swing)
  • DH-15305: Avoid using RecomputeState.PROCESSING to determine viewport row staleness (swing)
  • DH-15310: Optimize allocations and copies for SortedRanges.insert when it is effectively an append
  • DH-15302: Add a stand-alone SBE Java in StandaloneJavaSbeClient.jar
  • DH-15333: Update java generated from forms to match IJ generated format
  • DH-15178: correct TDCP's handling of removed data - remove locations on subscribe, during rescan
  • DH-15026: correct TDCP's handling of removed data - remove locations on error
  • DH-15325: Create Index tables for DbInternal Community tables
053DH-15349: exclude DIS with disabled tableDataPort from 'all dises' specified by dataImportServers keyword
052DH-14901, DH-15350: optional ACL group for ServiceRegistry writers
051DH-14538: Table ACLs Panel
050Merge updates from 1.20230511.206
  • DH-15312: Dedicated certs for controller and aclwriter processes in k8s deployments
  • DH-15337: Improve logging in WebApiServer and GrpcAuthenticationClientManager
  • DH-15334: Update java generated from forms to match IJ generated format
  • DH-15324: Remove deadsnakes ppa from Dockerfile
  • DH-15301: Fix error upon closing DnD Python client session manager
  • DH-15250: Initial DnD C++ client: Auth client
  • DH-15323: Fix NPE when controller disconnects gracefully from client
  • DH-15316: Fix silverheels VM deployment
  • DH-14742: May-June 2023 test case updates for qa
  • DH-11925: ofAlwaysUpdate not setting MCS Correctly
  • DH-15299: Improve SortedRanges.insert for append case
  • DH-15256: Update USNYSE 2025 calendar
  • DH-15306: GroupingBuilder should return empty grouping for empty input index
  • DH-14482: trim values in user and password config files
  • DH-15291: Remove parallelism bug in dh_install
  • DH-11758: Add installer tests for customer users + plugins
  • DH-15270: Use manually-recursive chown in some prepare_filesystem.sh calls
  • DH-5698: dhconfig support exporting single tables
  • DH-15219: Launcher 9.06 - correct error in prop file location
  • DH-15251: Remove unused logic from DeephavenInstallScript.groovy
  • DH-15290: Make db part of the database module so it can be imported from scripts
  • DH-15239: Added a note for attaching container registry for AKS install
  • DH-14888: Log operation user in DnD DbInternal tables
  • DH-15296: delete cookie failed message in auth server log
  • DH-15287: Disable writing ProcessMetrics by default
  • DH-15076: Kafka ingester that worked under Jackson fails after upgrade to Vermilion
  • DH-15311: ControllerConfig grpc serialization serialized tds config 3 times instead of tds, dis and tdcp
  • DH-15091: Use dynamically generated services and certificates for workers in k8s
  • DH-13169: Fix reactstrap not being usable in JS plugins
  • DH-15164: Fix contains ignore filter in JS API
  • DH-15168: Cache TDCP Query Filtering Decisions
  • DH-15192: Adjust heap overhead parameters for Kubernetes
  • DH-15254: Updates to README.md, Helm tar build script, and buildAllForK8s script for Helm deployments
  • DH-15258: Fix potential NPE when using multiple ingesters in a Kafka in worker DIS
  • DH-15235: Fix error with matplotlib receiving data in incorrect format
  • DH-15180: Fix unbounded thread growth in GrpcAuthClientManager
  • DH-15243: Include SQL In DnD Build
  • DH-15248: Fixed bug resulting in FB build failure introduced in DH-14659
049DH-14538: Update Web UI to @latest (v0.44)
048DH-15338: Fix schema import failure-scenario tests
047DH-14538: Fixed useACLEditorAPI failing test
046DH-15269: Configure CORS headers when Envoy is not setup to access ACLWriteServer
045DH-9573, DH-3149, DH-3154, DH-5698: dhconfig schema handling improvements
DH-9573, DH-3149: add delete and list namespaces to dhconfig
DH-3154: handle same-file overlap when specifying schemas to import
DH-5698: dhconfig support exporting single tables
044DH-13759: Change .039 used a language feature not present in Java 8.
043DH-14538: Consuming DbAclWriter host and port from ServerConfigValues
042DH-15308: Convert console/client from JS to TS
041DH-15317: Add DbAclWriter host and port to JS API ServerConfigValues
040DH-15286: Convert querylist and querymonitor from JS to TS
039minor improvements to controller_tool command line argument parsing
  • DH-13759: intercept --help early so we don't report exception in that case
  • DH-15145: don't skip "disabled" queries for delete mode
038DH-15092: Avoid refresh overhead in RunAndDone Queries (DnD)
037DH-15179: Fix up info not appearing for queries in Safe Mode
036DH-15233: Fix Web UI after issue with JS to TS conversion
035DH-15233: Convert main/tabs from JS to TS
034Merge updates from 1.20230511.180
  • Changelog and release note fixes.
  • Fix release note typo.
  • DH-15246: Allow commas in ticket list for github PR title
  • DH-15244: DnD Wheel Should use dhcVersion not Hardcoded Value
  • DH-15245: DndSession Needs Certificates Passed Through
  • DH-15215: Add DataCodeGenerator additional interfaces
  • Launcher 9.05
  • DH-15099: script change to allow Deephaven Launcher to exist in a path with spaces
  • DH-14600: accept new custom certs on an existing instance
  • DH-14496: better reporting when "new" PKCS12 file cannot be parsed by "old" java version
  • DH-15082: --insecure command line option for Deephaven Updater, to accept self-signed certificates
  • DH-15019: command line options to set instance and workspace roots
  • DH-15216: Add options button to show hidden context menu choices
  • DH-15219: add IrisConfigurationLauncher.connectionTimeoutMs property
  • DH-13649: Fix the etcd dispatcher user/config migration scripts.
  • DH-13651: Make readonly etcd keys usable by dbquerygrp.
  • DH-14659: Fixed CSV Importer hangs on very small file
  • DH-15158: Fixed OOM error when doing a large CSV import in vermilion
  • DH-15143: Add basic python time lib test for dnd to test automation
  • DH-15229: Fix python installer test
  • DH-15090: Use cert-manager for Deephaven services in Kubernetes
  • DH-15229: Always supply defaults for DH_PYTHON_VERSION variable(s)
  • DH-15227: Fix monitrc installation modifications for Ubuntu 22.04
  • DH-14041: Fix mysql-connector file privileges
  • DH-15146: SortedClockFilter does not handle empty tables
  • DH-14661: Add support for Ubuntu 22.04; add DH_PYTHON_VERSION flag for installer
  • DH-14824: Safe Mode should show status even if script code is unavailable
  • DH-15217: Disable flaky grpc test case
  • DH-15057: Add live, historical and catalog tables to resolvable flight tickets
033DH-15238: Add Catalog Table to WebClientData Query
032DH-14836: Add DnD partitioned user table writing
031DH-14738: ACL Editor - error handling
DH-15172: Web UI: Enabled ACL Users Tab
030DH-14738: ACL editor refresh button
029DH-14738: ACL Editor - Group trash action
028Merge updates from 1.20230511.163
  • DH-15137: Unable to connect to Community worker in Safe Mode
  • DH-15201: Expose staticUrl from Python Client Session pqinfo() for Envoy
  • DH-15210: Integrated CUS does not digest plugin files that are sym links
  • DH-15193: Make DEEPHAVEN_UPDATE_DATE build in CI w/ newer versions of git
  • DH-15190: Fix internal installer bug for mac os
  • DH-15081: Add basic time lib test for dnd to test automation
027Merge updates from 1.20230511.157
  • DH-15193: Make IRIS_VCS_VERSION build in CI w/ newer versions of git
  • DH-15123: Avoid hang when filtering from bottom of large table (swing)
  • DH-15191: Reduce max table display size (swing)
  • DH-14593: Fix duplicate unit test enum class names
  • DH-15095: Prevent incorrect "Worker disconnected" messages on console reconnect (swing)
  • DH-15156: Fix spotless failure in .154 version
  • DH-15156: Audit truststore usage to verify empty and null checks for TrustStorePath
  • DH-15187: Install DnD Python on K8s
  • DH-15188: iris_db_user_mod needs truststore set on K8s
026Merge updates from 1.20230511.151
  • DH-15182: Revert DH-14818: allow spaces in PQ names at command line
  • DH-15169: Fix bad quoting in internal deployer
  • DH-15096: Produce better log output for null socket getAddress result
  • DH-14994: Wait for query to be running before fetching API
  • DH-15163: Port exec_notebook and import functionality to DnD workers.
  • DH-15183: Always Run buildDnd Gradle Task
  • DH-14833: Correctly serialize FatalException
  • DH-15147: Discard client side subscription when heartbeats are lost to allow for clean resubscription.
  • DH-15170: Fix build-info-extractor build issues
  • DH-15173: Update DnD to DHC 0.25.3
  • DH-15089: Automatically build DnD when deploying internal machines
  • DH-14897: Run dnd tests nightly against community latest
  • DH-14749: Display community port in Query Summary
  • DH-15154: Remove DHC worker port in worker information tooltip
  • DH-15014: Remove the "Open Community IDE" button on the QM Summary tab
  • DH-15142: Web fails to disable viewer queries without error
025DH-14738: ACL Editor - Group Assignment
024Fix merge issue
023Merge updates from 1.20230511.136
  • DH-15166: Update testcontainers dependency
  • DH-15131: Fix internal installer typo
  • DH-14948: Internal deployer learn deephaven install needs more sudo -u irisadmin
  • DH-15139: Unit test should just use assertSorted
  • DH-15139: Don't mark grouped partitions as sorted ever.
  • DH-14639: Generate SBOM with each build
  • DH-15126: Display engine version in Code Studio info popup
  • DH-15140: When workers are shut down they should gracefully shutdown the gRPC status stream
  • DH-15152: dhctl Logging Integration Test
  • DH-15102: Improve the metadata indexer tool with validation and list capabilities
  • DH-15149: Fix failing CompressedFileUtils Unit Tests
  • Backport DH-14821: Make Dnd use web’s npm executable
  • Merge updates from 1.20221001.204
  • DH-15085: Don't hold merged intraday partitions in WorkspaceData queries
  • DH-14949: Use Rocky8 in Jenkins
  • DH-15093: ConstructSnapshot Logging is Too Verbose
  • DH-15080: Potential Race in satisfied() lastCompletedStep Set
  • DH-15078: Backport DbArray toArray should use fillChunk (DH-13881)
  • DH-15062: writeTable with out of order grouping fails
  • DH-14999: Ensure PQs identify stability correctly.
  • DH-14951: Null Status fails to Write Test Information
  • DH-15128: Update package-lock.json for jupyter-grid
  • DH-15075: Fix failing test introduced as part of DH-15022
  • DH-15071: Fix Grouped AbstractColumnSource#match() breaking with empty input list.
  • DH-15022: Fixed java 8 compilation error introduced in previous version .199
  • DH-15022: Add support for .zst (Zstandard) file compression
  • DH-15039: instance and workspace roots not correctly read from prop files
  • DH-15124: Make prcheck jenkins job use jdk11
  • DH-13577: Add Release Notes for Web UI subplots support
  • DH-15141: Fix query apply on restart option not appearing in Web
  • DH-15046: Fix blue sharing dot in Query Monitor
  • DH-15113: Python Client DnD: Errors when using invalid authentication are too verbose/not informative
  • Spotless application.
  • DH-15116: DnD workers do not configure DispatcherClient Trust Store
  • DH-15105: Update to DHC 0.25.2
  • DH-15098: Improve KafkaIngester performance by storing RowSetters in an array
  • DH-15103: ACL Editor Fails to Launch with Empty Truststore Property
  • DH-15084: Kafka ingestion errors hidden by NPE
  • DH-15047: Return better feedback when response status is 500 for a user who is an acl-editor
  • DH-15066: Permit Run-As for DnD workers
  • DH-14705: enable routing import/export when existing routing file has errors
  • Update Web UI to 0.41.2
  • DH-14657: Disconnect handling increase debounce timeout
  • DH-14972: Remove setSearch debounce in CommandHistoryViewportUpdater
  • DH-15032: Fix incorrect warning about updated shared state
  • DH-15063: DnD tables and plots would not reconnect after restarting query
022Merge updates from 1.20230511.115
  • DH-15079: Implement pq:// uri for DHE
  • DH-15072: Stop building in jdk13
  • DH-15077: Update Python README.md.
  • DH-15077: DnD Python client type hints break on Python 3.8
  • DH-15074: Fix typo in PerformanceTools init.py
  • DH-14966: Update PPQ child panel error message
  • DH-15068: Build DnD Python Client Wheel into distTar
  • DH-15041: DnD Python Client (Raw Version)
  • DH-14870: Resolve lack of DnD formula cache cleanup
  • DH-15030: Fix DnD Object Column Regions not implementing gatherDictionaryValuesRowSet
  • DH-15012: Remove unit GB from the Data Memory Ratio error message
  • DH-15049: Fix Controller not noticing DnD workers dying after initialization completes.
021DH-14738: ACL Editor User and Group Lists
020DH-14738: Windowed list state
019DH-14849: Fix DnD build
018DH-14849: Add DnD audit event logging for user table writing and deleting
017Fix bad merge.
016Merge updates from 1.20230511.103
  • DH-14974: Widen Byte and Short jpy CallableWrapper Returns to Integer
  • DH-12299: Added Release notes for updated readCsv features introduced in DH-12299
  • DH-15035: DnD is too eager to read location data when snapshot backed.
  • DH-14821: Make Dnd use web's npm executable
  • DH-14868: Close worker on console session disconnect
  • DH-14731: Fix null query status showing blank in query monitor
  • DH-14978: Setting correct trust store on DbAclWriteClient
  • DH-15009: Fix typo in Controller gRPC message
  • DH-14987: DnD Authorization did not check superusers or supervisors
  • DH-14998: Rename Enterprise Proto Files to be Python Friendly.
  • DH-14981: Update DnD Tests to DHC 0.25.1
  • DH-14890: Separate Enterprise/Community command histories
  • DH-14981: Update DnD to DHC 0.25.1
015DH-14996: Drop JDK13 From Grizzly Builds
014Merge updates from 1.20230511.091
  • DH-14964: Support Barrage to Barrage authentication for DnD workers
  • Fix gRPC logging integration test being flaky due to timing issues
  • DH-14991: Pin versions of JS plugins, add Deephaven plotly express plugin
  • DH-14988: Engine should default to the console settings when creating a persistent query from a code studio
  • DH-14970: Fixed bug with User delete not removing runAs mapping
  • DH-14963: Make testDnd run on java 11 and have a checkbox in feature test ui
  • DH-14953: Default Engine selected for Persistent Queries does not match first worker kind provided by server
  • DH-14965: Default persistent queries do not have a worker kind
  • DH-14967: Add engine to query monitor summary panel
  • Merge updates from 1.20230131.168
  • DH-14960: TestingAutomation needs to count released vs unreleased tests separately
  • DH-14916: Improve installer docs on upgrade
  • DH-14844: Patch DnD AEL logging
  • DH-14743: Remove System Acls Tab from Swing AclEditor UI
  • DH-14655: Remove all references to Unsupported SystemAcl API
  • DH-14891: Cache DHC JSAPI in Code Studios
  • DH-14928: Fix custom objects from queries not exporting type correctly
  • Changelog update.
  • DH-14947: Custom Formatting Long Column Loses Precision
  • DH-14764: Make jdk8 jenkinsfile actually use jdk8
  • DH-14874: backport DH-11489 (IntradayLoggerFactory shouldn't write listener classes)
  • Deephaven Launcher 9.03
  • DH-14942: Deephaven Launcher uses corrected URL after creating a new instance
  • DH-14945: Correct installation java validation error
  • DH-14850: require instance in DeephavenUpdater.sh and DeephavenUpdater.bat
  • DH-14807: better feedback when launcher fails to start
  • DH-11466: add release note about script improvements
  • DH-13936: Fix broken jenkinsfile
  • DH-13936: Use the installer for integration tests
  • DH-12662: add upgrade support for server_java_version
  • DH-14841: Check Sharing permissions from New Tab Screen
013DH-14738: ACL Editor hooks + utils
012Merge updates from 1.20230511.077
  • DH-13759: improve --status command line options processing
  • DH-14818: pass arguments with quoted spaces through to command_tool
  • DH-14817: status report table was limited to ten lines
  • DH-14844: Add AuditEventLogger to DnD DatabaseImpl
  • DH-14907: Make query script error reports less terrible
  • DH-14922: Permit Python shared console sessions (swing)
  • DH-14881: Web UI - cookies are unexpectedly expiring
  • DH-14828: Multiple auth servers are broken
  • DH-14563: Load JS Plugins from workers on login
  • DH-14852: Make Community Workers the Default for New Installations
  • DH-14915: Prevent inaccurate "worker disconnected" exceptions after connecting to worker (swing)
  • DH-14952: gRPC Integration Test Failed to Import
  • DH-14941: Panels menu not showing in correct location
  • Release notes update.
  • DH-14827: Display the Engine type in the Console Status Bar tooltip
  • DH-14931: WebSocket Message Size Too Low for Complicated Workspace Build Blessing
  • DH-14926: Update DHC to 0.24.3
  • DH-14924: disallow duplicate storage location in data routing file
  • DH-14925: catch 'except:' errors at parse time
  • DH-14880: Make controller aware of Shared-Console / Auto-Deleting queries
011DH-14950: Add Unit Test Demonstrating Constant Column Conflict Behavior
010DH-14829: Add copy button to PQ exceptions summary tab
009Merge updates from 1.20230511.060
  • DH-13936: Fix broken jenkinsfile
  • DH-13936: Use the installer for integration tests
  • DH-12662: add upgrade support for server_java_version
  • DH-14841: Check Sharing permissions from New Tab Screen
  • DH-14900: improve handling of failover groups in data routing
  • DH-14911: Fix Enabled filter in Query Monitor
  • DH-14860: Configuration Server Does not Properly Die when etcd is down on startup
  • DH-14904: Fix controller dropping ProcessInfoId and worker name when workers fail.
  • DH-14905: Pin Web Plugins in DnD Python requirements.txt
008Merge updates from 1.20230511.054
  • DH-14919: Fix broken build after forward-merge
  • DH-14899: Do not write jenkins cache for PR Check jobs
  • DH-14018: Min/max values are ignored when doing a redraw plot
  • DH-14699: Fixed NPE on selecting Copy ProcessInfoId from the status bar context menu when worker is disconnected
  • DH-14873: Make TestPidFileUtil test more deterministic
  • DH-14841: Do not display user list when sharing a dashboard
  • Fix Java 8 compilation issue.
  • DH-14816: Fix DnD performance overview as-of time
  • DH-14896: Make ILF Defer Compilation Directory Creation
  • DH-14696: DnD Python not installed gives an unclear error message
  • DH-14153: Label Deephaven Worker Containers with Users
  • DH-14902: PEL should capture K8s worker Stdout
  • DH-14903: Properly Set Workspace for DnD Workers
  • DH-14861: Dispatcher Should not Send Back Unserializable Exceptions
  • DH-14634: Provide helpful error on DnD startup when missing install
  • DH-14898: Fix Query server ordering in configuration
  • DH-14872: Fix typescript CI build
  • DH-14879: Make DnD SNI Host Check and Community PQ Client Authority Configurable
  • DH-14889: Fix etcd executable ownership
  • DH-14738: Update Web UI to ^0.40.1
  • Fix export useTableUtils
  • DH-14805: Add default ACLs for new DbInternal tables
  • DH-14864: Made Controller feedback to clients less opaque. Fix Script language getting lost on shared consoles. Fixed controller not serializing query state update presence.
  • DH-14738: Utils supporting ACL Editor
007Merge updates from 1.20230511.039
  • DH-14797: Fix controller PQ/dispatcher failure deadlock
  • DH-14878: Disable flaky test - slow update call
  • DH-14887: Cleanup unnecessary reference to caller in InteractiveConsoleSetupQuery.getControllerClient
  • DH-14711: Correct cancelJob call outside of lock.
  • DH-14706: data routing syntax improvements, validation improvements
  • Include Ability to access Importer ConstantColumnValue in Custom Field Writers
  • DH-14871: Update DHC to 0.24.2
  • DH-14802: Upgrade didn't correctly change getdown.global
  • DH-14863: Enable gRPC logs for etcd client in authentication server; add gRPC logging tests
  • DH-14865: Intraday index loggers were not shared correctly across listener threads
  • DH-14822: Fixed DnD workers not respecting PQ startup timeout setting
  • DH-14615: Update version of node/npm used by gradle plugin
  • DH-14842: Optionally Filter User and Group Lists for Web
  • DH-14708: Fix bug introduced earlier that failed to throw exception when pid file modification time was less than system uptime
  • DH-14708: Attempt to delete existing pid file when system uptime is less than file modification time
  • DH-14820: Prevent controller-connectivity hang in Swing telemetry
  • DH-14794: update IntelliJ code style
  • Fix Javadoc build break from merge.
  • DH-14808: Errors from DHC flight client to standard log; fix gRPC logging
  • DH-14855: Integrated Dashboard requires page-reload
  • DH-14247: Avoid refresh cookie retry in auth client when server says not authenticated
  • DH-14789: Fix NPE if DnD worker crashes during connection initiation crashing controller.
  • DH-14417: Remove subscripted list[string] type specification in DnD python, not supported by 3.8
  • DH-14845: prevent duplicate storage names in data routing file
006DH-14714: Add DnD unpartitioned user table writing
005Merge updates from 1.20230511.022
  • DH-14843: Turn remote Groovy in R on by default (see DH-14715).
  • DH-14839: Make controller client subscribe RPC server side streaming only
  • DH-14798: Additional fix for remote locations unable to handle unpartitioned tables
  • DH-14830: More parametrization for services on gRPC and Envoy
  • DH-14800: Cache DnD API instances and authenticated clients per engine/query
  • Fix Javadoc build break from merge.
  • DH-14798: Fix DnD Database handling of splayed user tables
004Merge updates from 1.20230511.015
  • DH-14796: Set new tests enabled by default
  • DH-14670: Use testcase id in log output
  • DH-14178: Feb-Apr test case updates for QA
  • DH-14799: installer quote escaping
  • DH-14759: fix installer log file permissions
  • DH-14715: Do not return published Table in remote mode.
  • Update Web UI to v0.39.0
  • DH-14787: Integrated DnD panels from Community PQs
  • DH-14788: Integrated DnD Console
  • DH-14657: Better disconnect handling
  • DH-14803: Link parsing in table cells to be more restrictive
  • DH-14656: Fix DnD installation on clusters
  • Spotless application.
  • DH-14795: NPE while updating Envoy Snapshot
  • DH-12231: Correct Kubernetes Upgrade With new Branding
  • DH-14362: Provide Database access to DHC autocomplete
  • DH-14793: DnD Python table_names and namespaces need to return Python collection
003Merge updates from 1.20230511.008
  • DH-14595: Correct MergeData future construction race.
  • DH-14752: Update Envoy to v1.24.7, add remote debug configs for k8s
  • DH-12231: Updating Copyright Info to 2023, includes changes to automate copyright year update where applicable
  • DH-13780: Fix Java 8 compat and spotless
  • DH-13780: Handle DHC Barrage boolean serialization changes
  • DH-14790: Web Node List Breaks Installer Tests
002Add DnD libSource for new release
001Initial release creation from 1.20230512

New API to format the log-format suffix of internal partitions

A new builder method IntradayLoggerBuilder#setSuffixInternalPartitionWithLogFormat(String) has been added that lets the caller provide a single-argument String.format pattern. The formatted log-format value is appended to the internal-partition name. This overloads the existing IntradayLoggerBuilder#setSuffixInternalPartitionWithLogFormat(boolean), which, when true, appends the suffix using the default %d pattern.

Example:

For internal partition "ABC" and log-format version 4:

  • For setSuffixInternalPartitionWithLogFormat(true), the actual partition used would be ABC-4
  • For setSuffixInternalPartitionWithLogFormat("%02d"), the actual partition used would be ABC-04

Removed Jupyter Notebook integration

Server-side Jupyter Notebook integration has been removed from Deephaven. The Legacy worker Jupyter Notebook integration is no longer supported and will not be updated. In Deephaven 1.20231218 and later, use the Deephaven Core+ Python client from Jupyter notebooks.

Optional limit on appendCentral table size (client side)

Database.appendCentral(...) sends the given table to the Log Aggregator Service as an atomic update. A large enough table can cause the LAS to run out of memory.

You can now set a maximum table size (number of rows) that the appendCentral call will accept by setting the optional property LogAggregatorService.transactionLimit.rows. This check considers only the number of rows; it does not take the number of columns into account. Zero or unset means no limit is enforced.
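For example, to reject client-side appendCentral calls larger than one million rows (an illustrative value, not a recommended default):

# Illustrative limit; zero or unset disables the check
LogAggregatorService.transactionLimit.rows=1000000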

To make updates larger than the configured limit, either break the table into smaller pieces, or use the RemoteTableAppender directly to make a non-atomic update:

// Append without transactional atomicity via RemoteTableAppender
rta = new com.illumon.iris.db.util.logging.RemoteTableAppender(log, table.getDefinition().getWritable(), namespace, tableName, columnPartitionValue)
rta.append(table)
rta.flush()
rta.close()

See also Optional limit on appendCentral table size (server side) for the related server-side changes.

Optional limit on appendCentral table size (server side)

Calls to Database.appendCentral(...) and RemoteTableAppender.appendAtomic(...) send the given table to the Log Aggregator Service as an atomic update. A large enough table can cause the LAS to run out of memory.

You can now set a maximum table size (number of rows or number of bytes) that the Log Aggregator will accept by setting the optional properties LogAggregatorService.transactionLimit.rows or LogAggregatorService.transactionLimit.bytes. Zero or unset means no limit is enforced. When the Log Aggregator accumulates more rows or more bytes in a transaction than the configured limit, it aborts the transaction and releases the accumulated memory, and the client receives an error.
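For example, to abort transactions that exceed one million rows or 1 GiB (illustrative values, not recommended defaults):

# Illustrative limits; zero or unset disables each check
LogAggregatorService.transactionLimit.rows=1000000
LogAggregatorService.transactionLimit.bytes=1073741824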

To make updates larger than the configured limit, either break the table into smaller pieces, or use the RemoteTableAppender directly to make a non-atomic update:

// Append via RemoteTableAppender; each append is written non-atomically
rta = new com.illumon.iris.db.util.logging.RemoteTableAppender(log, table.getDefinition().getWritable(), namespace, tableName, columnPartitionValue)
rta.append(table)
rta.flush()
rta.close()

See also Optional client-side limit on appendCentral table size for related client-side changes.

Python 3.8 is the oldest supported Python version

Although Python 3.8 has already reached EOL, it is the newest version of Python built and tested for some versions of Deephaven.

As of Bard version 1.20211129.426, Python 3.8 is the only Python version built, and iris-defaults.prop changes the default from Python 3.6 to 3.8.

If you still have virtual environments set up with Python 3.6 or 3.7, you should replace them with Python 3.8 venvs. To use newer versions of Python, upgrade to a newer version of Deephaven.

For legacy systems, you can change the default back to Python 3.6 by updating your iris-environment.prop to set the various jpy.* props to the values found in iris-defaults.prop, inside the jpy.env=python36 stanza:

# Legacy python3.6 locations:
jpy.programName=/db/VEnvs/python36/bin/python3.6
jpy.pythonLib=/usr/lib64/libpython3.6m.so.1.0
jpy.jpyLib=/db/VEnvs/python36/lib/python3.6/site-packages/jpy.cpython-36m-x86_64-linux-gnu.so
jpy.jdlLib=/db/VEnvs/python36/lib/python3.6/site-packages/jdl.cpython-36m-x86_64-linux-gnu.so

The new iris-defaults.prop python props are now:

# New iris-defaults.prop python3.8 locations:
jpy.programName=/db/VEnvs/python38/bin/python3.8
jpy.pythonLib=/usr/lib/libpython3.8.so
jpy.jpyLib=/db/VEnvs/python38/lib/python3.8/site-packages/jpy.cpython-38-x86_64-linux-gnu.so
jpy.jdlLib=/db/VEnvs/python38/lib/python3.8/site-packages/jdl.cpython-38-x86_64-linux-gnu.so

Changes to Barrage subscriptions in Core+ Python workers

The subscribe and snapshot methods in deephaven_enterprise.remote_table have been changed to return a Python deephaven.table.Table object instead of a Java io.deephaven.engine.table.Table object. This allows users to use the Python methods update_view, rename_columns, etc., as expected without wrapping the returned table.

Existing Python code that manually wrapped the table or directly called the wrapped Java methods must be updated.

Example of previous behavior:

from deephaven_enterprise import remote_table as rt
table = rt.in_local_cluster(query_name="SubscribePQ", table_name="my_table").snapshot()
table = table.updateView("NewCol = random()")

Example of new behavior:

from deephaven_enterprise import remote_table as rt
table = rt.in_local_cluster(query_name="SubscribePQ", table_name="my_table").snapshot()
table = table.update_view("NewCol = random()")

Vermilion+ Core+ updated to 0.35.2

Vermilion+ 1.20231218.440 includes version 0.35.2 of the Deephaven Core engine. This is the same version that ships with Grizzly in 1.20240517.189, giving customers one Core engine version of overlap between major Deephaven Enterprise releases. Although the Core engine functionality is the same in 0.35.2, the Grizzly Core+ worker has several enhancements that are not available in the Vermilion+ Core+ worker. This change also updates gRPC to 1.61.0.

For details on the Core changes, see the Deephaven Core release notes for these versions.

Changes to vector support for Core+ user tables

Both the Legacy and Core engines have special database types to represent arrays of values. The Legacy engine uses the DbArray class, while the Core system uses the Vector class. While these implementations represent identical data, they pose challenges for interoperability between workers running different engines.

When a user table is written, the schema is inferred from the source table. Previously, Vectors would be recorded verbatim in the schema. This change explicitly encodes Vector types as their base Java array types, as follows:

Vector Class    Converted Schema Type
ByteVector      byte[]
CharVector      char[]
ShortVector     short[]
IntVector       int[]
LongVector      long[]
FloatVector     float[]
DoubleVector    double[]
Vector<T>       T[]

This makes it possible for the Legacy engine to read User tables written by the Core engine. Note that no conversion is made when the Legacy engine writes DbArray types because the Core+ engine already supports those types.

If you want your User table array columns to be Vector types, use an .update() or .updateView() clause to wrap the native arrays.

staticUserTable = db.historicalTable("MyNamespace", "MyTable")
                    .update("Longs = (io.deephaven.vector.LongVector)io.deephaven.vector.VectorFactory.Long.vectorWrap(Longs)")

Option to close Tailer-DIS connections early, while continuing to monitor files

A new property is available to customize the behavior of the Tailer.

log.tailer.defaultIdlePauseTime

This property is similar to log.tailer.defaultIdleTime, but it allows the Tailer to close connections early while continuing to monitor files. When the idle time specified by log.tailer.defaultIdleTime has passed without any changes to a monitored file, the Tailer closes the corresponding connection to the DIS and does not process any further changes to the file. The default idle time must therefore be at least as long as the default file rollover interval, plus some buffer.

The new property enables a new feature. When the time specified by log.tailer.defaultIdlePauseTime has passed without any changes to a monitored file, the Tailer closes the corresponding connection to the DIS but continues to monitor the file for changes. If a change is detected, the Tailer reopens the connection and processes the changes. For certain usage patterns, this reduces resource consumption or reclaims resources more quickly.

Helm Chart Tolerations, Node Selectors and Affinity

You can now add tolerations, node selection, and affinity attributes to pods created by the Deephaven Helm chart. By default, no tolerations, selectors or affinity are added. To add tolerations to all created deployments, modify your values.yaml file to include a tolerations block, which is then copied into each pod. For example:

tolerations:
- key: "foo"
  operator: "Exists"
  effect: "NoSchedule"
- key: "bar"
  value: "baz"
  operator: "Equal"
  effect: "NoSchedule"

This adds the following tolerations to each pod (in addition to the default tolerations provided by the Kubernetes system):

Tolerations:                 bar=baz:NoSchedule
                             foo:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s

Similarly, you can add a nodeSelector or affinity block:

nodeSelector:
  key1: "value1"
  key2: "value2"

affinity:
  nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: label
            operator: In
            values:
            - value1

This results in pods containing node selectors like:

Node-Selectors:              key1=value1
                             key2=value2

And affinity as follows:

  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - preference:
          matchExpressions:
          - key: label
            operator: In
            values:
            - value1
        weight: 1

Ability to disable password authentication in front-end

A new property, authentication.client.disablePasswordAuth=true, may be used to remove the username/password authentication option from the Swing front-end. The property has no effect if no other login options are available.

This property does not disable username/password authentication at the server level (see Disabling password authentication).

Allow config to override ServiceRegistry hostname

The hostname that the Data Import Server (DIS) registers with the service registry may now be defined in the host tag within the DIS's routing endpoint of the routing configuration, or using the new ServiceRegistry.overrideHostname system property. The precedence for the service registry host, from highest to lowest, is:

  • The routing endpoint configuration. Prior to this change, the host value within the endpoint configuration was ignored.
  • ServiceRegistry.overrideHostname property.
  • On Kubernetes, the worker's service's hostname.
  • On bare metal, the result of the Java InetAddress.getLocalHost().getHostName() call.

Optional lenient IOJobImpl to avoid write queue overflow

New behavior is available to avoid write queue overflow errors in the TDCP process. When a write queue overflow condition is detected, the process can be configured to delay briefly, giving the queue a chance to drain.

The following properties govern the feature:

IOJobImpl.lenientWriteQueue
IOJobImpl.lenientWriteQueue.retryDelay
IOJobImpl.lenientWriteQueue.maxDelay

Set IOJobImpl.lenientWriteQueue=true to enable this behavior. By default, the writer will wait up to IOJobImpl.lenientWriteQueue.maxDelay=60_000 ms in increments of IOJobImpl.lenientWriteQueue.retryDelay=100 ms.
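
For example, to enable the lenient behavior with the documented default timing values (both in milliseconds):

IOJobImpl.lenientWriteQueue=true
IOJobImpl.lenientWriteQueue.retryDelay=100
IOJobImpl.lenientWriteQueue.maxDelay=60000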

This should address the following fatal error in the TDCP process:

ERROR - job:1424444133/RemoteTableDataService/10.128.1.75:37440->10.128.1.75:22015 write queue overflow: r=true, w=true, p=false, s=false, u=false, h=0, rcap=69632, rbyt=0, rmax=4259840, wbyt=315407, wspc=1048832, wbuf=4097, wmax=1048576, fc=0, allowFlush=true

Option to default all user tables to Parquet

Set the configuration property db.LegacyDirectUserTableStorageFormat=Parquet to default all direct user table operations, such as db.addTable, to the Parquet storage format. The default if the property is not set is DeephavenV1.
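
For example, in your property file:

db.LegacyDirectUserTableStorageFormat=Parquet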

Deephaven processes log their heap usage

The db_dis, web_api_service, log_aggregator_service, iris_controller, db_tdcp, and configuration_server processes now periodically log their heap usage.

PersistentQueryController.log.current:[2024-05-10T15:00:32.365219-0400] - INFO - Jvm Heap: 3,972,537,856 Free / 4,291,624,960 Total (4,291,624,960 Max)
PersistentQueryController.log.current:[2024-05-10T15:01:32.365404-0400] - INFO - Jvm Heap: 3,972,310,192 Free / 4,291,624,960 Total (4,291,624,960 Max)

The logging interval can be configured using the property RuntimeMemory.logIntervalMillis. The default is one minute.
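
For example, to log heap usage every five minutes instead (the value is in milliseconds):

RuntimeMemory.logIntervalMillis=300000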

Disabling Password Authentication

To disable password authentication within the authentication server, set the configuration property authentication.passwordsEnabled to false. When the property is set to false, the authentication server rejects all password logins and you must use SAML or private key authentication to access Deephaven.

Note that even if the UI presents a password prompt, the authentication backend rejects all passwords.
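
For example:

authentication.passwordsEnabled=false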

Kubernetes Heap Overhead Parameters

When running Deephaven installations in Kubernetes, the originally-implemented JVM overhead properties do not prevent some workers from being killed with out-of-memory exceptions.

  • Adding the BinaryStoreWriterV2.allocateDirect=false JVM parameter reduces direct memory usage, which is not counted towards dispatcher heap usage and can therefore result in Kubernetes out-of-memory failures.
  • Adding the -Xms JVM parameter allocates all requested heap at worker creation time, reducing the likelihood of after-startup worker out-of-memory failures from later memory requests.
  • Adding the -XX:+AlwaysPreTouch JVM parameter to workers ensures that all worker heap is touched during startup, avoiding later page-faulting.

The following properties are being added to iris-environment.prop for new installations. Deephaven strongly suggests adding them manually to existing installations.

RemoteProcessingRequestProfile.Xms.G1 GC=$RequestedHeap
RemoteQueryDispatcher.JVMParameters=-XX:+AlwaysPreTouch
BinaryStoreWriterV2.allocateDirect=false

In addition, the property RemoteQueryDispatcher.memoryOverheadMB=500 is being updated in iris-defaults.prop, and this will automatically be picked up when the Kubernetes installation is upgraded.

Dispatcher Memory Reservation

The Remote Query Dispatcher (either db_query_server or db_merge_server) has a configurable amount of heap that can be dispatched to workers, controlled by the RemoteQueryDispatcher.maxTotalQueryProcessorHeapMB property. Setting this property requires accounting for the other processes that may be running on the machine. If set too high, workers may fail to allocate memory after being dispatched, or the kernel OOM killer may terminate processes. If set too low, the machine may be underutilized.

As an additional safety check, the Remote Query Dispatcher can query the /proc/meminfo file for available memory. If a user requests more heap than the MemAvailable field indicates can be allocated to a new process, the remote query dispatcher can reject scheduling the worker. By default, this new functionality is disabled.

There are two new properties that control this behavior:

  • RemoteQueryDispatcher.adminReservedAvailableMemoryMB: for users that are members of RemoteQueryDispatcher.adminGroups
  • RemoteQueryDispatcher.reservedAvailableMemoryMB: for all other users

When set to -1, the default, the additional check is disabled. When set to a non-negative value, the dispatcher subtracts the property's value from the available memory and verifies that the worker heap is less than the result before creating the worker.

You can examine the current status of these properties using the /config endpoint if RemoteQueryDispatcher.webserver.enabled is set to true. For example, navigate to https://query-host.example.com:8084/config. The available memory and the property values are displayed as an HTML table.

This property does not guarantee that workers or other processes are not terminated by the OOM killer. Running workers and processes may not have allocated their maximum heap size, and therefore can use system memory beyond what is available at dispatch time.

ILLUMON_JAVA is deprecated. Use DH_JAVA instead.

In the past, specifying which version of Java to use with Deephaven was done with the ILLUMON_JAVA environment variable, and it was applied inconsistently.

In this release, you can set DH_JAVA=/path/to/java_to_use/bin/java in your cluster.cnf to tell all Deephaven processes where to find the correct java executable regardless of your PATH.

DH_JAVA works correctly whether you point to a Java executable or a Java installation directory (like JAVA_HOME). Both DH_JAVA=/path/to/java_to_use and DH_JAVA=/path/to/java_to_use/bin/java operate identically.

If different machines in your cluster have java executables located in different locations, it is your responsibility to set DH_JAVA correctly in /etc/sysconfig/deephaven/cluster.cnf on each machine, or (preferably) to use a symlink so you have a consistent DH_JAVA location on all machines.

Core+ Controller Python Imports

From Core+ Python workers, you may now import Python modules from repositories stored in the controller. To evaluate a single Python file:

import deephaven_enterprise.controller_import

deephaven_enterprise.controller_import.exec_script("script_to_execute.py")

To import a script as a module, you must establish a meta-import with a module prefix for the controller. The following example uses the default value of "controller" to load a module of the form "package1/package2.py" or "package1/package2/__init__.py":

import deephaven_enterprise.controller_import

deephaven_enterprise.controller_import.meta_import()

import controller.package1.package2

Refreshing Local Script Repositories

The Persistent Query Controller defines a set of script repositories that can be used from Persistent Queries or Code Studios. The repositories may be configured to use a remote Git repository or just a path on the local file system. The controller scans the repository on startup for the list of scripts that are available. Previously, only Git repositories could have updates enabled (once per minute); and local repositories would never be rescanned.

You can now set the property PersistentQueryController.scriptUpdateEnabled to true to enable script updates. If this property is not set, then the old PersistentQueryController.useLocalGit property is used. The old property has an inverse sense: PersistentQueryController.useLocalGit=true stops updates, and PersistentQueryController.useLocalGit=false permits updates.
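
For example, to enable rescanning of script repositories:

PersistentQueryController.scriptUpdateEnabled=true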

To mark a repository as local, the "uri" parameter must be set to empty. For example, if the repository is referred to as "irisrepo" in the iris.scripts.repos property, then to mark it as local you would include a property like the following in your iris-environment.prop file:

iris.scripts.repo.irisrepo.uri=

Fixing etcd ACLs that broke after upgrading to URL encodings

Note that the following is only applicable to etcd ACLs.

In 1.20231218.116 and 1.20231218.132, Deephaven began URL encoding ACL keys to prevent special characters like '/' in keys from corrupting the ACL database. Although not all special characters corrupted the database, all of them are now encoded, so an unencoded database is incompatible with the new version. A common occurrence of this pattern is the "@" character in usernames.

These ACL entries can be fixed using the EtcdAclEncodingTool.

First, back up your etcd database by reading our backup and restore instructions.

To rewrite these ACLs with proper encodings, run the following command as irisadmin:

sudo -u irisadmin /usr/illumon/latest/bin/iris_exec com.illumon.iris.db.v2.permissions.EtcdAclEncodingTool

To see what changes would occur without actually modifying the ACLs, run:

sudo -u irisadmin /usr/illumon/latest/bin/iris_exec com.illumon.iris.db.v2.permissions.EtcdAclEncodingTool -a --dry-run

Setting JVM JIT Compiler Options for Workers

The ability to set the maximum number of allowed JVM JIT compiler threads through the -XX:CICompilerCount JVM option has been added to JVM profiles using properties of the form RemoteProcessingRequestProfile.JitCompilerCount. See the remote processing profiles documentation for further information.
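
A hypothetical example, assuming these properties follow the same per-profile suffix pattern as the other RemoteProcessingRequestProfile settings shown elsewhere in these notes (the profile name and thread count are illustrative):

RemoteProcessingRequestProfile.JitCompilerCount.G1 GC=2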

Upgrade etcd to 3.5.12

In past releases, we recommended upgrading etcd to 3.5.5.

It was later discovered that 3.5.5 has a known bug which can break your etcd cluster if you perform an etcdctl password reset.

As such, when upgrading etcd, you should prefer the Deephaven-tested 3.5.12 point release, which is the new default as of version 1.20231218.190.

All newly created systems will have 3.5.12 installed, but for existing systems, you must unpack new etcd binaries yourself.

You can find manual etcd installation instructions in the Reducing Root to Zero guide.

Configurable gRPC Retries

The configuration service now supports using a gRPC service configuration file to configure retries, and one is provided by default for the system.

{
  "methodConfig": [
    {
      "name": [
          {
              "service": "io.deephaven.proto.config.grpc.ConfigApi"
          },
          {
              "service": "io.deephaven.proto.registry.grpc.RegistryApi"
          },
          {
              "service": "io.deephaven.proto.routing.grpc.RoutingApi"
          },
          {
              "service": "io.deephaven.proto.schema.grpc.SchemaApi"
          },
          {
              "service": "io.deephaven.proto.processregistry.grpc.ProcessRegistryApi"
          },
          {
              "service": "io.deephaven.proto.unified.grpc.UnifiedApi"
          }
      ],

      "retryPolicy": {
        "maxAttempts": 60,
        "initialBackoff": "0.5s",
        "maxBackoff": "2s",
        "backoffMultiplier": 2,
        "retryableStatusCodes": [
          "UNAVAILABLE"
        ]
      },

      "waitForReady": true,
      "timeout": "120s"
    }
  ]
}

methodConfig has one or more entries. Each entry has a name section with one or more service/method sections that filter whether the retryPolicy section applies.

If the method is empty or not present, then it applies to all methods of the service. If service is empty, then method must be empty, and this is the default policy.

The retryPolicy section defines how a failing gRPC call is retried. In this example, gRPC retries for just over 1 minute while the status code is UNAVAILABLE (e.g., the service is down). Note this applies only if the server is up but the individual RPCs are being failed as UNAVAILABLE by the server itself. If the server is down, the status returned is UNAVAILABLE, but the retryPolicy defined here for the method does not apply; gRPC manages reconnection retries for a channel separately and independently, as described in https://github.com/grpc/grpc/blob/master/doc/connection-backoff.md

There is no way to configure the parameters for reconnection; see https://github.com/grpc/grpc-java/issues/9353

If the service config file specifies waitForReady, then an RPC executed when the channel is not ready (server is down) does not fail right away but waits for the channel to become connected. Combined with a timeout definition, this makes the RPC call hold on for as long as the timeout, giving the reconnection policy a chance to get the channel to ready.

For Deephaven processes, the service config can be customized by (a) copying configuration_service_config.json to /etc/sysconfig/illumon.d/resources and modifying it there, or (b) renaming the copy and setting the configuration.server.service.config.json property accordingly.

Note that the property must be set as a JVM argument at launch, because it is used by the gRPC connection that fetches the initial properties.
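
For example, as a launch-time JVM argument (the value shown is an illustrative file name, not confirmed syntax):

-Dconfiguration.server.service.config.json=my_service_config.json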

Note: The relevant service names are:

io.deephaven.proto.routing.grpc.RoutingApi
io.deephaven.proto.config.grpc.ConfigApi
io.deephaven.proto.registry.grpc.RegistryApi
io.deephaven.proto.schema.grpc.SchemaApi
io.deephaven.proto.unified.grpc.UnifiedApi

Add Core+ Calendar support and allow Java ZoneId strings in Legacy Calendars

Core+ workers can use the Calendars.resourcePath property to load customer-provided business calendars from disk. To use calendars in Core+ workers, any custom calendars on your resource path must be updated to use a standard TimeZone value.

Legacy workers also support using ZoneId strings instead of DBTimeZone values.
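
For example (the directory shown is an illustrative assumption):

Calendars.resourcePath=/etc/sysconfig/illumon.d/calendars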

Dynamic management of Data Import Server configurations

Creating a new Data Import Server configuration and integrating it into the Deephaven system requires several steps, including adjustments to the data routing configuration. This final step can now be performed with a few simple commands, and no longer requires editing the data routing configuration file.

dhconfig dis

The dhconfig command has a new action: dis, which supports the import, add, export, list, delete, and validate actions. The commands themselves provide help, and more information can be found in the dhconfig documentation.

dhconfig dis import

Import one or more configurations from one or more files. For example:

/usr/illumon/latest/bin/dhconfig dis import /path/to/kafka.yml

kafka.yml

kafka:
  name: kafka
  endpoint:
    serviceRegistry: registry
    tailerPortDisabled: 'false'
    tableDataPortDisabled: 'false'
  claims:
  - {namespace: Kafka}
  storage: private

dhconfig dis add

Define and import a single configuration on the command line. For example (equivalent to the import example above):

/usr/illumon/latest/bin/dhconfig dis add --name kafka --claim Kafka

dhconfig dis export

Export one or more configurations to one or more files. These files are suitable for the import command. For example, to export all configured Data Import Servers:

/usr/illumon/latest/bin/dhconfig dis export --file /tmp/import_servers.yml

dhconfig dis list

List all configured Data Import Servers. For example:

/usr/illumon/latest/bin/dhconfig dis list
Data import server configurations:
    kafka
    kafka3

dhconfig dis delete

Delete one or more configurations. For example:

/usr/illumon/latest/bin/dhconfig dis delete kafka --force

dhconfig dis validate

Validate one or more configurations. This can validate proposed changes before committing them with the import command. This process verifies that the configuration as a whole will be valid after applying the new changes.
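
For example, assuming validate accepts configuration files in the same way as import (this invocation is an assumption, not confirmed syntax):

/usr/illumon/latest/bin/dhconfig dis validate /path/to/kafka.yml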

Caveats

"Data routing configuration" comprises the "main" configuration file (managed with dhconfig routing) and additional DIS configurations. The main routing configuration may contain DIS configurations in the dataImportServers section. These two sources of DIS configurations are managed separately and are not permitted to contain duplicates. If you want to manage an existing DIS configuration with the new commands, you must remove it from the main routing configuration.

This functionality will only be useful for querying data if the routing configuration includes "all data import servers" using the dataImportServers keyword. This is usually a source under the db_tdcp table data service:

    db_tdcp:
      host: *localhost
      port: *default-tableDataCacheProxyPort
      sources:
        - name: dataImportServers

A DIS configuration requires storage. The special value private indicates that the server will supply its own storage location. Any other value must be present in the storage section of the routing configuration.

Update jgit SshSessionFactory to a more modern/supported version

For our Git integration, we have been using the org.eclipse.jgit package. GitHub discontinued support for SHA-1 RSA SSH keys, but jgit's SSH implementation (com.jcraft:jsch) does not support rsa-sha2 signatures and will not be updated. To enable stronger SSH keys and provide GitHub compatibility, we have configured jgit to use an external SSH executable by setting the GIT_SSH environment variable. The /usr/bin/ssh executable must be present for Git updates.

Restartable Controller

If the iris_controller process restarts quickly enough, Core+ workers that were already initialized and running normally by the time the controller restarted continue running without interruption. Legacy workers still terminate on controller restart.

  • The duration that workers can survive without the controller is defined by the property PersistentQueryController.etcdPresenceLeaseTtlSeconds, which defaults to 60 (seconds).
  • Only workers that had completed initialization and were in the Running state before the controller died, and that should still be running at the time of the controller restart according to their query configuration, survive without interruption.

If the iris_controller is stopped normally (e.g., via monit stop or a regular UNIX TERM signal), the value of the property PersistentQueryController.stopWorkersOnShutdown determines the desired behavior for workers.

  • When set to true, all controller-managed workers are stopped alongside the controller. This is consistent with the traditional behavior.
  • When set to false (the new default), workers do not stop alongside the controller, and have the time defined in the property PersistentQueryController.etcdPresenceLeaseTtlSeconds (defaults to 60 seconds) as a grace period where they wait for the controller to come back.

If the controller crashes (i.e., the iris_controller process is stopped unexpectedly by an exception that crashes the process, a machine reboot, or a UNIX KILL signal), then workers are not proactively stopped even if the value of PersistentQueryController.stopWorkersOnShutdown is true. In this case, the dispatcher terminates those workers after the PersistentQueryController.etcdPresenceLeaseTtlSeconds timeout.

Note: irrespective of the value of the PersistentQueryController.stopWorkersOnShutdown property, if the dispatcher associated with a worker is shut down, the worker stops.

Renamed Swing Launcher Archives

The downloadable swing launcher has been renamed as follows:

  • DeephavenLauncherSetup_123.exe is now deephaven-launcher-123.exe
  • DeephavenLauncher_123.tar is now deephaven-launcher-123.tgz

Reliable Barrage table connections

We have added a new library to provide reliable Barrage subscriptions within a Deephaven Core+ cluster. The new tables monitor the state of the source query and gracefully handle disconnection and reconnections without user intervention. This can be used to create reliable meshes of Core+ workers that are fault tolerant to the loss of other queries.

When using ResolveTools, PQ URLs (pq://MyQuery/scope/MyTable?columns=MyFirstColumn,SomeOtherColumn) use these new reliable tables.

To use this library, see the following examples.

Groovy:

import io.deephaven.enterprise.remote.RemoteTableBuilder
import io.deephaven.enterprise.remote.SubscriptionOptions

// Subscribe to the columns `MyFirstColumn` and `SomeOtherColumn` of the table `MyTable` from the query `MyQuery`
table = RemoteTableBuilder.forLocalCluster()
    .queryName("MyQuery")
    .tableName("MyTable")
    .subscribe(SubscriptionOptions.builder()
        .addIncludedColumns("MyFirstColumn", "SomeOtherColumn").build())

Python:

from deephaven_enterprise import remote_table as rt

# Subscribe to the columns `MyFirstColumn` and `SomeOtherColumn` of the table `MyTable` from the query `MyQuery`
table = rt.in_local_cluster(query_name="MyQuery", table_name="MyTable") \
        .subscribe(included_columns=["MyFirstColumn", "SomeOtherColumn"])

Connecting to remote clusters

It is also possible to connect to queries on a different Deephaven cluster.

Groovy:

import io.deephaven.enterprise.remote.RemoteTableBuilder
import io.deephaven.enterprise.remote.SubscriptionOptions

table = RemoteTableBuilder.forRemoteCluster("https://other-server.mycompany.com:8000/iris/connection.json")
        .password("user", "password")
        .queryName("MyQuery")
        .tableName("MyTable")
        .subscribe(SubscriptionOptions.builder()
                .addIncludedColumns("MyFirstColumn", "SomeOtherColumn").build())

Python:

from deephaven_enterprise import remote_table as rt

# Subscribe to the columns `MyFirstColumn` and `SomeOtherColumn` of the table `MyTable` from the query `MyQuery`
table = rt.for_remote_cluster("https://other-server.mycompany.com:8000/iris/connection.json") \
    .password("username", "password") \
    .query_name("MyQuery") \
    .table_name("MyTable") \
    .subscribe(included_columns=["MyFirstColumn", "SomeOtherColumn"])

ACLs for Update Core+ Performance Tables

Preexisting installs must manually add new ACLs for the new DbInternal tables.

First, create a text file (e.g. /tmp/new-acls.txt) with the following contents:

-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("EffectiveUser"), new UsernameFilterGenerator("AuthenticatedUser"))' -group allusers -namespace DbInternal -table ProcessMetricsLogCoreV2 -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("EffectiveUser"), new UsernameFilterGenerator("AuthenticatedUser"))' -group allusers -namespace DbInternal -table ServerStateLogCoreV2 -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))' -group allusers -namespace DbInternal -table UpdatePerformanceLogCoreV2 -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))' -group allusers -namespace DbInternal -table QueryOperationPerformanceLogCoreV2 -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))' -group allusers -namespace DbInternal -table QueryPerformanceLogCoreV2 -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))' -group allusers -namespace DbInternal -table UpdatePerformanceLogCoreV2Index -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))' -group allusers -namespace DbInternal -table QueryOperationPerformanceLogCoreV2Index -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))' -group allusers -namespace DbInternal -table QueryPerformanceLogCoreV2Index -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("EffectiveUser"), new UsernameFilterGenerator("AuthenticatedUser"))' -group allusers -namespace DbInternal -table ServerStateLogCoreV2Index -overwrite_existing
exit

Then, run the following to add the new ACLs into the system:

sudo -u irisadmin /usr/illumon/latest/bin/iris iris_db_user_mod --file /tmp/new-acls.txt

Alternatively, the ACLs can be added manually one by one in the ACL Editor:

allusers | DbInternal | ServerStateLogCoreV2 | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("EffectiveUser"), new UsernameFilterGenerator("AuthenticatedUser"))
allusers | DbInternal | ProcessMetricsLogCoreV2 | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("EffectiveUser"), new UsernameFilterGenerator("AuthenticatedUser"))
allusers | DbInternal | UpdatePerformanceLogCoreV2 | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))
allusers | DbInternal | QueryOperationPerformanceLogCoreV2 | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))
allusers | DbInternal | QueryPerformanceLogCoreV2 | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))
allusers | DbInternal | ServerStateLogCoreV2Index | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("EffectiveUser"), new UsernameFilterGenerator("AuthenticatedUser"))
allusers | DbInternal | UpdatePerformanceLogCoreV2Index | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))
allusers | DbInternal | QueryOperationPerformanceLogCoreV2Index | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))
allusers | DbInternal | QueryPerformanceLogCoreV2Index | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))

Worker name format change

Worker names are no longer assigned in an ascending manner beginning from "worker_1". Instead, worker names begin with "worker_" followed by a prefix of the process info ID. Note that the worker name is not guaranteed to be unique; using the process info ID is the only way to reliably find a specific worker within logs.

The "request ID" field has been removed from the RemoteProcessingRequest. The client now assigns the process info ID, so you can use it to search logs on both the client and the server.

Custom Setter Support

For CSV imports using the new Deephaven Community CSV parser, CustomSetters are now supported.

The changes are backward compatible, so existing CustomSetter implementations continue to work as-is. However, it is recommended to use the new custom setter interface for new imports and to consider transitioning existing imports to the new interface.

The new interface provides the following key benefits:

  • Avoids creating CSVRecord objects.
  • Column data types are retained (column values extracted from a CSVRecord would always be strings).
  • Passed-in constant values may be accessed directly via getConstantColumnValue().

CustomSetter example (Legacy)

Below is a simple example Custom Setter implementation using the legacy approach. The next section details how to convert it to the new interface.

The example builds a Full Name column from the First Name and Last Name columns, optionally including a Name Prefix when the Name Prefix constant column is included.

Legacy Schema for Full Name Column

<Table name="ConstCustomSetter" namespace="Test" storageType="NestedPartitionedOnDisk">
  <Partitions keyFormula="__PARTITION_AUTOBALANCE_SINGLE__" />

  <Column name="Partition" dataType="String" columnType="Partitioning" />
  <Column name="FullName" dataType="String" />
  <Column name="FirstName" dataType="String" />
  <Column name="LastName" dataType="String" />

  <ImportSource name="IrisCSV" type="CSV" arrayDelimiter="," >
    <ImportColumn name="NamePrefix" sourceType="CONSTANT" />
    <ImportColumn name="FullName" sourceName="FirstName" class="com.illumon.iris.importers.CsvConstColumnSetterExample" />
    <ImportColumn name="FirstName" />
    <ImportColumn name="LastName" />
  </ImportSource>
</Table>

Legacy Implementation CsvConstColumnSetterExample

package com.illumon.iris.importers;

import com.fishlib.io.logger.Logger;
import com.illumon.iris.binarystore.RowSetter;
import org.apache.commons.csv.CSVRecord;

import java.io.IOException;

/**
 * Example Custom Setter implementation for Full Name Column using Legacy approach
 */
public class CsvConstColumnSetterExample extends CsvFieldWriter {

    private final RowSetter setter;
    private final ImporterColumnDefinition column;

    /**
     * Constructor using the format that is required for custom CsvFieldWriters
     * 
     * @param log The passed in logger
     * @param strict The strict parameter as chosen for the Import
     * @param column The import column definition for the CustomSetter column
     * @param setter The RowSetter to be used to populate the Column value for the Row
     * @param delimiter The array delimiter used in the import
     */
    public CsvConstColumnSetterExample(final Logger log, final boolean strict, final ImporterColumnDefinition column, final RowSetter setter,
                                 final String delimiter) {
        super(log, column.getName(), delimiter);
        this.setter = setter;
        this.column = column;
    }

    @SuppressWarnings("unchecked")
    @Override
    public void processField(final CSVRecord record) throws IOException {
        setter.set(getConstantColumnValue("NamePrefix") + " " + record.get("FirstName") + " " + record.get("LastName"));
    }

}

New Interface implementation for Full Name

Below are the schema and implementation class of the CustomSetter for the same Full Name column using the new interface.

Schema
<Table name="NewFormatConstCustomSetter" namespace="Test" storageType="NestedPartitionedOnDisk">
  <Partitions keyFormula="__PARTITION_AUTOBALANCE_SINGLE__" />

  <Column name="Partition" dataType="String" columnType="Partitioning" />
  <Column name="FullName" dataType="String" />
  <Column name="FirstName" dataType="String" />
  <Column name="LastName" dataType="String" />

  <ImportSource name="IrisCSV" type="CSV" arrayDelimiter="," >
    <ImportColumn name="FullName" class="com.illumon.iris.importers.CsvDhcConstColumnSetterExample" />
  </ImportSource>
</Table>
Implementation Class

As shown below, the key differences are:

  1. The base class is BaseCsvFieldWriter.
  2. The method to implement is void processRow(@NotNull final Map<String, CustomSetterValue<?>> columnNameToValueMap).
    1. As its name suggests, each key in columnNameToValueMap is a column name and points to a CustomSetterValue object, which holds the appropriate column value by type.
    2. CustomSetterValue implements the RowSetter<?> interface but also supports a getter, allowing values to be saved and retrieved by their type.
  3. In addition, the passed-in constant can be retrieved using getConstantColumnValue(); the legacy form getConstantColumnValue("NamePrefix"), where NamePrefix is the ImportColumn name, also still works.
    1. XmlImports supports passing in an importProperties map, which allows multiple constant columns; in that case, it is preferable to use getConstantColumnValue(columnName).

package com.illumon.iris.importers;

import com.fishlib.io.logger.Logger;
import com.illumon.iris.binarystore.RowSetter;
import org.jetbrains.annotations.NotNull;

import java.util.Map;

/**
 * Example Custom Setter implementation for Full Name Column using New Format
 */
public class CsvDhcConstColumnSetterExample extends BaseCsvFieldWriter {

    private final RowSetter<String> setter;

   /**
    * Constructor required for custom BaseFieldWriter
    *
    * @param log       The passed in log
    * @param strict    The value of strict flag chosen for import
    * @param column    The import column definition for the CustomSetter column
    * @param setter    The RowSetter that will be used to set the property
    * @param delimiter The array delimiter used
    */
    public CsvDhcConstColumnSetterExample(final Logger log, 
                                          final boolean strict, 
                                          final ImporterColumnDefinition column,
                                          final RowSetter<?> setter, 
                                          final String delimiter) {
        super(log, column.getName(), delimiter);
        //noinspection unchecked
        this.setter = (RowSetter<String>) setter;
    }

    @Override
    public void processRow(@NotNull final Map<String, Object> columnNameToValueMap) {
        final String firstName = (String) columnNameToValueMap.get("FirstName");
        final String lastName = (String) columnNameToValueMap.get("LastName");
        final String fullName = getConstantColumnValue() + " " + firstName + " " + lastName;
        setter.set(fullName);
    }

}

IrisLogCreator constructor changes

The constructors in the IrisLogCreator class have been changed. Any uses of these constructors should add a new boolean parameter to the call, which determines whether to create an audit event logger. The old constructors have been deprecated but are still available; they do not create audit event loggers.

Automatically Provisioned Python venv Will Only Use Binary Dependencies

All pip installs performed as part of the automatic upgrade of Python virtual environments will now pass the --only-binary=:all: flag, which will prevent pip from ever attempting to build dependencies on a customer machine.
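
For reference, this corresponds to pip invocations of the form shown below; the package pin is one of the upgrades listed later in this note:

pip install --only-binary=:all: dill==0.3.3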

As part of this change, we automatically upgrade pip and setuptools in all virtual environments, and we have upgraded a number of dependencies for which pip refused to use prebuilt binaries:

For all virtual environments:
dill==0.3.1.1 is now dill==0.3.3
wrapt==1.11.2 is now wrapt==1.13.2

For jupyter virtual environments:
backcall==0.1.0 is now backcall==0.2.0
tornado==6.0.3 is now tornado==6.1

Product Installation File Rename

The Deephaven tar / RPM installation files have been renamed to include the Java version they are built for, and to better replace legacy names with modern product names.

The Enterprise installer tar now has a -jdkN classifier. For example, illumon-db-1.20231212.123.tar.gz is now deephaven-enterprise-1.20231212.123-jdk17.tar.gz.

The Enterprise RPM now carries the JDK major version as the deephaven-enterprise package's minor version. For example, illumon-db-1.20231212.123-1-1.rpm is now deephaven-enterprise-1.20231212.123-17-1.rpm.

The Core+ tar file has been gzipped and renamed with a -jdkN classifier and a .tgz file extension.
For example, io.deephaven.enterprise.dnd-0.32.0-1.20231212.123.tar is now deephaven-coreplus-0.32.0-1.20231212.123-jdk17.tgz.

Note that ONLY the filenames and RPM package name have changed. All paths on the filesystem still reflect legacy locations, except for a single renamed file:

/usr/illumon/dnd/latest/bin/io.deephaven.enterprise.dnd has been renamed to /usr/illumon/dnd/latest/bin/deephaven-coreplus.

MergeDataBuilder refactored

The hierarchy of Java classes that manage merge operations had become unwieldy. In this release, we refactored the internals to consistently use the same MergeDataBuilder interface that has been the preferred mechanism for user scripts.

We expanded that Builder class to include settings needed to support operations previously accessed via special object classes and overloaded methods. One noteworthy example is the deleted MergeFromTable class, the functionality of which is now accessed directly using the sourceTable method of the builder.

The merge methods of the Data Merging Classes taking large numbers of parameters are gone. Scripts using them can be converted to use the builder pattern straightforwardly.

The details of the builder API have changed in several ways. All scripts and programs that directly initiate a merge operation will likely require attention. Merge Persistent Queries and the tools for interacting with them are not affected, but any Persistent Query that initiates a merge using script syntax will require attention.

Please refer to the Merge API Reference section of the Deephaven Enterprise user guide for full details and examples that illustrate the necessary changes.

Below are the "before" and "after" versions of the changed portion of the example script.

Before

new MergeFromTable().merge(
    log, // automatically available in a console worker
    com.fishlib.util.process.ProcessEnvironment.getGlobalFatalErrorReporter(),
    namespace,
    tableName,
    date,
    threadPoolSize,
    maxConcurrentColumns,
    lowHeapUsage,
    force,
    allowEmptyInput,
    sortColumnFormula,
    db, // automatically available in a console worker
    progress,
    null, // storageFormat not needed
    null, // parquetCodecName not needed
    null, // syncMode not needed
    lateCleanup,
    sourceTable)

After

MergeParameters params = MergeParameters.builder(db, namespace, tableName)
        .partitionColumnValue(date)
        .threadPoolSize(threadPoolSize)
        .maxConcurrentColumns(maxConcurrentColumns)
        .lowHeapUsage(lowHeapUsage)
        .force(force)
        .allowEmptyInput(allowEmptyInput)
        .sortColumnFormula(sortColumnFormula)
        .lateCleanup(lateCleanup)
        .sourceTable(sourceTable)
        .build()
MergeData.of(params).run(com.fishlib.util.process.ProcessEnvironment.getGlobalFatalErrorReporter(), progress)

Switch to NFS v4 for Kubernetes RWX Persistent Volumes

NFS v3 Persistent Volume connections do not support locking. This manifests most obviously when attempting to work with user tables in Deephaven on Kubernetes. By default, user table activities will wait indefinitely to obtain a lock to read or write data. This can be bypassed by setting -DOnDiskDatabase.useTableLockFile=false; this work-around was provided by DH-15640.

This change (DH-15830) switches Deephaven Kubernetes RWX Persistent Volume definitions to use NFS v4 instead, which includes lock management as part of the NFS protocol itself. In order for this change to be made, the NFS server must be reconfigured to export the RWX paths relative to a shared root path (fsid=0), but the existing PVs must use the same path to connect, since PV paths are immutable.

There are two options to reconfigure the NFS server:

  1. The Deephaven Kubernetes install wrapper script (dh_helm) can be used for the upgrade; it automatically checks for an NFS Pod that was deployed as part of Deephaven Kubernetes setup, and runs an upgrade script to reconfigure it if it is not already exporting an NFS v4 path.

  2. In cases where the NFS server is not a Deephaven deployed Pod, or where you want to make other changes to the NFS configuration, you can manually run the upgrade-nfs-minimal.sh script against the NFS server. It is important to set the environment variable SETUP_NFS_EXPORTS to y before running the script.

    • To manually run the script against an NFS Pod:
      • Run kubectl get pods to get the name of your NFS server Pod and confirm that it is running.

      • Copy the setup script to the NFS pod by running this command, using your specific NFS pod name:

        # Run 'kubectl get pods' to find your specific nfs-server pod name and use that as the copy target host in this command.
        kubectl cp setupTools/upgrade-nfs-minimal.sh <nfs-server-name>:/upgrade-nfs-minimal.sh
        
      • Run this command to execute that script, once again substituting the name of your NFS Pod:

        kubectl exec <nfs-server-name> -- bash -c "export SETUP_NFS_EXPORTS=y && chmod 755 /upgrade-nfs-minimal.sh && /upgrade-nfs-minimal.sh"
        

The upgrade script:

  • replaces /etc/exports, and backs up the original file to /etc/exports_<epoch_timestamp>. The new file will have only one entry, which exports the /exports directory with fsid=0.
  • adds an exports sub-directory under /exports, and moves the dhsystem directory there. This is so clients will still find their NFS paths under /exports/dhsystem when connecting to the fsid=0 "root".

The existing PVs' spec sections are updated with:

mountOptions:
    - hard
    - nfsvers=4.1

After upgrading to a version of Deephaven that includes this change (DH-15830), you should remove the -DOnDiskDatabase.useTableLockFile=false work-around, so normal file locking behavior can be used when working with user tables.

Requiring ACLs on all exported objects

When exporting objects from a Persistent Query, there are now two modes of operation controlled by the property PersistentQuery.openSharingDefault.

In either mode, when an ACL is applied to any object (e.g., tables or plots) within the query, objects without an ACL are only visible to the query owner and admins (owners and admins never have ACLs applied).

When a viewer connects:

  • If PersistentQuery.openSharingDefault is set to true, persistent queries that are shared without specifying table ACLs allow all objects to be exported to viewers of the query without any additional filters supplied. This is the existing Deephaven behavior that makes it simple to share PQ work product with others.
  • If PersistentQuery.openSharingDefault is set to false, persistent queries that are shared without specifying table ACLs do not permit objects without an ACL applied to be exported to viewers. The owner of the persistent query must supply ACLs for each object that is to be exported.

Setting this property to false makes it less convenient to share queries, but reduces the risk of accidentally sharing data that the query writer did not intend. To enable this new behavior, you should update your iris-environment.prop property file.
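
For example, to require ACLs on all exported objects:

PersistentQuery.openSharingDefault=false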

Tailer configuration changes to isolate user actions

The tailer allocates resources for each connection to a Data Import Server for each destination (namespace, table name, internal partition, and column partition). System table characteristics are predictable and fairly consistent, and can be used to configure the tailer with appropriate memory.

User tables are controlled by system users, so their characteristics are subject to unpredictable variation. A user can cause the tailer to consume large amounts of resources, which can impact System data processing or crash the process.

This change adds more properties for configuration, and adds constraints on User table processing separate from System tables.

User table isolation

Resources for User table locations are taken from a new resource pool. The buffers are smaller by default, and the pool has a constrained size. This puts an upper limit on memory consumption when users flood the system with changed locations, which can happen with closeAndDeleteCentral or when backfilling data. The resources for this pool are pre-allocated at startup. The pool size should be large enough to handle expected concurrent user table writes.

  • DataContent.userPoolCapacity (default: 128) - The maximum number of user table locations that will be processed concurrently. If more locations are created at the same time, the processing will be serialized.
  • DataContent.producerBufferSize.user (default: 256 * 1024) - The size in bytes of the buffers used to read data for User table locations.
  • DataContent.disableUserPool (default: false) - If true, user table locations are processed using the same resources as system tables.

Tailer/DIS configuration options

The following properties configure the memory consumption of the Tailer and Data Import Server processes.

  • DataContent.producersUseDirectBuffers (default: true) - If true, the Tailer will use direct memory for its data buffers.
  • DataContent.consumersUseDirectBuffers (default: true) - Existing property. If true, the Data Import Server will use direct memory for its data buffers.
  • BinaryStoreMaxEntrySize (default: 1024 * 1024) - Existing property. Sets the maximum size in bytes for a single data row in a binary log file.
  • DataContent.producerBufferSize (default: 2 * BinaryStoreMaxEntrySize + 2 * Integer.BYTES) - The size in bytes of buffers the tailer will allocate.
  • DataContent.consumerBufferSize (default: 2 * producerBufferSize) - The size in bytes of buffers the Data Import Server will allocate. This must be large enough for a producer buffer plus a full binary row.

Revert to previous behavior

To disable the new behavior in the tailer, set the following property:

DataContent.disableUserPool = true

Added block flag to more dh_monit actions

The -b (--block) flag blocks scripting for the start, stop, and restart actions until the actions are completed. If any action other than start, stop, restart, up, or down is passed with the blocking flag, an error is generated. No other behaviors of the script have been changed.

The following options have been added:

/usr/illumon/latest/bin/dh_monit [ start | stop | restart ] [ process name | all ] [ -b | --block ]

These work as before:

/usr/illumon/latest/bin/dh_monit [ up | down ] [ -b | --block ]

Core Worker Notebook and Controller Groovy Script Imports

Users can now import Groovy scripts from their notebooks and from the controller Git integration in Community Core workers.

To qualify for such importing, Groovy scripts must:

  1. Belong to a package.
  2. Match their package name to their file location. For example, scripts belonging to package name com.example.compute must be found in com/example/compute.

If a script exists with the same name as a notebook and in the controller Git integration, the notebook is prioritized as it is easier for users to modify if needed.

Importing Notebook Groovy Scripts

Below is a Groovy script notebook at test/notebook/NotebookImport.groovy:

package test.notebook

return "Notebook"

String notebookMethod() {
  return "Notebook method"
}

static String notebookStaticMethod() {
    return "Notebook static method"
}

class NotebookClass {
    final String value = "Notebook class method"
    String getValue() {
        return value
    }
}

static String notebookStaticMethodUsingClass() {
  new NotebookClass().getValue()
}

Below is an example of importing and using the Groovy script from a user's notebook. Note that, per standard Groovy rules, you can run the script's top-level statements via main() or run(), or use its defined methods like a typical Java class:

import test.notebook.NotebookImport

NotebookImport.main()
println new NotebookImport().run()
println new NotebookImport().notebookMethod()
println NotebookImport.notebookStaticMethod()
println NotebookImport.notebookStaticMethodUsingClass()

You can also use these classes and methods within Deephaven formulas:

import test.notebook.NotebookImport

import io.deephaven.engine.context.ExecutionContext
import io.deephaven.engine.util.TableTools

ExecutionContext.getContext().getQueryLibrary().importClass(NotebookImport.class)
testTable = TableTools.emptyTable(1).updateView(
        "Test1 = new NotebookImport().run()",
        "Test2 = new NotebookImport().notebookMethod()",
        "Test3 = NotebookImport.notebookStaticMethod()",
        "Test4 = NotebookImport.notebookStaticMethodUsingClass()"
)

Importing Controller Git Integration Groovy Scripts

Importing scripts from the controller git integration works the same way, except that script package names don't necessarily need to match every directory. For example, if the following property is set:

iris.scripts.repo.<repo>.paths=module/groovy

Then the package name for the Groovy script at module/groovy/com/example/compute must be com.example.compute, not module.groovy.com.example.compute.

Logging System Tables from Core+

Core+ workers can now log Table objects to a System table.

Many options are available using the Builder class returned by:

import io.deephaven.enterprise.database.SystemTableLogger
opts = SystemTableLogger.newOptionsBuilder().currentDateColumnPartition(true).build()

The only required option is the column partition to write to. You may specify a fixed column partition or use the current date (the date at the time the row is written; the data is not introspected for a Timestamp). The default behavior is to write via the Log Aggregator Service, but you can also write via binary logs. No code generation or listener versioning is performed; you must write columns in the format that the listener expects. Complete options are available in the Javadoc.

After creating an Options structure, you can then log the current table:

SystemTableLogger.logTable(db, "Namespace", "Tablename", tableToLog, opts)

When logging incrementally, a Closeable is returned. You must retain this object to ensure liveness, and call close() to stop logging and release resources.

lh=SystemTableLogger.logTableIncremental(db, "Namespace", "Tablename", tableToLog, opts)
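For example, once all rows of interest have been logged:

// Stop logging and release resources when finished
lh.close()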

The Python version does not use an Options object; it takes named arguments instead. If you specify None for the column partition, the current date is used.

system_table_logger.log_table("Namespace", "Tablename", table_to_log, columnPartition=None)

Similarly, if you call log_table_incremental from Python, you must close the returned object (or use it as a context manager in a with statement).
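A minimal sketch of the context-manager form follows; the import location is an assumption based on the usage above, and the named arguments mirror the log_table example:

from deephaven_enterprise import system_table_logger

# close() is called automatically when the with block exits
with system_table_logger.log_table_incremental("Namespace", "Tablename", table_to_log, columnPartition=None):
    ...  # logging continues while this block is open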

Row-by-row logging is not yet supported in Core+ workers. Existing binary loggers cannot be executed in the context of a Core+ worker because they reference classes that are shadowed (renamed). If row-level logging is required, you must use io.deephaven.shadow.enterprise.com.illumon.iris.binarystore.BinaryStoreWriterV2 directly.

Only primitive types, Strings, and Instants are supported. Complex data types cannot yet be logged.

Restrict available WorkerKinds with ACL groups

Use the new configuration parameter WorkerKind.<worker kind>.allowedGroups to set the ACLs for individual WorkerKinds. Groups are separated by commas. For example,

WorkerKind.DeephavenEnterprise.allowedGroups=iris-superusers,a_group

restricts the users that can create Enterprise (Legacy) workers to members of iris-superusers and a_group.

If not configured, the default is allusers.

Core+ support for multiple partitioning columns

Deephaven Core+ workers now support reading tables stored in the Apache Hive layout. Hive is a multi-level partitioned format in which each directory level is a Key=Value pair.

For example:

| Market                          -- A Directory for the Namespace
| -- EquityTrade                  -- A directory for the Table
|  | -- Region=US                 -- A Partition directory for the Region `US`
|  |  | -- Class=Equities         -- A Partition directory for the Class `Equities`
|  |  |  | -- Symbol=UVXY         -- A Partition directory for the Symbol `UVXY`
|  |  |  |  | -- table.parquet    -- A Parquet file containing data
|  |  |  | -- Symbol=VXX          -- A Partition directory for the Symbol `VXX`
|  |  |  |  | -- table.size       -- A set of files for a Deephaven format table
|  |  |  |  | -- TradeSize.dat
|  |  |  |  | -- ...
|  | -- Region=Asia
|  |  | -- Class=Special
|  |  |  | -- Symbol=ABCD
|  |  |  |  | -- table.parquet
|  |  |  | -- Symbol=EFGH
|  |  |  |  | -- table.parquet

See the extended layouts documentation for more details on how to use this feature.

Core+ support for writing tables in Deephaven format

Deephaven Core+ workers now support writing tables in Deephaven format using the io.deephaven.enterprise.table.EnterpriseTableTools class in Groovy workers and the deephaven_enterprise.table_tools Python module.

For example, to read a table from disk:

// Groovy
import io.deephaven.enterprise.table.EnterpriseTableTools
t = EnterpriseTableTools.readTable("/path/to/the/table")

# Python
from deephaven_enterprise import table_tools
t = table_tools.read_table("/path/to/the/table")

And to write a table:

// Groovy
import io.deephaven.enterprise.table.EnterpriseTableTools
EnterpriseTableTools.writeTable(t, new File("/path/to/the/table"))

# Python
from deephaven_enterprise import table_tools
table_tools.write_table(table=my_table, path="/path/to/the/table")

See the Core+ documentation for more details on how to use this feature.

Core+ C++ client and derived clients support additional CURL options

When configuring a Session Manager with a URL for downloading a connection.json file, the C++ client and derived clients (like Python ticking or R) use libcurl to download the file from the supplied URL. SSL connections in this context can fail for multiple reasons, so it is customary to support options that adjust SSL behavior and/or enable verbose output to aid debugging. The clients now support the following environment variables:

  • CURL_CA_BUNDLE: like the variable of the same name for the curl(1) command line utility, this points to a file containing a CA certificate chain to use instead of the system default.
  • CURL_INSECURE: if set to any non-empty value, disables validation of the server certificate.
  • CURL_VERBOSE: if set to any non-empty value, enables debug output.
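For example, to run a ticking Python client against a server with a private CA and capture verbose diagnostics (the file path and script name here are illustrative):

CURL_CA_BUNDLE=/etc/ssl/certs/my-ca-chain.pem CURL_VERBOSE=1 python my_ticking_client.py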

New Worker Labels

The Deephaven Enterprise system supports two kinds of workers.

The first uses the legacy Enterprise engine that predates the release of Deephaven Community Core. These workers are now labeled "Legacy" in the Code Studio and Persistent Query "Engine" field. Previously, these workers were labeled "Enterprise".

The second kind uses the Deephaven Community Core engine with Enterprise extensions. These workers are now labeled "Core+" in the Code Studio and Persistent Query "Engine" field. Previously, these workers were labeled "Community".

Although these changes may create short-term confusion for current users, Deephaven believes they better represent the function of these workers and will quickly become familiar. Both Legacy and Core+ workers exist within the Deephaven Enterprise system. Core+ workers additionally include significant Enterprise functionality that is not found in the Deephaven Community Core product.

To avoid breaking user code, we have not yet changed any package or class names that include either "Community" or "DnD" (an older abbreviation which stood for "Deephaven Community in Deephaven Enterprise").

Logger overhead

The default Logger creates a fixed pool of buffers. Certain processes can operate with smaller pools.

The following properties can be used to override the default configuration of the standard process Logger. Every log message uses an entry from the entry pool, and at least one buffer from the buffer pool. Additional buffers are taken from the buffer pool as needed. Both pools will expand as needed, so the values below dictate the minimum memory that will be consumed.

| Property | Default | Description |
| IrisLogCreator.initialBufferSize | 1024 | The initial size of each data buffer. Buffers may be reallocated to larger sizes as required. |
| IrisLogCreator.bufferPoolCapacity | 1024 | The starting (and minimum) number of buffers in the buffer pool. |
| IrisLogCreator.entryPoolCapacity | 32768 | The initial (and minimum) size of the LogEntry pool. |
| IrisLogCreator.timeZone | America/New_York | The timezone used in binary log file names. |

The default value for IrisLogCreator.entryPoolCapacity has been reduced to 16384 for Tailer processes.

generate-iris-keys and generate-iris-rsa no longer overwrite output

The generate-iris-keys and generate-iris-rsa scripts use OpenSSL to generate public and private keys. If you have an existing key file, the scripts now exit with a failure and you must remove the existing file before regenerating the key.

Additional Kubernetes Worker Creation Parameters

The Query Dispatcher now supports changing more Kubernetes parameters when creating a worker, including:

  • Persistent Volume Claim will mount an existing claim in your worker pod if it exists and is not already mounted elsewhere. If no claim exists, a new PersistentVolumeClaim will be created. If using a storage class that allows for dynamic volume creation, the PersistentVolume will also be created. Note that creating a new claim is subject to a validation check and requires a configured validator that allows it. See the link below for more.
  • Storage Class is the storage class to be used for a new persistent volume claim. Acceptable values vary depending on your Kubernetes provider; the default is inserted into iris-endpoints.prop from the Helm chart's global.storageClass value. If using an existing claim, this has no effect.
  • Storage Size is the size of the volume to be requested when creating a new PersistentVolumeClaim, in bytes as documented here. If using an existing claim, this has no effect.
  • Mount Path denotes where in the pod the PersistentVolumeClaim will be mounted.

These are in addition to the existing parameters described in a previous release note introducing Kubernetes worker creation parameters and validators.

Kubernetes Helm Chart Changes

Some settings have changed or are now set explicitly in place of whatever default your Kubernetes platform provider supplied. For example, terminationGracePeriodSeconds is set to a default of 10 in the management-shell. To avoid possible errors, delete the management-shell pod prior to running the helm upgrade if you have an older version already running. The pod can be deleted with this command: kubectl -n <your-namespace> delete pod management-shell --grace-period 1.

Note that any files you may have copied or created locally on that pod will be removed. However, in the course of normal operations such files would not be present.

Controller client for Community Workers

This change reorganizes dependencies so that Community workers do not require the shadowed Controller and Console modules. It also provides an io.deephaven.enterprise.dnd.controller.PersistentQueryControllerClient interface and implementation for DnD workers to use. Both Enterprise and DnD implementations now use the same shared underlying gRPC implementation.

Relocated Classes

The com.illumon.iris.controller.HeaderPopupProvider class was moved to the Gui module as com.illumon.iris.gui.table.HeaderPopupProvider.

Dependency updates

Deephaven has updated several dependencies to more recent versions. If you are using these dependencies in your scripts or other code running in the worker, then your code may need updates.

| Dependency | Old | New |
| commons-codec | 1.15 | 1.16.0 |
| commons-compress | 1.21 | 1.24.0 |
| commons-io | 2.11.0 | 2.14.0 |
| Groovy | 3.0.17 | 3.0.19 |
| Jetty | 9.4.51.v20230217 | 9.4.53.v20231009 |
| jgit | 5.8.1.202007141445-r | 5.13.2.202306221912-r |
| org.apache.sling.commons.json | 2.0.20 | Removed |
| org.xerial.snappy:snappy-java | 1.1.8.4 | 1.1.10.5 |
| snakeyaml | 2.0 | 2.2 |

Shadowed dependencies, which generally should not be used directly, have also been updated. Of note, Jackson, which must sometimes be referenced as io.deephaven.shadow.jackson.com.fasterxml to interface with Deephaven ingestion classes, has been updated from 2.14.2 to 2.15.2.

Of particular note, Groovy 3.0.19 includes at least one bug fix that changes the behavior of scripts. The Java Language Specification does not permit inheriting static members from parent classes, but Groovy versions prior to 3.0.18 did. GROOVY-8164 makes the Groovy language consistent with the JLS, but existing scripts that depend on inheriting static members now fail at runtime.

Status Dashboard

A status dashboard process has been added to the Deephaven installation, providing data in a format that can be read by Prometheus. Full documentation is available in the System Administration section of the Deephaven documentation.

Updated Python Version

The default Python version is now 3.10, updated from 3.8. Python 3.8 and 3.9 are still supported, but Python 3.7 support has been dropped.

The Python 3.10 environment drops support for numpy.object, numpy.bool, and numpy.int. If you use these in your scripts, you must use the corresponding built-in Python object, bool, and int types instead.

Of particular note, Python 3.10 does not support OpenSSL 1.0; you must install OpenSSL 1.1 to build Python 3.10. SSL support is required for the wheel package. CentOS 7 has OpenSSL 1.1 packages, which may be installed, but their default directory layout is not suitable for Python 3.10. If the installer root prepare script must build Python 3.10 on CentOS 7, then /usr/illumon/openssl11 is created with symlinks to the openssl11-devel yum package.

Kafka Offset Column Name

The default Community name for storing offsets is KafkaOffset. The Core+ Kafka ingester previously assumed this name rather than using the name from the deephaven.offset.column.name consumer property.

If the default column names of KafkaOffset, KafkaPartition, and KafkaTimestamp are not in your Enterprise schema, the ingester ignores those columns. If you change the column names for timestamp, offset, or partition, you must also ensure that your schema contains a column of the correct type for each renamed column.

Bypassing user table lock files

When a worker tries to write or read a User table, it will first try to lock a file in /db/Users/Metadata to avoid potential concurrency issues. If filesystem permissions are set up incorrectly, or if the underlying filesystem does not support file locking, this can cause issues.

The following property can be set to disable the use of these lock files:

OnDiskDatabase.useTableLockFile=false

Worker-to-worker table resolution configuration

Worker-to-worker table resolution now uses the Deephaven cluster's trust store by default. In some environments, there may be an SSL-related exception when trying to resolve a table defined in one persistent query from another (see sharing tables for more). The property uri.resolver.trustall may be set to true globally in a Deephaven configuration file, or as a JVM argument in a Code Studio session (e.g., -Duri.resolver.trustall=true). This lets the query worker sourcing the table trust a certificate that would otherwise be untrusted.

Added Envoy properties to allow proper operation in IPv6 or very dynamic routing environments

The new properties envoy.DnsType and envoy.DnsFamily allow configuration of Envoy DNS behaviors for xds routes added by the Configuration server.

  • envoy.DnsType configures the value to be set in dynamically added xds routes for type. The default if this property is not set is LOGICAL_DNS. If there is a scenario where DNS should be checked on each connection to an endpoint, this can be changed to STRICT_DNS. Refer to Envoy documentation for more details about possible settings.

  • envoy.DnsFamily configures the value to be set in dynamically added xds routes for dns_lookup_family. The default if this property is not set is AUTO. In environments where IPv6 is enabled, the AUTO setting may cause Envoy to resolve IPv6 addresses for Deephaven service endpoints; since these service endpoints listen only on IPv4 stacks, Envoy will return a 404, or error 111 ("Connection refused"), from the IPv6 stack. Refer to Envoy documentation for more details about possible settings.

Since Deephaven endpoint services listen only on IPv4 addresses, and Envoy, by default, prefers IPv6 addresses, it may be necessary to modify the configuration in environments where IPv6 is enabled. To do this:

  1. Add envoy.DnsFamily=V4_ONLY to the iris-environment.prop properties file.

  2. Edit envoy3.yaml (or whichever configuration file Envoy is using) and add dns_lookup_family: V4_ONLY to the xds_service section:

    static_resources:
      clusters:
        - name: xds_service
          connect_timeout: 0.25s
          type: STRICT_DNS
          dns_lookup_family: V4_ONLY
    
  3. Import the new configuration, then restart the configuration server and the Envoy process for the changes to take effect.

Modified Bessel correction formula for weighted variance

The weighted variance computation formula has been changed to match that used in the Deephaven Community engine. We now use the standard formula for "reliability weights" instead of the previous "frequency weights" interpretation. This will affect statistics based on variance such as standard deviation.

Managing Community Worker Python Packages

A Deephaven Python worker executes in the context of a Python virtual environment (venv). This environment determines what packages are available to Python scripts. Packages that are important systemically or to multiple users should be added to the permanent virtual environment. With Community workers, the administrator may configure multiple worker kinds, each with a distinct virtual environment, so that users can switch environments with a simple drop-down menu. For legacy Enterprise workers, users must manually set properties to select different virtual environments.

For experimentation, it can be convenient to install a Python package only in the context of the current worker. Community Python workers now have a deephaven_enterprise.venv module, which can be used to query the current path of the virtual environment and to install packages into it via pip with the install method. On Kubernetes, the container images now permit dbquery and dbmerge to write to the default virtual environment of /usr/illumon/dnd/venv/latest, which has no persistent effect on the system.
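A minimal sketch follows; install is the method described above, though the exact argument shape shown here is an assumption based on its pip-backed description:

from deephaven_enterprise import venv

# Install an extra package into the current worker's virtual environment via pip
venv.install(["my-experimental-package"])  # hypothetical package name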

On a bare-Linux installation, /usr/illumon/dnd/venv/latest must not be writable by users, to ensure isolation between query workers. To allow users to install packages into a virtual environment, the administrator may configure a worker kind to create ephemeral environments on worker startup by setting the property WorkerKind.<name>.ephemeralVenv=true. This increases worker startup time, as it requires executing pip freeze and then pip install to create a clone of the original virtual environment. With an ephemeral virtual environment, the user can use deephaven_enterprise.venv.install to add packages to their worker. There is currently no interface for choosing ephemeral environments at runtime.

Kubernetes Image Customization

When building container images for Kubernetes, Deephaven uses a default set of requirements that provides a working environment. However, many installations require additional packages. To facilitate adding new packages to the default virtual environment, a customer_requirements.txt file can be added to the deephaven_python and db_query_worker_dnd subdirectories of the docker build. After installing the default packages into the worker's virtual environment, pip is called to install the packages listed in customer_requirements.txt. If these files do not exist, the Deephaven build script creates empty placeholder customer_requirements.txt files.
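For example, a customer_requirements.txt uses the standard pip requirements format; the package names below are illustrative:

pandas==2.0.3
scikit-learn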

Launcher with JDK Discontinued

The Windows launcher package that included a JDK has been discontinued. You must install a JDK on the client machine before running the Swing launcher. Note that a JRE is not sufficient to run the Swing console; you must use a JDK.

The launcher is now compiled with JDK 8, so even if you download it from a Deephaven instance running a later Java version, you may use it with Deephaven instances running older versions of Java. The JDK of the Swing client must still match that of the Deephaven server.

Make /db/Users mount writeable in Kubernetes

This change affects both the YAML for worker templates and the permissions on the underlying volume that is mounted as /db/Users in pods. If you are installing a new cluster, no action is necessary. However, if you have an existing cluster installed, run this command to change the permissions: kubectl exec management-shell -- /usr/bin/chmod -vR 775 /db/Users

Helm improvements

A number of items have been added to the Deephaven helm chart, which allow for the following features:

  • Configuration options to use an existing persistent volume claim in Deephaven, to allow for use of historical data stored elsewhere.
  • Configuration options to mount existing secrets into worker pods.
  • Configurable storageClass options to allow for easier deployment in various Kubernetes providers.

Required action when upgrading from an earlier release

  1. Define global.storageClass: If you have installed an earlier version of Deephaven on Kubernetes then your my-values.yaml file used for the upgrade (not the Deephaven chart's values.yaml) should be updated to include a global.storageClass value, e.g.:

    global:
       storageClass: "standard-rwo"    # Use a value suitable for your Kubernetes provider
    

    The value should be suitable for your Kubernetes provider; standard-rwo is a GKE-specific storage class used here as an example. To see storageClass values suitable for your cluster, consult your provider's documentation. You can view your cluster's configured storage classes by running kubectl get storageClasses.

  2. Delete the management-shell pod prior to running helm upgrade: Run kubectl delete pod management-shell to delete the pod. Note that any information stored on that pod will be removed, though in the normal course of operations none would be present. This pod mounts the shared volumes used elsewhere in the cluster, so changes to the storageClass values might result in an error similar to the following if the pod is not deleted before the upgrade is performed:

    $ helm upgrade my-deephaven-release-name ./deephaven/ -f ./my-values.yaml --set image.tag=1.20230511.248 --debug

    Error: UPGRADE FAILED: cannot patch "aclwriter-binlogs" with kind PersistentVolumeClaim: PersistentVolumeClaim "aclwriter-binlogs"
    is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
      core.PersistentVolumeClaimSpec{
        ... // 2 identical fields
        Resources:        {Requests: {s"storage": {i: {...}, s: "2Gi", Format: "BinarySI"}}},
        VolumeName:       "pvc-80a518f6-1a24-4c27-93b5-c7e9bd25d824",
    -   StorageClassName: &"standard-rwo",
    +   StorageClassName: &"default",
        VolumeMode:       &"Filesystem",
        DataSource:       nil,
        DataSourceRef:    nil,
    }
    

Ingesting Kafka Data from DnD

The Deephaven Community Kafka ingestion framework provides several advantages over the existing Enterprise framework. Notably:

  • The Community Kafka ingester can read Kafka streams into memory and store them to disk.
  • Key and Value specifications are disjoint, which is an improvement over the io.deephaven.kafka.ingest.ConsumerRecordToTableWriterAdapter pattern found in Enterprise.
  • The Community KafkaIngester uses chunks for improved efficiency compared to row-oriented Enterprise adapters.

You can now use the Community Kafka ingester together with an in-worker ingestion server in a DnD worker. As with the existing Enterprise Kafka ingestion, you must create a schema and a data import server within your data routing configuration. After creating the schema and DIS configuration, create an ingestion script using a Community worker.

You must create a KafkaConsumer Properties object. Persistent ingestion requires that auto-commit be disabled in order to ensure exactly-once delivery. The next step is to create an Options builder object for the ingestion and pass it to the KafkaTableWriter.consumeToDis function. You can retrieve the table in the same query, or from any other query, according to your data routing configuration.

import io.deephaven.kafka.KafkaTools
import io.deephaven.enterprise.kafkawriter.KafkaTableWriter

final Properties props = new Properties()
props.put('bootstrap.servers', 'http://kafka-broker:9092')
props.put('schema.registry.url', 'http://kafka-broker:8081')
props.put("fetch.min.bytes", "65000")
props.put("fetch.max.wait.ms", "200")
props.put("deephaven.key.column.name", "Key")
props.put("deephaven.key.column.type", "long")
props.put("enable.auto.commit", "false")
props.put("group.id", "dis1")

final KafkaTableWriter.Options opts = new io.deephaven.enterprise.kafkawriter.KafkaTableWriter.Options()
opts.disName("KafkaCommunity")
opts.tableName("Table").namespace("Namespace").partitionValue(today())
opts.topic("demo-topic")
opts.kafkaProperties(props)
opts.keySpec(io.deephaven.kafka.KafkaTools.FROM_PROPERTIES)
opts.valueSpec(io.deephaven.kafka.KafkaTools.Consume.avroSpec("demo-value"))

KafkaTableWriter.consumeToDis(opts)

ingestedTable=db.liveTable("Namespace", "Table").where("Date=today()")

Customers can now provide their own JARs to Community in Enterprise (i.e. DnD) workers

Customers can now provide their own JARs in three locations that DnD workers can load from:

  1. Arbitrary locations specified by the "Extra Classpaths" field from e.g. a console or Persistent Query configuration
  2. A user-created location specific to a DnD Worker Kind configuration, specified by the WorkerKind.<Name>.customLib property (see the example after this list)
  3. A default directory found in every DnD installation, e.g. /usr/illumon/dnd/latest/custom_lib/
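For example, a worker kind's custom library location (item 2 above) might be configured with a property like the following; the directory path shown is illustrative:

WorkerKind.DeephavenCommunity.customLib=/etc/sysconfig/illumon.d/dnd_custom_lib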

Data routing file checks for duplicate keys

The data routing file is a YAML file. YAML syntax includes name:value maps which, like most maps, cannot contain duplicate keys. Data routing file validation now raises an error when duplicate map keys are detected. Previously, a duplicate key silently replaced the earlier value in the map.
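For example, a map like the following, where the second value previously won silently, now fails validation; the keys and values here are illustrative:

someService:
  host: host-a
  host: host-b   # duplicate key - now a validation error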

Reading Hierarchical Parquet Data

Deephaven Community workers can now read more complex Parquet formats through the db.historical_table method (or db.historicalTable from Groovy). Three new types of Parquet layouts are supported:

  1. metadata: A hierarchical structure where a root table_metadata.parquet file contains the metadata and paths for each partition of the table.
  2. kv: A hierarchical directory with key=value pairs for partitioning columns.
  3. flat: A directory containing one or more Parquet files that are combined into a single table.

To read a Parquet table with historical_table, you must first create a schema that matches the underlying Parquet data. The Table element must have storageType="Extended" and an ExtendedStorage child element that specifies a type. The valid type values are parquet:metadata, parquet:kv, and parquet:flat, corresponding to the supported layouts.
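A minimal sketch of such a schema follows; the storageType attribute and ExtendedStorage element are as described above, while the remaining attribute spellings and the elided column definitions are assumptions:

<Table name="Commodities" namespace="PQTest" storageType="Extended">
  <ExtendedStorage type="parquet:kv" />
  <!-- Column definitions matching the Parquet data go here -->
</Table>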

Legacy workers cannot read advanced Parquet layouts. If you call db.t with a table that defines Extended storage, an exception is raised:

com.illumon.iris.db.exceptions.ScriptEvaluationException: Error encountered at line 1: t=db.t("NAMESPACE", "TABLENAME")
...
caused by:
java.lang.UnsupportedOperationException: Tables with storage type Extended are only supported by Community workers.

Extended storage tables may have more than one partitioning column. The data import server can only ingest tables with a single partitioning column of type String. Attempts to tail binary files for tables that do not meet these criteria raise an exception:

java.lang.RuntimeException: Could not create table listener
...
Caused by: com.illumon.iris.db.schema.SchemaValidationException: Tailing of schemas with multiple partitioning columns is not supported.

java.lang.RuntimeException: Could not create table listener
...
Caused by: com.illumon.iris.db.schema.SchemaValidationException: Tailing of schemas with a non-String partitioning column is not supported.

Discovering a Schema from an Existing Parquet Layout

You can read the Parquet directory using the standard Community readTable function and create an Enterprise schema and table definition as follows:

import static io.deephaven.parquet.table.ParquetTools.readTable
import io.deephaven.enterprise.compatibility.TableDefinitionCompatibility
import static io.deephaven.shadow.enterprise.com.illumon.iris.db.tables.TableDefinition.STORAGETYPE_EXTENDED

// Read the Parquet data and convert its definition to an Enterprise table definition
result = readTable("/db/Systems/PQTest/Extended/commodities")
edef = TableDefinitionCompatibility.convertToEnterprise(result.getDefinition())
edef.setName("Commodities")
edef.setNamespace("PQTest")
edef.setStorageType(STORAGETYPE_EXTENDED)
ss = io.deephaven.shadow.enterprise.com.illumon.iris.db.schema.SchemaServiceFactory.getDefault()
ss.authenticate()
schema = io.deephaven.shadow.enterprise.com.illumon.iris.db.schema.xml.SchemaXmlFactory.getXmlSchema(edef, io.deephaven.shadow.enterprise.com.illumon.iris.db.schema.NamespaceSet.SYSTEM)
// If this is a new namespace
ss.createNamespace(io.deephaven.shadow.enterprise.com.illumon.iris.db.schema.NamespaceSet.SYSTEM, "PQTest")

// Insert the ExtendedStorage type
schema.setExtendedStorageType("parquet:kv")
ss.addSchema(schema)

Read the table with:

db.historicalTable("PQTest", "Commodities")

Java Exception Logging

Deephaven logs now use the standard Java format for Exception stack traces, which includes suppressed exceptions and collapses repetitive stack trace elements, among other improvements.

ACLs for DbInternal CommunityIndex tables

Preexisting installs must manually add new ACLs for the new DbInternal tables.

First, create a text file (e.g. /tmp/new-acls.txt) with the following contents:

-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("EffectiveUser"), new UsernameFilterGenerator("AuthenticatedUser"))' -group allusers -namespace DbInternal -table ServerStateLogCommunityIndex -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))' -group allusers -namespace DbInternal -table UpdatePerformanceLogCommunityIndex -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))' -group allusers -namespace DbInternal -table QueryOperationPerformanceLogCommunityIndex -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))' -group allusers -namespace DbInternal -table QueryPerformanceLogCommunityIndex -overwrite_existing
exit

Then, run the following to add the new ACLs into the system:

sudo -u irisadmin /usr/illumon/latest/bin/iris iris_db_user_mod --file /tmp/new-acls.txt

Alternatively, the ACLs can be added manually one by one in the Swing ACL Editor:

allusers | DbInternal | ServerStateLogCommunityIndex | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("EffectiveUser"), new UsernameFilterGenerator("AuthenticatedUser"))
allusers | DbInternal | UpdatePerformanceLogCommunityIndex | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))
allusers | DbInternal | QueryOperationPerformanceLogCommunityIndex | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))
allusers | DbInternal | QueryPerformanceLogCommunityIndex | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))

Seamless integration of Community panels in Deephaven Enterprise

Deephaven Enterprise now supports opening plots and tables from Community queries via the Panels menu. Community panels can be linked and filtered the same way as Enterprise panels.

Allow removal of "Help / Contact Support ..." via property

A new property, IrisConsole.contactSupportEnabled, has been added, which may be used to remove the "Help / Contact Support ..." button from the Swing front-end.

By default, this property is set to true to preserve current behavior. Setting it to false removes the menu option.
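For example, in a properties file:

IrisConsole.contactSupportEnabled=false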

db available via import in Community Python workers

In Community Python workers, the Database object db can now be imported into user scripts and modules directly using import statements, for example:

from deephaven_enterprise.database import db

my_table = db.live_table(namespace="MyNamespace", table_name="MyTable").where("Date=today()")

The db object is still available as a global variable for Consoles and Persistent Query scripts.

OperationUser columns added to DnD DbInternal tables

The internal performance tables for Community workers now have OperationAuthenticatedUser and OperationEffectiveUser columns. This updates the schemas for QueryPerformanceLogCommunity, QueryOperationPerformanceLogCommunity, and UpdatePerformanceLogCommunity. The operation user reflects the user that initiated an operation over the network, which is especially important for analyzing the performance of shared persistent queries. For example, filtering, sorting, or rolling up a table can require significant server resources.

No manual changes are needed. The Deephaven installer deploys the new DbInternal schemas, and the new data is ingested into separate internal partitions.

ProcessMetrics logging is now disabled by default

ProcessMetrics logging is now disabled by default in both Enterprise (DHE) and Community in Enterprise (DnD). To enable ProcessMetrics logging, set IrisLogDefaults.writeDatabaseProcessMetrics to true. If desired, you can control DnD ProcessMetrics logging separately from DHE via statsLoggingEnabled.
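For example, to re-enable ProcessMetrics logging in a properties file (statsLoggingEnabled is the DnD-side control named above):

IrisLogDefaults.writeDatabaseProcessMetrics=true
statsLoggingEnabled=true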

Kafka Version Upgrade

We have upgraded our Kafka integration from version 2.4 to version 3.4.

Confluent Breaking Changes

Confluent components must be upgraded to version 7.4 to be compatible with Kafka 3.4. See https://docs.confluent.io/platform/current/installation/versions-interoperability.html

Clients using Avro or POJO for in-worker DISes must switch to the 7.4 versions of the required jars, as specified here: https://deephaven.io/enterprise/docs/importing-data/advanced/streaming/kafka/#generic-record-adapter

The following dependencies are now included in the Deephaven installation:

jackson-core-2.10.0.jar
jackson-databind-2.10.0.jar
jackson-annotations-2.10.0.jar

Users should remove these from their classpath (typically /etc/sysconfig/illumon.d/java_lib) to avoid conflicts with the included jars.
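For example, to check for stale copies in the typical location named above:

ls /etc/sysconfig/illumon.d/java_lib | grep -i jackson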

Controller Tool "Status" Option

The new --status subcommand for the persistent query controller tool generates a report on standard output with details of the selected persistent queries.

With --verbose, more details are included. If a query has a failure recorded and only one query is selected, the stack trace is printed after the regular report. Use the --serial option to directly select a specific query.

With --jsonOutput, a JSON block detailing the selected query states is emitted instead of the formatted report. Use --jsonFile to specify an output location other than standard output.
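An illustrative invocation follows; the controller_tool script path is an assumption, while the options are those described above:

/usr/illumon/latest/bin/controller_tool --status --serial 1234567890 --verbose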

Possible breaking changes were introduced with this feature:

  • Previously (before Silverheels), the flag options --continueAfterError, --includeTemporary, and --includeNonDisplayable required but ignored a parameter. For example, --includeTemporary=false and --continueAfterError=never were both accepted as "true" conditions. In Silverheels, the argument is still required, but only true and 1 are accepted as true, false and 0 are accepted as false, and anything else is treated as a command line error.
  • Details of informational log entries generated by command_tool have changed. Important functionality had previously been deferred until after the starting/finished log entries for the corresponding items had been emitted. Those actions are now bracketed by the log marker entries, to better inform troubleshooting.
  • A warning message is emitted to the console when no queries are processed due to selection (filtering) criteria. An informational console message summarizing the filter actions has also been added.

Flight can now resolve Live, Historical and Catalog tables from the database

DnD workers now support retrieving live, historical and catalog tables through Arrow Flight. DnD's Python client has been updated with DndSession.live_table(), DndSession.historical_table() and DndSession.catalog_table() to support this.

For example, to fetch the static FeedOS.EquityQuoteL1 table:

from deephaven_enterprise.client.session_manager import SessionManager

connection_info = "https://my-deephaven-host.com:8000/iris/connection.json"
session_mgr: SessionManager = SessionManager(connection_info)
session_mgr.password("iris","iris")

session = session_mgr.connect_to_persistent_query("CommunityQuery")
Quotes = session.historical_table("FeedOS", "EquityQuoteL1").where("Date=`2023-06-15`")

Flight ticket structure

Database flight tickets start with the prefix d, followed by a path consisting of three parts: the first part selects the type, the second is the namespace, and the third is the table name. Available types are catalog for the catalog table, live for live tables, and hist for historical tables.

For example, d/live/Market/EquityQuote fetches the live Market.EquityQuote table. Note that the catalog type does not use a namespace or table name; d/catalog fetches the catalog table.

Reduce default max table display size

The maximum number of rows that may be displayed in the Swing front-end before the red "warning bar" appears is now configurable. The new default maximum is 67,108,864 (64 x 1024 x 1024); technical limitations cause rows beyond this limit to not update properly. When necessary, the Web UI is capable of displaying much larger tables than Swing.

The previous default maximum may be restored with the following property:

DBTableModel.defaultMaxRows=100000000

Note that the property-defined maximum may be programmatically reduced based on technical limits.

Improved Metadata Indexer tool

The Metadata Indexer tool has been improved: it can now validate and list table metadata indexes on disk. The tool is invoked using the dhctl script with the metadata command.

Deephaven now supports subplotting in the Web UI

Users can now view multiple charts subplotted in one figure in the Web UI. Create subplots using the newChart, colSpan, and rowSpan functions available on a Figure. Details are available in the Plotting Cheat Sheet.

Example Groovy code for subplots:

tt = timeTable("00:00:00.01").update("X=0.01*ii", "Y=ii*ii", "S=sin(X)", "C=cos(X)", "T=tan(X)").tail(1000)

// Figure with single plot
f1 = figure().plot("Y", tt, "X", "Y").show()

// Figure with two plots, one on top of the other
f2 = figure(2, 1)
    .newChart(0,0).plot("S", tt, "X", "S")
    .newChart(1,0).plot("C", tt, "X", "C")
    .show()

// Figure with 3 plots, one that takes up the full width and then two smaller ones
f3_c = figure(2, 2)
    .newChart(0,0).plot("T", tt, "X", "T").colSpan(2)
    .newChart(1,0).plot("S", tt, "X", "S")
    .newChart(1,1).plot("C", tt, "X", "C")
    .show()

// Figure with 3 plots, one that takes up the full height and then two smaller ones
f3_r = figure(2, 2)
    .newChart(0,0).plot("T", tt, "X", "T")
    .newChart(1,0).plot("S", tt, "X", "S")
    .newChart(0,1).plot("C", tt, "X", "C").rowSpan(2)
    .show()
    
// Figure with 4 plots arranged in a grid
f4 = figure(2, 2)
    .newChart(0,0).plot("Y", tt, "X", "Y")
    .newChart(1,0).plot("S", tt, "X", "S")
    .newChart(0,1).plot("C", tt, "X", "C")
    .newChart(1,1).plot("T", tt, "X", "T")
    .show()

// Re-ordered operations from f4, should appear the same though
f5 = figure(2, 2)
    .newChart(1,1).plot("T", tt, "X", "T")
    .newChart(0,1).plot("C", tt, "X", "C")
    .newChart(1,0).plot("S", tt, "X", "S")
    .newChart(0,0).plot("Y", tt, "X", "Y")
    .show()

Improved validation of data routing configuration can cause errors in existing configurations

This Deephaven release includes new data routing features and additional validation checks to detect possible configuration errors. Because of the additional validation, an existing data routing configuration that was previously valid may now be rejected and cause parsing errors when the configuration server reads it.

If this occurs, the data routing configuration must be corrected using the dhconfig tool in --etcd mode, which bypasses the configuration server (the server fails to start when the routing configuration is invalid).

Export the configuration:

sudo -u irisadmin /usr/illumon/latest/bin/dhconfig routing export --file /tmp/routing.yml --etcd

Edit the exported file to correct errors, and import it:

sudo -u irisadmin /usr/illumon/latest/bin/dhconfig routing import --file /tmp/routing.yml --etcd

Additional details: when the data routing configuration is incorrect, the configuration_server process fails with an error like this:

Initiating shutdown due to: Uncaught exception in thread ConfigurationServer.main io.deephaven.UncheckedDeephavenException: java.util.concurrent.ExecutionException: com.illumon.iris.db.v2.routing.DataRoutingConfigurationException:

In the rare case where this happens on a previous version of Deephaven, or if the solution above does not work, the following direct commands can be used to correct the situation:

Export:

sudo DH_ETCD_DIR=/etc/sysconfig/illumon.d/etcd/client/datarouting-rw /usr/illumon/latest/bin/etcdctl.sh get /main/config/routing-file/file > /tmp/r.yml

Import:

sudo DH_ETCD_DIR=/etc/sysconfig/illumon.d/etcd/client/datarouting-rw /usr/illumon/latest/bin/etcdctl.sh put /main/config/routing-file/file </tmp/r.yml

Python Integral Widening

In the 1.20211129 release, the jpy module that Deephaven's Python integration depends on converted all Python integral results into a Java integer. This resulted in truncated results when values exceeded Integer.MAX_VALUE. In 1.20221001, Deephaven uses an updated jpy integration that returns values in the narrowest possible type, so results that previously were an integer could be returned as a byte or a short. Moreover, a formula may have different types for each row. This prevented casting the result into a primitive type, as boxed objects may not be cast to another primitive.

In 1.20221001.196, Python calls in a formula now widen Byte and Short results to an Integer. If the returned value exceeds Integer.MAX_VALUE, then the result is a Long. Existing formulas that would not have been truncated by conversion to an int in 1.20211129 behave as they would have in that release.

As casting from an arbitrary integral type to a primitive may be required, we have introduced a utility class com.illumon.iris.db.util.NumericCast that provides objectToByte, objectToShort, objectToInt, and objectToLong methods that will convert any Byte, Short, Integer, Long, or BigInteger into the specified type. If an overflow would occur, an exception is thrown.
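For example, a minimal Groovy sketch of the casting helpers described above:

import com.illumon.iris.db.util.NumericCast

Object boxed = 42L                            // could be Byte, Short, Integer, Long, or BigInteger
int asInt = NumericCast.objectToInt(boxed)    // 42; throws if the value would overflow an int
long asLong = NumericCast.objectToLong(boxed) // 42L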

Numba formulas (those wrapped in the nb function) retain the narrowing behavior of prior versions of 1.20221001.

Changed to use DHC Fast CSV parser for readCsv

TableTools.readCsv calls now use the new DHC high-performance CSV parser, which uses a column-oriented approach to parse CSV files.

The change to the DHC parser includes the following visible enhancements:

  1. Any column that is populated only with integers (optionally surrounded by whitespace) is identified as an integer column. The previous parser would identify such a column as a double.

  2. Only 7-bit ASCII characters are supported as delimiters. This means characters such as € (the euro symbol) are not valid. In these cases, an error like the following is thrown: delimiter is set to '€' but is required to be 7-bit ASCII.

  3. Columns populated wholly with single characters are identified as Character columns instead of String columns.

  4. Additional date-time formats are automatically converted to DBDateTime columns. Previously, these formats were imported as String columns. All other date-time behavior remains unchanged.


    | Format | Displayed Value in 1.20211129 | Data Type in 1.20211129 | Displayed Value in 1.20221001 | Data Type in 1.20221001 |
    | DateTimeISO_UTC_1 | 2017-08-30 11:59:59.000Z | java.lang.String | 2017-08-30T07:59:59.000000000 NY | com.illumon.iris.db.tables.utils.DBDateTime |
    | DateTimeISO_UTC_2 | 2017-08-30T11:59:59.000Z | java.lang.String | 2017-08-30T07:59:59.000000000 NY | com.illumon.iris.db.tables.utils.DBDateTime |
    | DateTimeISO_MillisOffset_2 | 2017-08-30T11:59:59.000-04 | java.lang.String | 2017-08-30T11:59:59.000000000 NY | com.illumon.iris.db.tables.utils.DBDateTime |
    | DateTimeISO_MicrosOffset_2 | 2017-08-30T11:59:59.000000-04 | java.lang.String | 2017-08-30T11:59:59.000000000 NY | com.illumon.iris.db.tables.utils.DBDateTime |

To use the legacy CSV parser, set the configuration property com.illumon.iris.db.tables.utils.CsvHelpers.useLegacyCsv to true.

Support Barrage subscriptions between DnD workers

DnD workers can now subscribe to tables in other DnD workers using Barrage.

This can be done using ResolveTools and a new URI scheme: pq://<Query Identifier>/scope/<Table name>[?snapshot=true]. The Query Identifier can be either the query name or the query serial. The Table name is the name of the table in the server query's scope. The optional snapshot=true parameter indicates that a snapshot should be fetched instead of a live subscription.

// Groovy
import io.deephaven.uri.ResolveTools
TickingTable = ResolveTools.resolve("pq://CommunityQuery/scope/TickingTable?snapshot=true")

# Python
from deephaven_enterprise.uri import resolve
TickingTable = resolve("pq://CommunityQuery/scope/TickingTable?snapshot=true")

Improvements to command line scripts

Deephaven provides many maintenance and utility scripts in /usr/illumon/latest/bin. This release changes many of these tools to handle configuration files, Java path and classpath settings, error handling, and logging more consistently.

Classpaths now include customer plugins and custom jars. This is important for features that can include custom data types, including table definitions and schemas.

For the tools included in this update, there is now a consistent way to handle invalid configuration and other unforeseen errors.

Override the configuration (properties) file

If the default properties file is invalid for some reason, override it by setting DHCONFIG_ROOTFILE. For example:

DHCONFIG_ROOTFILE=iris-defaults.prop /usr/illumon/latest/bin/dhconfig properties list

Add custom JVM arguments

Add Java arguments to be passed to the Java program invoked by these scripts by setting EXTRA_JAVA_ARGS. For example:

EXTRA_JAVA_ARGS="-DConfiguration.rootFile=foo.prop" /usr/illumon/latest/bin/dhconfig properties list

Scripts included in this update

The following scripts have been updated:

  • crcat
  • data_routing
  • defcat
  • delete_schema
  • dhconfig
  • dhctl
  • export_schema
  • iriscat
  • iristail
  • migrate_acls
  • migrate_controller_cache
  • validate_routing_yml

Code Studio Engine Display Order

When selecting the engine (Enterprise or Community) in a Code Studio, existing Deephaven installations show the Enterprise engine first for backwards compatibility; new installations show the Community engine first. This is controlled by a display order property defined for each worker kind. Lower values are displayed first in the Code Studio drop-down.

By default, the Enterprise engine has a display order of 100 and the Community engine has a display order of 200. For a new installation, the iris-environment.prop file sets the priority of the Community engine to 50 as follows:

WorkerKind.DeephavenCommunity.displayOrder=50

You may adjust the ordering by changing each worker kind's display order property as desired.

etcd ownership

In previous releases, if the Deephaven installer installed etcd, the etcd and etcdctl executables in /usr/bin were created with the ownership of the user who ran the installation. They should be owned by root. Check the current ownership with:

ls -l /usr/bin/etcd*

If the ownership is not root, correct it with:

sudo chown root:root /usr/bin/etcd*

ACLs for DbInternal Index and Community tables

Preexisting installs must manually add new ACLs for the new DbInternal tables.

First, create a text file (e.g. /tmp/new-acls.txt) with the following contents:

-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("EffectiveUser"), new UsernameFilterGenerator("AuthenticatedUser"))' -group allusers -namespace DbInternal -table ProcessEventLogIndex -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("EffectiveUser"), new UsernameFilterGenerator("AuthenticatedUser"))' -group allusers -namespace DbInternal -table ProcessTelemetryIndex -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))' -group allusers -namespace DbInternal -table UpdatePerformanceLogIndex -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))' -group allusers -namespace DbInternal -table QueryOperationPerformanceLogIndex -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))' -group allusers -namespace DbInternal -table QueryPerformanceLogIndex -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("EffectiveUser"), new UsernameFilterGenerator("AuthenticatedUser"))' -group allusers -namespace DbInternal -table ProcessInfoLogCommunity -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("EffectiveUser"), new UsernameFilterGenerator("AuthenticatedUser"))' -group allusers -namespace DbInternal -table ProcessMetricsLogCommunity -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("EffectiveUser"), new UsernameFilterGenerator("AuthenticatedUser"))' -group allusers -namespace DbInternal -table ServerStateLogCommunity -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))' -group allusers -namespace DbInternal -table UpdatePerformanceLogCommunity -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))' -group allusers -namespace DbInternal -table QueryOperationPerformanceLogCommunity -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))' -group allusers -namespace DbInternal -table QueryPerformanceLogCommunity -overwrite_existing
exit

Then, run the following to add the new ACLs into the system:

sudo -u irisadmin /usr/illumon/latest/bin/iris iris_db_user_mod --file /tmp/new-acls.txt

Alternatively, the ACLs can be added manually one by one in the Swing ACL Editor:

allusers | DbInternal | ProcessEventLogIndex | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("EffectiveUser"), new UsernameFilterGenerator("AuthenticatedUser"))
allusers | DbInternal | ProcessTelemetryIndex | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("EffectiveUser"), new UsernameFilterGenerator("AuthenticatedUser"))
allusers | DbInternal | UpdatePerformanceLogIndex | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))
allusers | DbInternal | QueryOperationPerformanceLogIndex | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))
allusers | DbInternal | QueryPerformanceLogIndex | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))
allusers | DbInternal | ProcessInfoLogCommunity | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("EffectiveUser"), new UsernameFilterGenerator("AuthenticatedUser"))
allusers | DbInternal | ProcessMetricsLogCommunity | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("EffectiveUser"), new UsernameFilterGenerator("AuthenticatedUser"))
allusers | DbInternal | ServerStateLogCommunity | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("EffectiveUser"), new UsernameFilterGenerator("AuthenticatedUser"))
allusers | DbInternal | UpdatePerformanceLogCommunity | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))
allusers | DbInternal | QueryOperationPerformanceLogCommunity | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))
allusers | DbInternal | QueryPerformanceLogCommunity | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))

DnD Now supports Edge ACLs

Query writers can now specify ACLs on derived tables. These ACLs are applied when tables or plots are fetched by a client, based upon the client's groups.

Edge ACLs are created using the EdgeAclProvider class in the io.deephaven.enterprise.acl package. Additionally, the io.deephaven.enterprise.acl.AclFilterGenerator interface contains some helpful factory methods for commonly used ACL types.

The following example assumes that a table named TickingTable has already been created. Edge ACLs are created using a builder that provides a few simple methods for building up ACL sets. Once build() is called, you have an ACL object that can be used to transform one or more tables using the applyTo() method. Note that you must reassign the scope variable with the result of the application, since Tables are immutable.

// Groovy
import io.deephaven.enterprise.acl.EdgeAclProvider
import io.deephaven.enterprise.acl.AclFilterGenerator

def ACL = EdgeAclProvider.builder()
        .rowAcl("NYSE", AclFilterGenerator.where("Exchange in `NYSE`"))
        .columnAcl("LimitPrice", "*", AclFilterGenerator.fullAccess())
        .columnAcl("LimitPrice", ["Price", "TradeVal"], AclFilterGenerator.group("USym"))
        .build()

TickingTable = ACL.applyTo(TickingTable)

# Python
from deephaven_enterprise.edge_acl import EdgeAclProvider
import deephaven_enterprise.acl_generator as acl_generator

ACL = EdgeAclProvider.builder() \
    .row_acl("NYSE", acl_generator.where("Exchange in `NYSE`")) \
    .column_acl("LimitPrice", "*", acl_generator.full_access()) \
    .column_acl("LimitPrice", ["Price", "TradeVal"], acl_generator.group("USym")) \
    .build()

TickingTable = ACL.apply_to(TickingTable)

See the DnD documentation for details on the AclFilterGenerator and EdgeAclProvider interfaces.

Remote R Groovy Sessions

The idb.init method now has an optional remote parameter. When set to TRUE, Groovy script code is not executed locally but rather in a remote Groovy session, as is done in the Swing console or Web Code Studio. This eliminates a class of serialization problems that could otherwise occur when a local Groovy session serializes classes to the remote server. To use the old local Groovy session, pass the remote parameter as follows:

idb.init(devroot=devroot, workspace, propfile, keyfile=keyfile, jvmArgs=jvmLocalArgs, remote=FALSE)

Additionally, you may now call idb.close() to terminate the remote worker and release the associated server resources.