Detailed Version Log: Deephaven 1.20231218
Note
For information on changes to Deephaven Community, see the GitHub release page.
Certified versions
Certified Version | Notes |
---|---|
1.20231218.534 | |
1.20231218.532 | |
1.20231218.528 | |
1.20231218.523 | |
1.20231218.491 | |
1.20231218.478 | Certification on the following tickets is incomplete: DH-17886, DH-17885, DH-17873, DH-17835 and DH-17824. |
1.20231218.446 | |
1.20231218.432 | The following caveat pertains to this release: DH-17557 (pre-built Kubernetes images) is not included in this certification. |
1.20231218.385 | |
1.20231218.345 | The following caveat pertains to this release: |
1.20231218.289 | |
1.20231218.260 | The following caveats pertain to this release: |
1.20231218.219 | Note that the deephaven_enterprise.notebook module in Core+ is not currently working. It is expected to be fixed in version .229. |
1.20231218.202 | The following caveats pertain to this release: |
1.20231218.153 | The following caveats pertain to this release: |
1.20231218.115 | |
Detailed Version Log: Deephaven v1.20231218
Patch | Details |
---|---|
534 | Merge updates from 1.20221001.417 |
533 | DH-19692: PQ backed dashboards shared with multiple users not always visible |
532 | DH-19360: remove unused property |
531 | DH-19360: convert LAS audit logging to use current features |
530 | Merge updates from 1.20221001.415 |
529 | DH-19304: Release notes fixes |
528 | Merge updates from 1.20221001.408 |
527 | DH-19179: Fix NPE in Kafka ingestion with transformation |
526 | DH-19135: Disable nightly podman tests (for vplus only) |
525 | DH-18984: Connected Legacy Code Studios cause an unexpectedly large support log file |
524 | DH-19122: Release note documentation fixes |
523 | Merge updates from 1.20221001.402 |
522 | DH-17824: Fix docs for restart |
521 | DH-18954: updateBy ArrayIndexOutOfBoundsException |
520 | DH-18967: Do not use local -n in locally-run installer scripts |
519 | DH-17419: Show dashboard modifications with deephaven.ui changes DH-17418: Fix dashboard major/minor notifications |
518 | DH-18442: Fix Export Logs Fails with Large Number of Queries |
517 | DH-18830: Update internal VM images to version 7 |
516 | DH-18645: Fix XSS issue in file list drag and drop |
515 | Update UI packages to v0.78.9 |
514 | DH-18101: Adding keepalive seconds for win boxes |
513 | DH-18125: close the LAS logger in DatabaseImpl.appendLiveTable |
512 | DH-18708: Change gRPC logging exclusion list separator from semicolon to comma |
511 | DH-18701: Update web packages to v0.78.8 DH-18645: Fix panel titles using html instead of just text DH-18346: Fix partial holiday range breaks DH-16016: Fix xBusinessTime throwing errors |
510 | DH-18176: Suppress scary "non-fatal" warnings; only upload missing files DH-15878: Automatically upload etcd*tar.gz files to remote machines |
509 | DH-18422: Update generation of iris-endpoints.prop for Podman so Web ACL Editor will work correctly |
508 | Merge updates from 1.20221001.398 |
507 | DH-18468: Wire up kubernetes flag to jenkins for eggplant |
506 | DH-18519: Allow adding GrpcLogging exclusions via properties and env vars |
505 | DH-18510: Ensure the exclusions list class names for gRPC logging match inner classes as well |
504 | DH-18153: Fix bad substitution in installer script error handling function |
503 | DH-18426: Expose DHLog in global context to allow changing the log level via the browser console |
502 | DH-16191: Core+ Python Auth Context Methods |
501 | DH-16872: Fix Web not displaying PQ error restart count >10 correctly |
500 | DH-18329: Allow user calendars to override system calendars in Core+ |
499 | DH-18187: Fix console history not sticking to bottom |
498 | DH-18071: Add test to support DeephavenUI dashboards from a code studio. |
497 | DH-18175: Modified Podman start_command.sh to support Podman on MacOS and to fix --nohup always being applied DH-17696: Added A to start_command.sh dig call, to ensure IPv4 address is retrieved. |
496 | DH-17932: Change array handling and add label searches to dh_helm uninstall functions |
495 | DH-16189: Fix deephaven.ui panels when permissions change |
494 | DH-17936: Warn when DH_JAVA is set to an invalid path |
493 | DH-17798: Pin deephaven.ui version to 0.15.4 DH-16150: deephaven.ui in Enterprise DH-17292: Fix tables opened with deephaven.ui throw error when disconnected |
492 | DH-17880: Change Podman start_command.sh default behavior to preserve existing properties DH-17977: Add volume options to Podman start_command.sh for illumon.d/java_lib and illumon.d/calendars volumes DH-17999: Fix coreplus_test_query.py nightly test |
491 | DH-18025: Add missing gradle inputs for web dependencies |
490 | DH-18075: Disable certificate-validation script |
489 | DH-18054: Improve validate_certificates.sh script for older OSes |
488 | DH-18035: Remove local - from installer scripts |
487 | DH-17852: Add validation that truststore contains desired certs; ensure all web cert intermediates in new truststore |
486 | Merge updates from 1.20221001.394 |
485 | DH-18002: Move QA SAML Instructions into repo to sync with releases |
484 | DH-18004: Add explicit dependency from coreplus client to numpy to track upstream dependencies |
483 | DH-18003: Pull back username-as-group fix from DH-17754 |
482 | Changelog fix. |
481 | DH-17093: Discard failed promises in CompilerTools. |
480 | DH-17951: Make InternalDeployer stop using username for group in chown |
479 | DH-17949: Backport DH-17481 Core+ Python SystemTableLogger codec support |
478 | DH-17933: Fix java 8 compilation issues in JpyInit |
477 | DH-17873: add --nohup option to Podman start_command.sh DH-17886: add option to Podman start_command.sh to mount /db/IntradayUser volume DH-17885: add option to Podman start_command.sh to mount /db/Users volume DH-17835: remove writability check of volume directories in Podman start_command.sh |
476 | DH-17903: ClassCastException reading parquet file in Legacy |
475 | DH-17928: Fix QueryScheduler token warning |
474 | DH-17929: Fix extra character in TestDefinition |
473 | DH-17915: Fix Legacy barrage subscriptions for rows with empty object arrays of non-Object type |
472 | DH-17824: Fix podman redeployments when logs are stored on a volume |
471 | DH-17902: QA DNS name utility enhancement |
470 | DH-17791: Modified configurations lose creation time |
469 | DH-17909: Increase performance overview test wait time |
468 | DH-17901: Enable legacy python to lookup location of libpython.so |
467 | DH-17499: Fix several dh_helm problems and improve usability when used with values.yaml |
466 | DH-17894: Updates to QA DNS name utility |
465 | DH-17887: deephaven_enterprise.remote_table should return a python deephaven.table.Table object |
464 | DH-17883: Relocate QA DNS Utility to more appropriate location |
463 | DH-17811: Eggplant SUT setup - cleanup final cmds. |
462 | DH-17864: Fix missing tests on integration runs |
461 | DH-17589: Fix summary table on qa-results |
460 | DH-17811: Setup scripts for new SUT boxes to use for Eggplant tests |
459 | DH-17849: Set eggplant VM size in correct location |
458 | DH-17635: Create utility to manage virtual names for QA Test results servers |
457 | DH-17601: Setup auditable dashboards for junit tests on qa-results |
456 | DH-17830: Stop pip from attempting to check PyPI during container initialization |
455 | Merge updates from 1.20221001.389 |
454 | DH-17111: Add better handling for known error case |
453 | DH-17626: Add Eggplant nightly jenkins job |
452 | DH-17718: Add atexit handler to shutdown workers rapidly DH-17430: Handle trailing metadata to produce better error messages in python client DH-16939: More error message improvements |
451 | DH-17770: Installer jar needs to be republished to io.deephaven.enterprise |
450 | DH-17795: Fixed passing script text as script name to classloader in Core+ |
449 | DH-17697: Support volume for /var/log/deephaven and custom volumes in podman deployment |
448 | DH-17687: Allow incremental include filters in TestAutomation runs |
447 | DH-17589: Fix summary table on qa-results |
446 | DH-17688: Fix PQ imports for eggplant |
445 | DH-17664: Disable some inconsistent controller tests |
444 | DH-17657: Fix default DH_ETCD_USER value in dh_users script |
443 | DH-17654: PersistentQueryConfigTableFactory per-client tables must override satisfied. |
442 | DH-17030: Add single-server, non-root accounts, and Envoy support to podman deployments |
441 | DH-17638: Fixed WebClientData query Reconnects to Controller using incorrect UserContext |
440 | DH-17630: Update Core+ to 0.35.2 DH-17622: DeferredACLTable must copy filters (cherry-pick) |
439 | DH-17609: pushAll.sh should allow a source tag DH-17557: Include db_query and db_merge images. |
438 | Release note formatting fix. |
437 | DH-17634: Fixed Web API Server Reconnects to Controller using Incorrect Context |
436 | DH-17623: Core+ Performance Overview has Bad Error Message on V+ |
435 | DH-17604: Allow int-tests to setup gwt tests |
434 | DH-17559: Republish should capture Installation Media (tar files) |
433 | DH-17608: Tighten permissions for java plugins |
432 | DH-15896: Update build instructions for a qa-results system based on testing of Junit ticket |
431 | DH-17443: prohibit --password from being given more than once |
430 | DH-17496: Additional fixes to writing vectors for Core+ support |
429 | DH-17496: Fix writing Vectors to User tables and reading parquet arrays in legacy workers |
428 | Merge updates from 1.20221001.386 |
427 | DH-17583: Replace a stray jcenter() with mavenCentral() in gradle |
426 | DH-17557: Build and upload container images to GS Buckets |
425 | DH-15624: Correct tolerations applied to Envoy. |
424 | DH-17505: Allow data managers to command DIS truncate |
423 | DH-17568: Fix typos in cluster monitoring queries |
422 | DH-17550: EggplantIntTestSetup should not pass --prodTests flag |
421 | DH-17539: Do not use sudo with -g flag to invoke chgrp |
420 | DH-17551: Qa results metrics add new release |
419 | DH-17540: Update merge/validate queries for Test Automation |
418 | DH-17435: Improve installer test robustness/feedback |
417 | DH-17542: Fix test results to handle non-zero exit status |
416 | DH-17541: Update test results server build instructions |
415 | DH-15896: Track unit tests more accurately |
414 | DH-15624: Add support for tolerations, selectors, and affinity in Helm chart |
413 | DH-17483: Fix run counter logic on qa-results |
412 | DH-17343: Make installing from infra node as irisadmin work (plus test) |
411 | DH-14499: Containerized deployment with podman |
410 | DH-17120: Add qualified references to etcdctl in installer scripts |
409 | DH-16353: Ability to disable password authentication in front-end (swing) |
408 | DH-17498: Fix for dhconfig NPE introduced in 406. |
407 | Merge updates from 1.20221001.382 |
406 | DH-17055: dhctl checks for disabled tailer ports when scanning DH-17443: remove auth options from dhconfig checkpoint DH-17498: remove duplicate status and garbage logging from dhctl |
405 | DH-17506: Do not treat a default INSTALLER_ROUTE value as a user override value |
404 | DH-17504: Fix disabled context menu items for superusers in Query Monitor |
403 | DH-17493: Fix controller_tool test 11 for all supported java |
402 | DH-17485: Fix Web Temp Schedule |
401 | DH-17373: Add DH_NODE_N_INSTALLER_ROUTE for installs from bastion |
400 | DH-16001: Enforce logDirectory with zoneId in Core+ SystemTableLogger builder |
399 | DH-17120: Add DH_DIR_ETCD_BIN to control where etcd binaries are found |
398 | DH-17232: Do not call require_owner if DH_SSH_USER not set |
397 | DH-17272: Make V+ Core+ Python Client Compatible with non-Envoy Grizzly |
396 | DH-17463: Update Core+ to 0.33.6 |
395 | DH-17445: Allow config Property to override ServiceRegistry hostname DH-17056: Allow Endpoint config to override ServiceRegistry hostname |
394 | DH-17279: Add options to disable WebGL |
393 | DH-17420: Fix error with context menu filter on TreeTables |
392 | DH-17414: Dispatcher should log cancellation reason |
391 | DH-17395: Fixed an issue reading old parquet files with improper dictionary offsets. Fixed an issue reading nulls in INT96 encoding |
390 | DH-17400: Use --verbose flags when installation scripts invoke dhconfig |
389 | DH-17353: Remove centos test coverage |
388 | DH-17413: Fix bad string substitution when ssh keys have -v in them |
387 | DH-17288: Fix Exception When Importing a Jackson Query |
386 | Merge updates from 1.20221001.380 |
385 | DH-17164: Fix JsTreeTable Fails when Same Filter is Applied Twice |
384 | DH-17407: Fix Temurin repo setup for RHEL/Rocky |
383 | DH-16346: Fix Validate Settings Tab for View Only Mode |
382 | DH-17369: Convert qa-results scripts to corePlus and python |
381 | DH-17359: Fixed random Test failures noticed for csv Custom Setters |
380 | DH-17356: Core+ Logger not handling parameters appropriately |
379 | DH-16737: Fix package-lock.json file that was erroneously generated |
378 | DH-16346: Fix Query Monitor Right Click Menu for View Only Query |
377 | DH-17357: Core+ workers should listen on all interfaces in Bare Metal |
376 | DH-17347: Core+ kafka ingester NPE with transformation |
375 | DH-17334: Cherry pick CART improvements from 1.20211129.422 |
374 | DH-17303: Add primitive and String array support to Core+ SystemTableLogger |
373 | DH-17327: 'dhconfig dis export' handles empty set better |
372 | DH-17238: combine nested table filters in table data services |
371 | DH-17332: Fix for QA meta results bug in .362 |
370 | DH-16987: Fix client-only etcd update scripts |
369 | Merge updates from 1.20221001.374 |
368 | DH-16737: Reconnect deephaven.ui widgets upon PQ restart DH-16738: Report errors in deephaven.ui correctly to user DH-17311: Update Core+ to 0.33.5 |
367 | DH-17318: Update VPlus gen loggers test for env |
366 | DH-17314: QA Documentation only update path corrections |
365 | DH-16987: Prefer etcd config files from /etc/sysconfig/deephaven/etcd/client over etcd tar |
364 | DH-17081: Fix Pandas widgets from Core+ workers in dashboards |
363 | DH-17299: allow configuration server to start without routing file |
362 | DH-17170: Qa results - move testEvalStats to a 2col, 3 row table |
361 | DH-16394: Fix Query Summary Out of Sync |
360 | DH-16172: Show Engine, Community Port, and k8s information in Safe Mode |
359 | DH-17168: qa-results refactoring of audit metrics |
358 | DH-16504: ConstructSnapshot and PUT do not consistently handle Instant |
357 | DH-17154: QA meta results refactoring |
356 | DH-17281: Fix Padding for Dashboard Shortcut Titles |
355 | DH-16854: If Login Cancelled after Auth then Log Out |
354 | DH-17280: Make eggplant-api.sh properly update existing test case, fix installer tests |
353 | DH-16876: Fixed csv_import utility not respecting proper default or explicit SAFE flag |
352 | DH-16346: For View Only Query, hide the Save, Copy, and Delete Buttons |
351 | DH-17268: Correctly pad zeroes for JS datetime format |
350 | DH-17262: Better support for input and output cluster.cnf as separate files |
349 | DH-16747: Add eggplant gradle task and jenkinsfile |
348 | DH-16129: Update Instances of community language to Core+ in UI |
347 | DH-17264: Ensure cron on qa-results does not repeat unnecessary elements |
346 | DH-17098: Fix package-lock.json for Jupyter-Grid |
345 | DH-17236: Backport DH-16790 Controller test improvements |
344 | DH-17243: Script for rebumping changelog. |
343 | DH-17162: Use dh-mirror for internal VM images |
342 | DH-17219: Fix how installer handles comments in cluster.cnf. DH-17240: Reduce cluster.cnf parser warnings |
341 | DH-16766: Capture segmented results from nightly tests |
340 | DH-17229: Fix inaccuracies in filtering test cases |
339 | DH-17063: quiet dhconfig output when configuration server is down (logging npe) |
338 | DH-17228: Use mysql acls in some nightly tests |
337 | DH-16425: Vplus Feb-June 2024 test case updates for QA |
336 | Merge updates from 1.20221001.364 |
335 | DH-17211: Fix erroneous Core+ hist part table data discovery |
334 | DH-17197: Fix failing DeploymentGeneratorTest |
333 | DH-17076: Update Web to 0.78.1, fix LayoutHint groups on TreeTables |
332 | DH-17174: Update Core+ to 0.33.4 |
331 | DH-17145: Remove unused CUS and RTA installer roles and stop tracking ROLE_COUNT |
330 | DH-17157: Code Studio cannot set Kubernetes Container Image |
329 | DH-17160: Auth server must set authenticatedContext after successful external auth |
328 | DH-17137: Authentication Server not cleaning up all client state when client sessions expire |
327 | DH-17131: Dependencies must be built to Java 8 API, not just bytecode |
326 | DH-17104: Ensure worker overhead properties are applied by default for kubernetes |
325 | Merge updates from 1.20230511.506 |
324 | DH-17118: Improve cluster.cnf parsing logic Merge updates from 1.20230511.505 DH-16829: Update worker overhead properties DH-17072: Do not write temporary-state DH_ properties to cluster.cnf DH-17026: Publish EngineTestUtils (backport of DH-15687) DH-17058: Make pull_cnf disregard java version DH-16884: Add configuration for default user table format to be parquet DH-17048: Fix controller crash and shutdown issues DH-17014: Make cluster.cnf preserve original source and prefer environment variable DH-17045: Address Test Merge issue in Vermilion DH-17011: Forward Merge of promoted tests from Jackson and promotions in Vermilion Backport DH-16948: Always use same grpcio version for Core+ Python proto stub building DH-17031: Minor corrections and formatting for QA automation How-to DH-16936: make recreating schemas watch more efficient DH-16717: Add heap usage logging to web api, TDCP, DIS, LAS, controller, and configuration server DH-17004: change closeAndDeleteCentral to clean up tdcp subscriptions DH-17000: Correct improper test promotion in Jackson DH-16888: Preserve original cluster.cnf when regenerating cluster.cnf w/ defaults DH-16599: Bard Mar 2024 test case updates for qa DH-16986: Update for flaky results from merge test starting at Bard DH-16887: Fix test for DH-11284 starting at Bard DH-16797: Change git location on QA testing systems DH-16996: Forward merge of tests fixed in Bard to Jackson DH-16992: Promoting Jackson level tests to RELEASED DH-16979: Fix for CSV tests Jackson and later DH-16663: remove cached data when there are no active subscriptions DH-16934: Fix permissions check for writing workspace data DH-16908: Fix dry run in iris_keygen.sh DH-16851: Improve qa results setup docs DH-16826: Select/Deselect All for OneClick Lists in Export dialog (swing) DH-15247: Set DH_ETCD_IMPORT_NODE default value to the first config server DH-16675: Account for worker overhead in dispatcher memory utilization DH-16702: Vermilion April2024 test case updates for qa DH-16958: Backport DH-16868 - Check if shadow package already added before adding again DH-16875: Fix CSV import tests DH-16873: Update and correct "testScript" section of automated QA tests DH-16716: Parameterized logging broken in vermilion DH-16847: Update and correct Dnd testing scripts DH-16836: Fix forward merge anomaly DH-16813: QA testing git update to Jackson DH-16818: QA Testing System file relocation and documentation updates DH-16072: Jackson Dec2023 test case updates for qa DH-16480: Documentation and support for QA_Results system build DH-16794: better handle export of nonexistent routing file DH-16762: Fix C# docfx task (need to pin older version) DH-16584: Make internal installer use correct sudo when invoking iris_db_user_mod DH-16586: Improve qa cluster cleanup script DH-16640: fixes for tests failing on bard and later revisions DH-16708: Improve import script on qa results DH-16698: Update BHS images to fix a broken rhel8 test DH-16752: Fix installer tests getting null clustername DH-16605: Use grep before sudo sed to avoid root when migrating monit DH-16406: Improve jackson nightly installer test debuggability DH-16718: Fix test cases based on CommonTestFunctions refactor DH-16706: ColumnsToRowsTransform.columnsToRows fillChunk does not set output size DH-16700: Ensure QA results setup is maintainable DH-16750: Fix temporary and auto-delete scheduling checks DH-16542: CUS should trim client_update_service.host - fix for Envoy DH-15013: Fix upload CSV from Web UI for timestamp columns |
323 | DH-17066: Apply Kubernetes Control to Legacy workers in ConsoleCreator |
322 | DH-17113: Fix permissions on test support files |
321 | DH-17063: Fix integration tests from quieter error output |
320 | DH-17070: AuthenticateByPublicKey misses state when different servers are involved |
319 | DH-17101: Update protobuf gradle plugin |
318 | DH-16557: Fixed DHC CSV Import not working with gzip files |
317 | DH-16955: Fix rollup rows and moved columns hydration |
316 | DH-16983: Test Automation - push git scripts to all controller nodes |
315 | DH-17098: Update package-lock in jupyter-grid |
314 | DH-17087: Minor test system documentation update for V+ |
313 | DH-17086: Fix Test Automation README |
312 | DH-17074: Controller Tool Status Should Use a Static Table |
311 | DH-14265: Make new PR check cancel any still-running PR check |
310 | DH-16255: Fix incorrect log message in python setup script |
309 | DH-17063: quiet dhconfig output when configuration server is down |
308 | DH-17057: add support for remote DataImportServiceConfig.asTableDataServiceConfig |
307 | DH-17049: Allow disabling password authentication |
306 | Update web version 0.78.0 |
305 | Java 8 build fix and changelog fixes. |
304 | DH-17035: Ensure BUILD_URL from jenkins is populated in Test Automation results |
303 | DH-17032: Deep linking can cause the wrong dashboard to open after logout |
302 | DH-17042: Forward-merge Test Automation |
301 | DH-17033: Combine JS API table ops on login to improve speed |
300 | DH-16978: Additional fixes for multiple auth servers |
299 | DH-17029: handle removed locations in the LTDS |
298 | DH-16866: Improve Test Automation to target cluster |
297 | DH-17023: Added "target version" parameter to update-dh-packages script |
296 | DH-17017: Skip staging tests on Feature Branch runs in jenkins |
295 | DH-16143: Update GWT-RPC to avoid websocket reuse bug DH-16642: Web UI should allow a second QM |
294 | DH-16164: dhconfig schema import -d does not handle symlinks properly |
293 | DH-16933: DH-16778: Fix dashboard export saving extra dashboards and queries |
292 | DH-16995: Plotly express does not work in Deephaven UI |
291 | DH-16997: Make internal installer detach install scripts from java process group to avoid getting killed on failures |
290 | DH-16950: Prevent ChunkerCompleter.resolveScopeType from getting into an infinite recursive loop and crashing |
289 | DH-16988: Ensure nightly test VM names are unique, and other test stability improvements |
288 | DH-16658: Hive layouts should return an empty table if the table base location does not exist |
287 | DH-16941: MergeParametersBuilder should have a default value for threadPoolSize |
286 | DH-16926: Fix test generation error on multi-PQs |
285 | DH-16914: Update DHC packages to ^0.77.0 DH-16914: ACL Editor crashes with error: No item found matching key: 0 |
284 | DH-16976: Fix java 11 compile from 283 |
283 | DH-16976: Fixed Core+ out of bounds errors when trying to unbounded fill |
282 | DH-16916: Pin Spectrum Dependencies for @adobe/react-spectrum 3.33.1 DH-16916: ACL Editor: unable to scroll the Namespace dropdown |
281 | DH-16978: Multiple auth server private-key validation failures |
280 | DH-16970: Ensure EXCLUDE filter in Test Automation is honoured on kafka |
279 | DH-16969: Allow RemoteTableBuilder to work with clusters behind envoy |
278 | DH-16971: Make internal installer clear failed systemd units so systemctl is-system-running works |
277 | DH-16907: Allow test automation with no FeedOS schemas |
276 | DH-16913, DH-16962: Make all nightly tests pass, and run stably |
275 | DH-16965: correct error message when LAS is not available |
274 | DH-16544: Bug fixes for dh_helm |
273 | DH-16890, DH-16779: fix java version on nightly tests, use internal java repositories |
272 | DH-16953: Put the version back into rpm package names |
271 | DH-16921: Fix DashboardOverride rewriting without changes |
270 | Update Web Version 0.76.0 |
269 | DH-14825: Java 8 compilation fix. |
268 | DH-16948: Always use same grpcio version for Core+ Python proto stub building |
267 | DH-16944: Add Cross Cluster test to Grizzly QA |
266 | DH-16925: Snapshot locations break multi-level pages for parquet regions |
265 | DH-16910: Adjust Kubernetes heap overhead parameters |
264 | DH-16923: make claims filter consistently accept user tables |
263 | DH-15984: better handle export of nonexistent routing file |
262 | DH-16907: Update FeedOS schemas for ticking source |
261 | DH-14825: CUS should ensure served files are accessed |
260 | DH-15824: Fix cluster.cnf backup commands |
259 | DH-16862: Core+ does not properly convert between Legacy and Core NULL_CHAR |
258 | DH-16883: Upgrade should import the new status-dashboard-defaults.json file |
257 | DH-16889: Fixed an NPE in ungroup with nulls in array native array columns |
256 | DH-16898: Fix configuration for high cpu tests |
255 | DH-16890: Fix imports in AbstractDeploymentTest.groovy |
254 | DH-16811: Config to support nightlies in test automation |
253 | DH-16868: Check if shadow package already added before adding again |
252 | DH-16189: Update deephaven.ui and plotly plugins DH-16189: Fix re-hydration of deephaven.ui plugins in dashboards plotly-express v0.7.0: https://github.com/deephaven/deephaven-plugins/releases/tag/plotly-express-v0.7.0 deephaven.ui v0.13.1: https://github.com/deephaven/deephaven-plugins/compare/ui-v0.8.0...ui-v0.13.1 |
251 | DH-16720: Support deephaven.ui dashboards from PQs |
250 | DH-16852: Do not permit scheduling a worker with a heap more than available memory. |
249 | DH-14975: Make DH_JAVA the ultimate source of truth for "where to find java" DH-15824: Backup previous /etc/sysconfig/deephaven/cluster.cnf whenever upgrading |
248 | DH-16420: More versatile configuration of status dashboard query monitoring DH-16850: Fix Kubernetes installation issues |
247 | DH-16832: Test built in Community Code should not be run in Java8 |
246 | DH-16842: Use Parameter instead of QueryTracker Config in Dispatcher Usage Update |
245 | DH-16821: Pull qa-results improvements forward to vplus |
244 | DH-16838: DELETEs handled incorrectly in Presence KV Monitor |
243 | DH-16822: ReplicatedTable doesn't handle all possible long backed time sources |
242 | DH-16544: dh_helm fixes and enhancements |
241 | DH-16835: Expose WorkerHandle through PersistentQueryHandle as well as connections. |
240 | DH-16224: Refresh ACL data when switching to Import, Merge, or Validate tabs |
239 | DH-16823: Controller client should not print scary error on graceful shutdowns |
238 | DH-16773: Web version bump to v0.72.0 |
237 | DH-16816: Failure to Cancel PQ Can Result in Controller Crash |
236 | DH-16783: Fix ChartBuilder in Web UI |
235 | DH-16804: Update deephaven.io version log generation script. |
234 | DH-14610: Use domain names to send files to etcd server machines DH-15749: Etcd server IP address should be configurable to support multiple network interfaces DH-14859: Never leave world-readable etcd config tars on disk |
233 | DH-16776: Fix errors when sorting symbol tables with mixed nulls |
232 | DH-16791: SystemTableLogger Checker is Timing Out |
231 | DH-16787: PresenceWatcher is started under lock |
230 | DH-16693: Run core+ integration tests during nightly installer testing |
229 | DH-16767: Core+ exec_notebook broken in .213 |
228 | DH-16633: Rebuild VM images with etcd 3.5.12 instead of 3.5.5 |
227 | DH-16805: Fix C# docfx task (need to pin older version) |
226 | DH-16655: Make internal installer replace certs that expire in 2 months or less |
225 | DH-16605: Use grep before sudo sed to avoid root when migrating monit |
224 | DH-16721: Core+ Python Client Should Reauthenticate to Controller |
223 | DH-16740: Share JS API cache between deferred loader and app |
222 | DH-15994: Fixed Core+ DictionaryRegionHelper incorrectly accounting null values |
221 | DH-16689: Core+ worker cannot read direct DbArray Columns |
220 | DH-14774: correct syntax error in update_workspace.py, update installer version |
219 | DH-16731: Republish coreplus java jars, and always use jdk11 for republishing |
218 | DH-16719: SAML Login From Core+ Python Client. DH-16695: Support io.StringIO as a private key in Core+ Python Client. |
217 | DH-16729: unbox primitive types even when specified as java.lang.Type in schema |
216 | DH-16728: correct error message diagnosing invalid listener |
215 | Release note fixes. |
214 | Merge updates from 1.20230511.488 |
213 | DH-16678: Add vermin check to Core+. DH-16705: Add meta import machinery for controller. DH-16709: Provide Mechanism to Refresh Controller Scripts without Git Configured DH-16710: git repository state is incorrectly serialized |
212 | DH-16703: Update Vermilion+ to 0.33.3 |
211 | DH-16687: Add etcd ACL encoding tool |
210 | DH-16670: FeedOS test support from Bard to VPlus |
209 | DH-16668: Refactor controller_tool tests to wait for logging to be done. |
208 | DH-16686: Update Vermilion+ to 0.33.2 |
207 | DH-16634: Fix dashboards migration issue |
206 | DH-16626: Support deephaven.ui dashboards from a code studio |
205 | DH-16621: Expose available query objects as a table to users |
204 | DH-16664: Fix Core+ cpp-client dockerized build after incompatible changes on DHC 0.33 |
203 | DH-16656: Fix listener reachability in TableMapTest, added integration test for DH16656 |
202 | DH-16656: ResolveTools sets empty columns on snapshot |
201 | DH-16659: tailer handles data routing impl that does not support change notification |
200 | DH-16652: Update automated tests on controller_tool for VPLus |
199 | DH-16644: Update copyright year in web launcher page |
198 | DH-16637: Fixed Core+ .toUri() stat'ng directories during discovery |
197 | DH-16462: Add profile JIT CPU options |
196 | DH-16582: upgrade etcd from 3.5.5 to 3.5.12 |
195 | DH-16616: Fix Safe Mode in Web UI |
194 | DH-16617: Fix line plots in Web UI |
193 | DH-16589: automated validation test for import driven kafka lastBy DIS |
192 | DH-16593: Fixed Legacy CART trying to reconnect even after good data was received. |
191 | DH-16189: Enable deephaven.ui widgets from PQs |
190 | DH-16575: Core+ Python Client Wheel Should be Usable in Worker VEnv DH-16530: Loosen Core+ Client Version Requirements |
189 | DH-16492: Fix javadoc |
188 | DH-16564: Package jupyter logged_session in iris repo |
187 | DH-16500: Update deephaven-plotly-express plugin to 0.5.0, update Web UI to v0.67.0 DH-16427: Web plotting should not ignore xRange for histPlots DH-16490: Fix deephaven.plot.express data |
186 | DH-16596: Reapply fix for DH-16221 (Controller allows resubscriptions) |
185 | Jdk8 Compilation Fix. |
184 | DH-16290: correct initial install condition |
183 | DH-16290: 'dhconfig routing' validate and import must consider existing extra dises DH-15984: improve 'dhconfig routing export' feedback when there is no routing file |
182 | DH-16492: create local cached DataRoutingService, use it in the tailer |
181 | DH-16249: Use correct API for widgets |
180 | DH-16364: If etcd is setup but not working correctly, fail the install instead of generating a new etcd cluster |
179 | DH-16579: URL encode groups in removeMembership for MySQL ACLs |
178 | DH-16537: Fix partition_by failing to render the table |
177 | DH-15918: correct unit test |
176 | DH-15918: tailer restarts on routing change DH-16148: create listener framework for data routing service |
175 | DH-16144: add writers group for data routing service writers |
174 | DH-16543: Add missing WorkspaceData data types, and all-types file, to backup_deephaven script |
173 | DH-16554: Update Web UI to v0.66.1 DH-16554: Upgrade React Spectrum to ^3.34.1 DH-16554: Removed some ACL Editor css classes |
172 | DH-16383: Remove all passwords from logs, automated test of no passwords in logs |
171 | DH-16483: Fix WindowCheck entry combination bug. |
170 | DH-16370: Update Core to version 0.33.1 |
169 | DH-16533: Fix dispatcher error response failure conditions |
168 | DH-16551: Link to Enterprise Javadoc from Core+ Javadoc |
167 | DH-16563: dhconfig dis add should mention --force when the dis already exists |
166 | DH-16556: allow export of core dises |
165 | DH-16535: Fix Persistence KVs not being cleaned up properly |
164 | DH-16220: Add DBNameValidator to namespace and tablename ACL inputs fields |
163 | DH-16529: SBOM coreplus artifact shouldn't use dnd in its name |
162 | DH-16524: Fix WorkerKind JSON generation from controller request |
161 | DH-16337: Update delete intraday data label to match swing |
160 | DH-16483: Fix javadoc build failure |
159 | DH-15771: Fixes for dh_helm script |
158 | DH-16508: Integration test update rocky compatibility |
157 | DH-16493: Make core+ builds leverage gradle task caching |
156 | DH-16483: Reduce WindowCheck memory usage. #1398 |
155 | DH-16417: Make manifest.json visible in k8s environments |
154 | DH-16282: Fixed CI build to fail on jest / junit errors not just failures DH-16282: ACL Editor - Table ACLs error when clicking "Update ACL" that will become "Add ACL" |
153 | DH-16494: Fix Swing ACL Editor requesting ACLs for null NS or TN |
152 | Merge updates from 1.20230511.475
|
151 | DH-16479: Integration tests added for core+ kafka transformations |
150 | DH-16489: Integration test for Python Core+ table groups. |
149 | DH-16488: Update Core to 0.32.1 |
148 | DH-16489: Core+ Python ACL Transformer not unwrapping Tables |
147 | DH-16438: Add time to installer dependencies / rocky VM images |
146 | DH-16475: Integration test fixes |
145 | Merge updates from 1.20230511.474
|
144 | DH-16472: NPE in PQWorkerServiceGrpcImpl |
143 | DH-16463: Update Web UI to v0.63.0 DH-16463: Fix false positives when detecting layout changes |
142 | DH-16471: Added shortcut for copy version info |
141 | DH-16460: fix poor contrast color of notice message in share modal |
140 | DH-16455: Fix Download CSV in Web UI |
139 | DH-16458: Fix Swing ACL Editor requesting ACLs for null NS and TN |
138 | DH-16452: Disable table name dropdown when * ns is selected |
137 | DH-14914: Test core+ auto install |
136 | DH-16127: Fix readme for dnd version |
135 | DH-16373: ACL write server should enforce system user limitations |
134 | DH-16446: Legacy Parquet does not interpret LocalDate stored as int in Parquet format |
133 | DH-16315: ACL Write Server Should Prohibit Namespace=* without Tablename=* |
132 | DH-16326: io.deephaven.kv.acl.AclJetcdProvider Needs to Escape Data |
131 | DH-16336: Consistent handling of whitespace typing / pasting |
130 | DH-16411: Integration test had duplicated serial number. |
129 | DH-16440: Fix Kubernetes restartAll script errors |
128 | DH-16025: Legacy BarrageTableResolver should return a table |
127 | DH-16437: Make rocky9 require rsync-3.2, same as RHEL 9 |
126 | DH-16426: Update Web UI to v0.61.0 DH-16426: Allow themes to use any srgb color for definitions |
125 | Release note updates. |
124 | DH-16413: Non-superusers should have access to WebClientData tables DH-16416: UserGroupArrayFilterGenerator should escape groups |
123 | DH-16277: When using Rollup Rows, ungrouped columns become sorted alphabetically and should not |
122 | DH-16302: Fix Merge/Validate queries adding an extra field to the PQ DH-16371: Fix PQ Start/Stop actions inconsistently enabled/disabled |
121 | DH-16385: Envoy Does not Have Cluster/Route for Multiple Auth Servers |
120 | DH-16411: Dispatcher crashes when invalid WorkerKind is requested |
119 | DH-15771: Create Kubernetes Deephaven install/uninstall/upgrade wrapper script DH-16217: Update buildAllForK8s.sh to use coreplus instead of dnd |
118 | DH-15955: Official installer support for rocky8/9 |
117 | DH-16327: Fix Java 8 incompatibility |
116 | DH-16327: Properly URL encode ACL requests |
115 | DH-16362: allow dises+routing for complete import of routing config |
114 | DH-16362: revert allow dises+routing for complete import of routing config |
113 | DH-16362: allow dises+routing for complete import of routing config |
112 | DH-16350: fix installer keygen script for controller and acl write server |
111 | DH-16321: Duplicated values in a cluster.cnf file should cause a validation error |
110 | DH-15803: Improve error messaging around partitioned user table location overlap |
109 | DH-16368: Add Support for remote clusters with RemoteTableBuilder |
108 | DH-16332: Ensure worker to controller notifications (eg table errors) are not lost if controller restarts |
107 | DH-16369: Make internal installer overwrite versions when using pull_cnf |
106 | DH-15934: Routing config change for RemoteTableAppender in k8s |
105 | DH-16288: Hide k8s-related fields in query monitor when not deployed in k8s |
104 | DH-16251: Allow Core+ workers to load calendars from disk |
103 | DH-16082: Don't show RunAndDone queries in the Panels menu |
102 | DH-16355: Kafka Community Test Fails After .079 |
101 | DH-16219: Disallow namespaces and table names containing spaces at ACL API endpoint |
100 | DH-16287: Web API Server Reconnections Preserve Code Studio |
099 | DH-16331: Make PULL_CNF work for jenkins and local vm deploys |
098 | Revert DH-16251: Allow Core+ workers to load calendars from disk |
097 | DH-15353: Add client IP address to audit log for authentication events in web_api_server |
096 | DH-16251: Allow Core+ workers to load calendars from disk |
095 | DH-16320: ACL Editor - url encoding |
094 | DH-16333: db.livePartitionedTable error message misspelled |
093 | DH-15415: Fix jdk8 javadoc task |
092 | DH-16269: Add support for Core+ queries in irisapi examples |
091 | DH-16312: ACL Editor - Close selectors on select DH-16314: ACL Editor - Only allow * table name when * ns is selected |
090 | DH-16139: Add cert expiry times to status dashboard |
089 | DH-15415: Improve ACL exceptions |
088 | DH-15521: Add official installer support for ubuntu 22.04 |
087 | DH-16324: Fix DbAclEditorTableQuery canedit logic |
086 | Merge updates from 1.20230511.464:
|
085 | DH-16235: Fix QM Summary out of sync with the Queries Grid |
084 | Merge updates from 1.20230511.463:
|
083 | DH-15864: Fix undefined partitions in IrisGridPanel state |
082 | DH-16318: Make iris_keygen.sh avoid adding to truststore when --skip-* flags are used |
081 | DH-16305: Fixes to get Deephaven working with IAP in Kubernetes |
080 | DH-16121: ACL Editor - Action tooltips |
079 | DH-16296: MySQL publickey table fails on Jackson to Vermilion+ Upgrade DH-16298: New Installations Should default to etcd ACLs DH-16307: DH_DND_VERSIONS should write "auto" not automatically selected version to cluster.cnf DH-16297: Jackson to Vermilion Upgrade does not Create Python 3.10 Virtual Environment |
078 | DH-16058: Add Memory Printing to Tailer |
077 | DH-16286: MultiViewBuilder Test Must not Depend on Static Inheritance |
076 | DH-16150: Add widget plugins to handle widgets in Web |
075 | DH-14646: improvements after testing |
074 | Merge updates from 1.20230511.462:
|
073 | DH-16247: Update Core+ to Core 0.32.0 DH-16270: Fix update_by liveness |
072 | DH-16280: ACL Editor - Reset tablename selection when namespace changes DH-16281: ACL Editor - Input table ACLs should not have "Columns" column in table view |
071 | DH-16258: iris-querymanagers should still see special queries in web |
070 | Update Web UI to v0.59.0
|
069 | DH-15857: Handle async due to gRPC internal state after Controller client subscription shutdown |
068 | DH-16261: Extra DIS routing backups DH-16274: Add DIS routing integration tests |
067 | DH-16264: Fix unthemed legacy worker plots |
066 | DH-16258, DH-16259: Frontends display non-displayable config types for non-admin users. |
065 | DH-15794: Add status dashboard helm chart |
064 | DH-16157: Core+ Cart should maintain a reference and manage the lifecycle of the ManagedChannel |
063 | DH-16266: Add Javadoc for Protobufs |
062 | DH-16209: Add dedicated volume for git repo in k8s envs |
061 | DH-16189: Pass all session objects to the Web UI |
060 | DH-16248: etcd/admin_init.sh should retry user existence check |
059 | DH-16250: Add deephaven.ui 0.1.0 to Core+ Workers |
058 | DH-16023: Allow enable.auto.commit check on Boolean. |
057 | DH-16211: Fix Controller shutdown held up |
056 | DH-16155: Support ACLs for non-existing namespaces and table names |
055 | DH-16246: Make DbAclCorsFeature use standard cors props as backup |
054 | DH-16003: Make management-shell a deployment, worker label value safeguards |
053 | DH-15840: Error in CART after controller restart DH-16156: Inaccurate Error Message when using PQ+ resolver DH-16157: Error message being logged when using RemoteTableBuilder |
052 | DH-16222: Add KafkaTableWriter.disNameWithStorage |
051 | Merge updates from 1.20230511.451:
|
050 | DH-16122: Refresh Query Monitor user and group lists on ACL Editor changes |
049 | DH-16236: Prevent possible improper labels in k8s metadata |
048 | DH-16147: Update DHE C++ and R client for DHC 0.31.0/0.32.0 |
047 | Merge updates from 1.20230511.450:
|
046 | DH-16226: Fix Grid panel state persistence |
045 | DH-16223: Don't wrap query summary lines if there is enough space |
044 | DH-15591: Fix QueryMonitor recovery from web api service/controller restart |
043 | DH-16231: Fix scheduling issues in restored PQs after controller restart |
042 | DH-16150: Support for loading module plugins from workers, deephaven.ui from Code Studio |
041 | DH-16227: Fix an attempt to log a null Throwable from PresenceLeaseHandlerEtcd.abort |
040 | DH-16203: Improper Global State in Core+ Python Client DH-16215: Rename Python Core+ Client Wheel DH-16057: Core+ python client exception when closing session after manager |
039 | DH-16218: Set WorkerProtocolRegistry host and ports for Core+ workers |
038 | DH-16100: Fix getObject failing after web api service/controller restart |
037 | DH-16210: Fix usage of paste command with explicit /dev/stdin |
036 | DH-10941: Make DH_FORCE_NEW_CERTS work correctly |
035 | DH-16196: KafkaTableWriter Transformation should take UpdateGraph lock (fix duplicate graph names) |
034 | DH-16085: Add new fields to the Query Summary screen |
033 | DH-16196: KafkaTableWriter Transformation should take UpdateGraph lock |
032 | DH-16168: Update routing.yml for kubernetes installations |
031 | DH-16185: UpdatePerformanceLogCoreV2 is missing UpdateGraph |
030 | DH-16179: Rename ProcessUniqueId to ProcessInfoId in Core Performance Tables |
029 | DH-16177: Controller PQ ensureShutdown avoids trying to cancel processing requests never sent |
028 | DH-16182: Fix Python wrapper bypassing liveness defaults |
027 | DH-16163: SystemTableLogger Error in V+ DH-16169: PerformanceOverview Fails on Core+ Workers without Updates |
026 | DH-15912: Improve worker startup consistency in Kubernetes when cert-manager is enabled |
025 | DH-14646: dynamic dis management |
024 | DH-16158: Fix scheduled jobs loop on scheduled stop spamming the controller log file |
023 | DH-16154: Web ACL Editor is Failing Over Envoy |
022 | DH-16151: Fix stopping PQ after controller restart doesn't work. |
021 | DH-15890: Ensure persistent query pod labels are always populated in k8s environments |
020 | DH-16137: Fix RemoteQueryDispatcher.workerServerPorts port range conflict with Linux ephemeral ports |
019 | Merge updates from 1.20230511.445
|
018 | DH-16094: Core+ workers survive controller restart step 3 |
017 | DH-15800: Automatic Allocation of Kafka Resources in Kubernetes DH-15695: Automatic Allocation of Kafka (In-Worker DIS) Resources in Kubernetes |
016 | Merge updates from 1.20230511.444
|
015 | DH-15890: Add persistent query info to worker pod labels in k8s |
014 | DH-16132: Upload installer next to tar/rpm in jfrog |
013 | DH-16110: ServiceRegistry.writers should include iris-dataimporters and iris-datamanagers by default |
012 | DH-16132: Delete obsolete installer upload task |
011 | DH-16116: Fix query monitor theme |
010 | DH-14599: Build launchers externally and download into iris |
009 | DH-16125: Add deephaven.remote_table to sphinx output |
008 | DH-16126: Fix config_packager to use -s instead of -d on web key files |
007 | DH-16105: Fix core+ nightly tests |
006 | Changelog format fix for deephaven.io |
005 | Javadoc fix. |
004 | Javadoc fix. |
003 | Fix Python patch versions starting with "0" |
002 | DH-16037: CART needs to maintain AuthContext for internal Barrage subs |
001 | Initial release creation |
Detailed Release Candidate Version Log: Deephaven v1.20230512beta
Patch | Details |
---|---|
225 | Merge updates from 1.20230511.438
|
224 | DH-14968: Fix typo in python, add missing live_table parameter. |
223 | DH-15708: Function Transformations on Core+ Kafka Ingestion |
222 | DH-15189: Allow annotations for Envoy service in values.yaml. |
221 | DH-15883: Update web UI packages to 0.57.1 DH-15883: Wired up theme providers DH-15883: Added theme selector if more than 1 theme DH-15883: Updated references to renamed saveSettings redux action DH-15883: Updated all rgba css references to be hsla + some additional css variable mapping DH-15864: Scroll position StuckToBottom shouldn't trigger sharing dot DH-16020: Added theme selector if more than 1 theme |
220 | DH-15989: Update Performance Schema to Handle Core 0.31.0 DH-15690: Core+ Performance Overview should use Index Tables |
219 | DH-16037: Add a CART for core+ workers |
218 | DH-15935: Use worker node name as internal partition value in k8s |
217 | DH-16102: Fix CME in ArrayParser that resulted in Csv Import failure |
216 | DH-16095: Derive Worker Name from ProcessInfoId |
215 | DH-13179: Add "PQ Creation Date" as a column in the Query Config/Query Monitor |
214 | DH-16068: Core+ workers survive controller restart step 2 (and done) |
213 | Merge updates from 1.20230511.433
|
212 | Javadoc fix; correct merge of Grizzly image. |
211 | DH-16079: Update logging in CustomSetter tests |
210 | DH-15262: Improve the new Controller unit test timeouts |
209 | DH-15353: audit log entries for authentication service events |
208 | DH-12597: Support for CustomSetters in DHC CsvImport |
207 | DH-15781: Status dashboard follow-on work after community fixes |
206 | DH-15980: Core+ workers survive controller restart step 1 |
205 | DH-16062: Fix crcat exit code in test. |
204 | DH-16060: Port configuration for local plugin dev |
203 | DH-15814: Direct server process configuration and startup in Kubernetes environments. |
202 | DH-16040: EKS Helm Chart Problems |
201 | DH-15727: Make audit event logs code-driven |
200 | Merge updates from 1.20230511.421
|
199 | DH-16043: Minimum tornado version supported for python3.10 is 6.2 |
198 | DH-16042: Minimum wrapt version supported for python3.10 is 1.13.3 |
197 | Merge updates from 1.20230511.420
|
196 | Merge updates from 1.20230511.419
|
195 | Merge updates from 1.20230511.418
|
194 | DH-15262: New ETCD layout for Controller, added resync capability for inconsistent storage |
193 | DH-16029: Fixes for K8s Image Build with New Filenames |
192 | Merge updates from 1.20230511.417
|
191 | DH-15594: Add import-driven lastBy capability to Kafka DIS DH-16023: Turn off enable.auto.commit for Core+ Kafka Ingester |
190 | DH-16024: Avoid duplicate etcd config handling code |
189 | DH-468: Put jdk version into rpm and tar filenames |
188 | Merge updates from 1.20230511.413
|
187 | DH-15848: Fix field value for empty password in JDBC Import |
186 | DH-16013: Fix Forward merge of Dispatcher Liveness Test to Grizzly |
185 | DH-13377: refactor MergeData builder/constructor ecosystem DH-10253: Merge builder should have an option to specify TDS mode |
184 | DH-15978: Add import-driven lastBy capability to Core+ |
183 | Merge updates from 1.20230511.410
|
182 | DH-15999: DispatcherClient resiliency |
181 | Formatting fix from merge. |
180 | Merge updates from 1.20230511.407
|
179 | Merge updates from 1.20230511.405
|
178 | Formatting fix from merge. |
177 | Merge updates from 1.20230511.404
|
176 | DH-15948: Typo in CPU share denial message |
175 | DH-15848: Web support for JDBC Import query type |
174 | Merge updates from 1.20230511.395
|
173 | Merge updates from 1.20230511.393
|
172 | Merge updates from 1.20230511.391
|
171 | DH-15961: Initializing Groovy Session creates Table without Auth Context |
170 | DH-15936: Fix mac+bash3 bug in locally-run installer code |
169 | DH-15273: Add DnD Groovy worker support for loading other groovy scripts |
168 | DH-15531: Update performanceOverview Messages to say "Core+" and "Legacy" |
167 | DH-15932: Change JsFigure.getErrors to a property |
166 | DH-10076, DH-14452: ensure resources are released for removed table locations, fix logging error |
165 | DH-15712: Fix Console Creator crash when clearing the heap size input |
164 | Merge updates from 1.20230511.387
|
163 | Javadoc correction. |
162 | Merge updates from 1.20230511.386
|
161 | DH-15873: Mac only has -f flag for rm, not --force |
160 | DH-15919: Remove Old AccessController and other Deprecated APIs |
159 | DH-15738: Fix Java 8 compilation |
158 | DH-15738: Allow restricting WorkerKinds by ACL group |
157 | Merge updates from 1.20230511.374
|
156 | DH-15770: Refactor WebClientData query to use per-user controller connections. |
155 | DH-15895: Float column statistics should have correct stddev, random test data should follow specified bounds |
154 | DH-15813: Add fsGroup to k8s worker pod security context |
153 | DH-15858: Vite config for local SSL |
152 | DH-15813: Easier in-worker DIS configuration in k8s deployments |
151 | DH-15686: Updated DH packages to 0.52.0 DH-15686: Wired up theming and aligned small loading spinners |
150 | DH-15840: Fixed another double notify in CART. Fixed CART resource leaks on close |
149 | DH-15899: Convert Rest of .jsx files to .tsx in main |
148 | DH-15847: Web support for CSV Import query type |
147 | Revert .145 |
146 | DH-15857: ControllerHashtableClient should clear its hashmap when connection is lost and notify listeners |
145 | DH-15686: Update Grizzly to Community 0.51.0 DH-15686: Wired up theming and aligned small loading spinners |
144 | Merge updates from 1.20230511.361
|
143 | DH-15852: Fix DBAclServiceProviderTest |
142 | DH-15849: Fix unit test failure from .120 merge. |
141 | DH-14961: Allow legacy client to double-subscribe. |
140 | DH-15698: clearly prohibit user tables in intraday truncate/delete operations |
138 | Fix broken Javadoc. |
137 | DH-15783: improve table data service log messages |
136 | Merge updates from 1.20230511.356
|
135 | DH-15837: Status dashboard shouldn't double-subscribe to persistent queries |
134 | DH-15786: Add a required test configuration property |
133 | DH-15693: Demote onResolved warning to info. |
132 | Merge updates from 1.20230511.355
|
131 | DH-15675: Remove Controller and Console from DnD shadow jar DH-14961: Separate Controller Client into a gRPC base and expose that separately |
130 | DH-15786: Prep work for simpler creation of data ingestion workers in Kubernetes environments |
129 | DH-15764, DH-15765, DH-15766: Web support for DataValidate, DataMerge, ReplayScript |
128 | DH-15782: Fixed Controller client reauth during resubscription attempts |
127 | DH-15807: Update rc/grizzly dependencies. |
126 | DH-14057: Add status dashboard process |
125 | DH-15787: Upgrade seaborn from 0.12.2 => 0.13.0 |
124 | DH-15537: Create Python wrappers for DnD user table API |
123 | Fix Unit test failures from previous forward merge |
122 | DH-14914: Automated DnD Python venv Installation |
121 | DH-15750: Update Kubernetes Images to Ubuntu 22.04 and Python 3.10 DH-14473: Update Python to 3.10, drop 3.7 |
120 | Merge updates from 1.20230511.342
|
119 | DH-14413: Web server should use separate PQC clients per user |
118 | DH-15741: Fix db.live_table |
117 | DH-14837: Improve int tests |
116 | Merge updates from 1.20230511.335
|
115 | DH-14837: Move static method outside of inner class |
114 | DH-14837: Add DnD centrally appended user table writing |
113 | DH-15478, DH-15694, DH-15669, DH-15479, DH-15670: tailer shutdown fixes |
112 | DH-15715: Fix grizzly installer tests |
111 | Merge updates from 1.20230511.329
|
110 | Merge updates from 1.20230511.328
|
109 | DH-15619: Fix type of NamespaceSet column in catalog table |
108 | DH-15666: Wire TypeSpecificFields and SupportsCommunity through the API |
107 | DH-15699: Add developer notes on shadow versioning |
106 | DH-15619: Add system namespaces and JDBC drivers to Web API |
105 | DH-15633: ACL API: Allow null password, support overwrite, propagate all errors to authenticated acl editor DH-15661: ACL API: Validate all input for non-printable characters |
104 | Javadoc fix for 102 merge. |
103 | Compile fix for 102 merge. |
102 | Merge updates from 1.20230511.315
|
101 | DH-15570: ACL Editor - Trim whitespace in inputs that create data |
100 | DH-15621: Fix queries not appearing correctly if one has failed to start |
099 | Merge updates from 1.20230511.293
|
098 | DH-15259: ACL API: Should validate white spaces in column names for ColumnAcl DH-15569: ACL API: Replace SQLException for ACL operations with more appropriate Exception that is agnostic to backing store DH-15615: ACL API: Add validations for null, empty, and trim whitespace where applicable DH-15213: ACL API: Protect groupname matching user |
097 | DH-15630: TrackedFileHandleFactory Should Warn When Files are Cycling Quickly |
096 | DH-14968: Add TableOptions to DnD Database fetches for live, blink, and internal partition columns |
095 | DH-15527: Updated dh packages to ^0.48.0 DH-15527: Removed code that was moved to Community |
094 | Merge updates from 1.20230511.287
|
093 | DH-14143: Add Kubernetes control fields to Web UI |
092 | DH-15482: Add Csv Parser Formats to Web API |
091 | Merge fix. |
090 | Merge updates from 1.20230511.279
|
089 | DH-15411: Fixed interaction issues with SystemUserMapSelector |
088 | DH-15387: ensure initialization end time is set on failures. |
087 | DH-15231: Remove with JDK Installer, Build using 8 Toolchain |
086 | DH-15438: Updated vite community alias for @deephaven/icons |
085 | DH-15558: Correct Version of Grizzly DnD Client Wheel Build |
084 | DH-15518: DIS.createSimpleProcess rejects stream keys it does not handle |
083 | Merge updates from 1.20230511.267
|
082 | DH-15527: Split out common util code from ACL Editor code |
081 | DH-15471: SearchableCombobox UX improvements |
080 | DH-15452: Fix broken RunDataQualityTests |
079 | DH-12433: Support multiple config / auth servers, and increase installer security |
078 | DH-15452: Data Merge query type JS API support DH-15453: Data Validation query type API support |
077 | DH-15464: Case insensitive name checks |
076 | DH-15391: Dropdowns now scroll to selection on open |
075 | DH-15411: The system users panel is now dynamically added / removed based on server config. |
074 | Merge updates from 1.20230511.255
|
073 | DH-15402: Cleaned up unit test console errors and re-enabled skipped test |
072 | DH-15438: Updated dh packages to ^0.46.0 |
071 | DH-15411: ACL Editor: Run as system user tab |
070 | Merge updates from 1.20230511.245
|
069 | DH-14538: Filter table name selector by namespace selection |
068 | DH-15064: Display Temporary Schedule Details for InteractiveConsole Queries |
067 | Merge updates from 1.20230511.232
|
066 | DH-15437: Add START_WORKERS_AS_SYSTEM_USER to ServerConfigValues |
065 | DH-14538: Check for existing group before creating |
064 | DH-14537: Added .git-blame-ignore-revs |
063 | Merge updates from 1.20230511.227
|
062 | DH-15347: Run Prettier |
061 | DH-14538: Updated searchTextFilter to use containsIgnoreCase |
060 | DH-15347: Upgrade Jest to ^29.6.2 |
059 | DH-14847: Update data routing template files to use new features |
058 | DH-15347: Upgrade Prettier to 3.0.0 |
057 | Merge updates from 1.20230511.224
|
056 | DH-14538: Updated dh packages to ^0.45.0 |
055 | DH-14538: Table ACLs Comboboxes |
054 | Merge updates from 1.20230511.217
|
053 | DH-15349: exclude DIS with disabled tableDataPort from 'all dises' specified by dataImportServers keyword |
052 | DH-14901, DH-15350: optional ACL group for ServiceRegistry writers |
051 | DH-14538: Table ACLs Panel |
050 | Merge updates from 1.20230511.206
|
049 | DH-14538: Update Web UI to @latest (v0.44) |
048 | DH-15338: Fix schema import failure-scenario tests |
047 | DH-14538: Fixed useACLEditorAPI failing test |
046 | DH-15269: Configure CORS headers when Envoy is not setup to access ACLWriteServer |
045 | DH-9573, DH-3149, DH-3154, DH-5698: dhconfig schema handling improvements DH-9573, DH-3149: add delete and list namespaces to dhconfig DH-3154: handle same-file overlap when specifying schemas to import DH-5698: dhconfig support exporting single tables |
044 | DH-13759: Change .039 used a language feature not present in Java 8. |
043 | DH-14538: Consuming DbAclWriter host and port from ServerConfigValues |
042 | DH-15308: Convert console/client from JS to TS |
041 | DH-15317: Add DbAclWriter host and port to JS API ServerConfigValues |
040 | DH-15286: Convert querylist and querymonitor from JS to TS |
039 | minor improvements to controller_tool command line argument parsing
|
038 | DH-15092: Avoid refresh overhead in RunAndDone Queries (DnD) |
037 | DH-15179: Fix up info not appearing for queries in Safe Mode |
036 | DH-15233: Fix Web UI after issue with JS to TS conversion |
035 | DH-15233: Convert main/tabs from JS to TS |
034 | Merge updates from 1.20230511.180
|
033 | DH-15238: Add Catalog Table to WebClientData Query |
032 | DH-14836: Add DnD partitioned user table writing |
031 | DH-14738: ACL Editor - error handling DH-15172: Web UI: Enabled ACL Users Tab |
030 | DH-14738: ACL editor refresh button |
029 | DH-14738: ACL Editor - Group trash action |
028 | Merge updates from 1.20230511.163
|
027 | Merge updates from 1.20230511.157
|
026 | Merge updates from 1.20230511.151
|
025 | DH-14738: ACL Editor - Group Assignment |
024 | Fix merge issue |
023 | Merge updates from 1.20230511.136
|
022 | Merge updates from 1.20230511.115
|
021 | DH-14738: ACL Editor User and Group Lists |
020 | DH-14738: Windowed list state |
019 | DH-14849: Fix DnD build |
018 | DH-14849: Add DnD audit event logging for user table writing and deleting |
017 | Fix bad merge. |
016 | Merge updates from 1.20230511.103
|
015 | DH-14996: Drop JDK13 From Grizzly Builds |
014 | Merge updates from 1.20230511.091
|
013 | DH-14738: ACL Editor hooks + utils |
012 | Merge updates from 1.20230511.077
|
011 | DH-14950: Add Unit Test Demonstrating Constant Column Conflict Behavior |
010 | DH-14829: Add copy button to PQ exceptions summary tab |
009 | Merge updates from 1.20230511.060
|
008 | Merge updates from 1.20230511.054
|
007 | Merge updates from 1.20230511.039
|
006 | DH-14714: Add DnD unpartitioned user table writing |
005 | Merge updates from 1.20230511.022
|
004 | Merge updates from 1.20230511.015
|
003 | Merge updates from 1.20230511.008
|
002 | Add DnD libSource for new release |
001 | Initial release creation from 1.20230512 |
New API to format the log-format suffix of internal partitions
A new builder method IntradayLoggerBuilder#setSuffixInternalPartitionWithLogFormat(String) has been added that lets callers provide a single-argument String.format pattern. The formatted log-format value is appended to the internal-partition name. This overloads the existing IntradayLoggerBuilder#setSuffixInternalPartitionWithLogFormat(boolean), which, when true, appends the suffix using the default %d pattern.
Example: for internal partition ABC and log-format version 4:
- For setSuffixInternalPartitionWithLogFormat(true), the actual partition used would be ABC-4
- For setSuffixInternalPartitionWithLogFormat("%02d"), the actual partition used would be ABC-04
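The suffixing behavior described above can be mirrored in a short sketch. This is a hypothetical Python stand-in for the Java builder (Python's %-formatting behaves like the single-argument String.format pattern used here), not the actual implementation:

```python
def suffixed_partition(partition: str, log_format_version: int, pattern: str = "%d") -> str:
    """Mirror the documented suffixing: append the formatted log-format
    version to the internal-partition name with a '-' separator."""
    return "%s-%s" % (partition, pattern % log_format_version)

# For internal partition "ABC" and log-format version 4:
print(suffixed_partition("ABC", 4))          # ABC-4  (default %d pattern)
print(suffixed_partition("ABC", 4, "%02d"))  # ABC-04 (custom pattern)
```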
Removed Jupyter Notebook integration
Server-side Jupyter Notebook integration has been removed from Deephaven. The Legacy worker Jupyter Notebook integration is no longer supported and will not be updated. Beginning with Deephaven 1.20231218, use the Deephaven Core+ Python client from Jupyter notebooks instead.
Optional limit on appendCentral table size (client side)
Database.appendCentral(...) sends the given table to the Log Aggregator Service (LAS) as an atomic update. A large enough table can cause the LAS to run out of memory.
You can now set the maximum table size (number of rows) that an appendCentral call will accept by setting the optional property LogAggregatorService.transactionLimit.rows.
This check only considers the number of rows; it does not take the number of columns into account. Zero or unset means no limit is enforced.
To make updates larger than the configured limit, either break the table into smaller pieces, or use the RemoteTableAppender directly to make a non-atomic update:
```groovy
// Append the table non-atomically with RemoteTableAppender
rta = new com.illumon.iris.db.util.logging.RemoteTableAppender(log, table.getDefinition().getWritable(), namespace, tableName, columnPartitionValue)
rta.append(table)
rta.flush()
rta.close()
```
See also Optional server-side limit on appendCentral table size for related server-side changes.
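The "smaller pieces" workaround can be sketched as computing row ranges that each stay under the configured limit. This is illustrative only; the resulting ranges would then drive one appendCentral call per slice:

```python
def chunk_ranges(total_rows: int, limit: int):
    """Yield (start, end) half-open row ranges, each at most `limit` rows,
    covering a table of `total_rows` rows."""
    for start in range(0, total_rows, limit):
        yield start, min(start + limit, total_rows)

# A 10-row table with a 4-row transaction limit splits into three slices:
print(list(chunk_ranges(10, 4)))  # [(0, 4), (4, 8), (8, 10)]
```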
Optional limit on appendCentral table size (server side)
Database.appendCentral(...) and RemoteTableAppender.appendAtomic(...) send the given table to the Log Aggregator Service (LAS) as an atomic update. A large enough table can cause the LAS to run out of memory.
You can now set the maximum table size (number of rows or number of bytes) that the Log Aggregator will accept by setting the optional properties LogAggregatorService.transactionLimit.rows or LogAggregatorService.transactionLimit.bytes. Zero or unset means no limit is enforced. When the Log Aggregator accumulates more rows or bytes in a transaction than the configured limit, it aborts the transaction, releases the accumulated memory, and returns an error to the client.
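For example, a cap of one million rows and 1 GiB per transaction might look like this in a property file (the values here are illustrative, not defaults):

```properties
LogAggregatorService.transactionLimit.rows=1000000
LogAggregatorService.transactionLimit.bytes=1073741824
```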
To make updates larger than the configured limit, either break the table into smaller pieces, or use the RemoteTableAppender directly to make a non-atomic update:
```groovy
// Append the table non-atomically with RemoteTableAppender
rta = new com.illumon.iris.db.util.logging.RemoteTableAppender(log, table.getDefinition().getWritable(), namespace, tableName, columnPartitionValue)
rta.append(table)
rta.flush()
rta.close()
```
See also Optional client-side limit on appendCentral table size for related client-side changes.
Python 3.8 is the oldest supported Python version
Although Python 3.8 has already reached end of life, it is the newest version of Python built and tested on some versions of Deephaven.
As of Bard version 1.20211129.426, Python 3.8 is the only Python version built, and iris-defaults.prop changes the default from Python 3.6 to 3.8.
If you still have virtual environments set up with Python 3.6 or 3.7, you should replace them with Python 3.8 venvs. To use newer versions of Python, upgrade to a newer version of Deephaven.
For legacy systems, you can change the default back to Python 3.6 by updating your iris-environment.prop to set the various jpy.* props to the values found in iris-defaults.prop, inside the jpy.env=python36 stanza:
```properties
# Legacy python3.6 locations:
jpy.programName=/db/VEnvs/python36/bin/python3.6
jpy.pythonLib=/usr/lib64/libpython3.6m.so.1.0
jpy.jpyLib=/db/VEnvs/python36/lib/python3.6/site-packages/jpy.cpython-36m-x86_64-linux-gnu.so
jpy.jdlLib=/db/VEnvs/python36/lib/python3.6/site-packages/jdl.cpython-36m-x86_64-linux-gnu.so
```
The new iris-defaults.prop python props are now:
```properties
# New iris-defaults.prop python3.8 locations:
jpy.programName=/db/VEnvs/python38/bin/python3.8
jpy.pythonLib=/usr/lib/libpython3.8.so
jpy.jpyLib=/db/VEnvs/python38/lib/python3.8/site-packages/jpy.cpython-38-x86_64-linux-gnu.so
jpy.jdlLib=/db/VEnvs/python38/lib/python3.8/site-packages/jdl.cpython-38-x86_64-linux-gnu.so
```
Changes to Barrage subscriptions in Core+ Python workers
The methods subscribe and snapshotTable inside deephaven_enterprise.remote_table have been changed to return a Python deephaven.table.Table object instead of a Java io.deephaven.engine.table.Table object. This allows users to use the Python methods update_view, rename_columns, etc. as expected without wrapping the returned table.
Existing Python code that manually wrapped the table or directly called the wrapped Java methods must be updated.
Example of previous behavior:
from deephaven_enterprise import remote_table as rt
table = rt.in_local_cluster(query_name="SubscribePQ", table_name="my_table").snapshot()
table = table.updateView("NewCol = random()")
Example of new behavior:
from deephaven_enterprise import remote_table as rt
table = rt.in_local_cluster(query_name="SubscribePQ", table_name="my_table").snapshot()
table = table.update_view("NewCol = random()")
Vermilion+ Core+ updated to 0.35.2
Vermilion+ 1.20231218.440 includes version 0.35.2 of the Deephaven Core engine. This is the same version that ships with Grizzly 1.20240517.189, giving customers one Core engine version of overlap between major Deephaven Enterprise releases. Although the Core engine functionality is the same in 0.35.2, the Grizzly Core+ worker has several enhancements that are not available in the Vermilion+ Core+ worker. This change also updates gRPC to 1.61.0.
For details on the Core changes, see the following release notes:
Changes to vector support for Core+ user tables
Both the Legacy and Core engines have special database types to represent arrays of values. The Legacy engine uses the DbArray class, while the Core system uses the Vector class. While these implementations represent identical data, they pose challenges for interoperability between workers running different engines.
When a user table is written, the schema is inferred from the source table. Previously, Vector types were recorded verbatim in the schema. This change explicitly encodes Vector types as their base Java array types, as follows:
Vector Class | Converted Schema Type |
---|---|
ByteVector | byte[] |
CharVector | char[] |
ShortVector | short[] |
IntVector | int[] |
LongVector | long[] |
FloatVector | float[] |
DoubleVector | double[] |
Vector<T> | T[] |
This makes it possible for the Legacy engine to read User tables written by the Core engine. Note that no conversion is made when the Legacy engine writes DbArray types because the Core+ engine already supports those types.
If you want your User table array columns to be Vector types, use an .update() or .updateView() clause to wrap the native arrays.
staticUserTable = db.historicalTable("MyNamespace", "MyTable")
.update("Longs = (io.deephaven.vector.LongVector)io.deephaven.vector.VectorFactory.Long.vectorWrap(Longs)")
Option to close Tailer-DIS connections early, while continuing to monitor files
A new property, log.tailer.defaultIdlePauseTime, is available to customize the behavior of the Tailer. It is similar to log.tailer.defaultIdleTime, but it allows the Tailer to close connections early while continuing to monitor files.
When the idle time specified by log.tailer.defaultIdleTime has passed without any changes to a monitored file, the Tailer closes the corresponding connection to the DIS and does not process any further changes to the file. The default idle time must therefore be at least as long as the default file rollover interval plus some buffer.
The new property enables a new behavior: when the time specified by log.tailer.defaultIdlePauseTime has passed without any changes to a monitored file, the Tailer closes the corresponding connection to the DIS but continues to monitor the file for changes. If a change is detected, the Tailer reopens the connection and processes the changes.
For certain usage patterns, this reduces resource consumption or reclaims resources more quickly.
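As a sketch, the two idle settings could be combined like this (the values are illustrative assumptions, not shipped defaults; units follow your installation's existing log.tailer.* settings):

```properties
# Illustrative values only. defaultIdleTime must exceed the file rollover
# interval plus a buffer; after this, the file is no longer processed.
log.tailer.defaultIdleTime=7200
# Close the DIS connection sooner than that, but keep watching the file and
# reopen the connection if the file changes again.
log.tailer.defaultIdlePauseTime=600
```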
Helm Chart Tolerations, Node Selectors and Affinity
You can now add tolerations, node selection, and affinity attributes to pods
created by the Deephaven Helm chart. By default, no tolerations, selectors or
affinity are added. To add tolerations to all created deployments, modify your
values.yaml
file to include a tolerations block, which is then copied into
each pod. For example:
tolerations:
- key: "foo"
operator: "Exists"
effect: "NoSchedule"
- key: "bar"
value: "baz"
operator: "Equal"
effect: "NoSchedule"
This adds the following tolerations to each pod (in addition to the default tolerations provided by the Kubernetes system):
Tolerations: bar=baz:NoSchedule
foo:NoSchedule op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Similarly, you can add a nodeSelector or affinity block:
nodeSelector:
  key1: value1
  key2: value2
affinity:
nodeAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
preference:
matchExpressions:
- key: label
operator: In
values:
- value1
This results in pods containing node selectors like:
Node-Selectors: key1=value1
key2=value2
And affinity as follows:
affinity:
nodeAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- preference:
matchExpressions:
- key: label
operator: In
values:
- value1
weight: 1
Ability to disable password authentication in front-end
A new property, authentication.client.disablePasswordAuth=true, may be used to remove the username/password authentication option from the Swing front-end. The property has no effect if there are no other login options available.
This property does not disable username/password authentication at the server level (see Disabling password authentication).
Allow config to override ServiceRegistry hostname
The hostname which a Data Import Server (DIS) registers with the service registry may now be defined in the host tag within the DIS's routing endpoint of the routing configuration, or using the new ServiceRegistry.overrideHostname system property. The precedence for the service registry host is:
- The host value within the routing endpoint configuration. Prior to this change, this value was ignored.
- The ServiceRegistry.overrideHostname property.
- On Kubernetes, the worker's service's hostname.
- On bare metal, the result of the Java InetAddress.getLocalHost().getHostName() function.
Optional lenient IOJobImpl to avoid write queue overflow
New behavior is available to avoid write queue overflow errors in the TDCP process. When a write queue overflow condition is detected, the process can be configured to delay briefly, giving the queue a chance to drain.
The following properties govern the feature:
- IOJobImpl.lenientWriteQueue
- IOJobImpl.lenientWriteQueue.retryDelay
- IOJobImpl.lenientWriteQueue.maxDelay
Set IOJobImpl.lenientWriteQueue=true to enable this behavior. By default, the writer waits up to IOJobImpl.lenientWriteQueue.maxDelay=60_000 ms in increments of IOJobImpl.lenientWriteQueue.retryDelay=100 ms.
This should address the following fatal error in the TDCP process:
ERROR - job:1424444133/RemoteTableDataService/10.128.1.75:37440->10.128.1.75:22015 write queue overflow: r=true, w=true, p=false, s=false, u=false, h=0, rcap=69632, rbyt=0, rmax=4259840, wbyt=315407, wspc=1048832, wbuf=4097, wmax=1048576, fc=0, allowFlush=true
Option to default all user tables to Parquet
Set the configuration property db.LegacyDirectUserTableStorageFormat=Parquet to default all direct user table operations, such as db.addTable, to the Parquet storage format. If the property is not set, the default is DeephavenV1.
Deephaven processes log their heap usage
The db_dis, web_api_service, log_aggregator_service, iris_controller, db_tdcp, and configuration_server processes now periodically log their heap usage.
PersistentQueryController.log.current:[2024-05-10T15:00:32.365219-0400] - INFO - Jvm Heap: 3,972,537,856 Free / 4,291,624,960 Total (4,291,624,960 Max)
PersistentQueryController.log.current:[2024-05-10T15:01:32.365404-0400] - INFO - Jvm Heap: 3,972,310,192 Free / 4,291,624,960 Total (4,291,624,960 Max)
The logging interval can be configured using the property RuntimeMemory.logIntervalMillis
. The default is one minute.
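The log lines above can be consumed programmatically; for instance, used heap is Total minus Free. A minimal sketch (the parsing regex is illustrative, not a Deephaven API):

```python
import re

# Sample "Jvm Heap" line, as shown above.
line = ("PersistentQueryController.log.current:[2024-05-10T15:00:32.365219-0400] - INFO - "
        "Jvm Heap: 3,972,537,856 Free / 4,291,624,960 Total (4,291,624,960 Max)")

# Pull out the comma-separated byte counts.
m = re.search(r"Jvm Heap: ([\d,]+) Free / ([\d,]+) Total \(([\d,]+) Max\)", line)
free, total, max_heap = (int(g.replace(",", "")) for g in m.groups())

used = total - free
print(used)  # 319087104 bytes currently in use
```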
Disabling Password Authentication
To disable password authentication within the authentication server, set the
configuration property authentication.passwordsEnabled
to false
. When the
property is set to false, the authentication server rejects all password logins
and you must use SAML or private key authentication to access Deephaven.
Note that even if the UI presents a password prompt, the authentication backend rejects all passwords.
Kubernetes Heap Overhead Parameters
When running Deephaven installations in Kubernetes, the originally implemented JVM overhead properties don't prevent some workers from being killed with out-of-memory exceptions.
- Adding the BinaryStoreWriterV2.allocateDirect=false JVM parameter reduces direct memory usage, which is not counted towards dispatcher heap usage and can therefore trigger Kubernetes out-of-memory failures.
- Adding the -Xms JVM parameter allocates all requested heap at worker creation time, reducing the likelihood of workers failing after startup due to later memory requests.
- Adding the -XX:+AlwaysPreTouch JVM parameter to workers ensures that all worker heap is touched during startup, avoiding later page faulting.
The following properties are being added to iris-environment.prop
for new installations. Deephaven strongly suggests adding them manually to existing installations.
RemoteProcessingRequestProfile.Xms.G1 GC=$RequestedHeap
RemoteQueryDispatcher.JVMParameters=-XX:+AlwaysPreTouch
BinaryStoreWriterV2.allocateDirect=false
In addition, the property RemoteQueryDispatcher.memoryOverheadMB=500
is being updated in iris-defaults.prop
, and this will automatically be picked up when the Kubernetes installation is upgraded.
Dispatcher Memory Reservation
The Remote Query Dispatcher (either db_query_server or db_merge_server) has a configurable amount of heap that can be dispatched to workers, controlled by the RemoteQueryDispatcher.maxTotalQueryProcessorHeapMB property. Setting this property requires accounting for the other processes that may be running on the machine. If set too high, workers may fail to allocate memory after dispatch, or the kernel OOM killer may terminate processes. If set too low, the machine may be underutilized.
As an additional safety check, the Remote Query Dispatcher can query the /proc/meminfo
file for available heap. If a user requests more heap than the MemAvailable
field indicates can be allocated to a new process, then the remote query dispatcher can reject scheduling the worker. By default, this new functionality is disabled.
There are two new properties that control this behavior:
- RemoteQueryDispatcher.adminReservedAvailableMemoryMB: for users that are members of RemoteQueryDispatcher.adminGroups
- RemoteQueryDispatcher.reservedAvailableMemoryMB: for all other users
When set to -1, the default, the additional check is disabled. When set to a non-negative value, the dispatcher subtracts the property's value from the available memory and verifies that the worker heap is less than the result before creating the worker.
You can examine the current status of these properties using the /config endpoint if RemoteQueryDispatcher.webserver.enabled is set to true. For example, navigate to https://query-host.example.com:8084/config. The available memory is displayed along with the property values as an HTML table.
This property does not guarantee that workers or other processes are not terminated by the OOM killer. Running workers and processes may not have allocated their maximum heap size, and therefore can use system memory beyond what is available at dispatch time.
ILLUMON_JAVA is deprecated. Use DH_JAVA instead.
In the past, specifying which version of Java to use with Deephaven was done with the ILLUMON_JAVA variable, and it was applied inconsistently.
In this release, you can set DH_JAVA=/path/to/java_to_use/bin/java in your cluster.cnf to tell all Deephaven processes where to find the correct Java executable, regardless of your PATH.
DH_JAVA works correctly whether you point to a Java executable or a Java installation directory (like JAVA_HOME): both DH_JAVA=/path/to/java_to_use and DH_JAVA=/path/to/java_to_use/bin/java operate identically.
If different machines in your cluster have java executables located in different locations,
it is your responsibility to set DH_JAVA correctly in /etc/sysconfig/deephaven/cluster.cnf
on each machine, or (preferably) to use a symlink so you have a consistent DH_JAVA location on all machines.
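For example, a cluster.cnf entry might look like this (the JVM path below is an illustrative assumption, not a required location):

```shell
# /etc/sysconfig/deephaven/cluster.cnf
# Either form works; both point all Deephaven processes at the same JVM.
DH_JAVA=/usr/lib/jvm/java-17-openjdk/bin/java
# DH_JAVA=/usr/lib/jvm/java-17-openjdk
```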
Core+ Controller Python Imports
From Core+ Python workers, you may now import Python modules from repositories stored in the controller. To evaluate a single Python file:
import deephaven_enterprise.controller_import
deephaven_enterprise.controller_import.exec_script("script_to_execute.py")
To import a script as a module, you must establish a meta-import with a module prefix for the controller. The following example uses the default value of "controller" to load a module of the form "package1/package2.py" or "package1/package2/__init__.py":
import deephaven_enterprise.controller_import
deephaven_enterprise.controller_import.meta_import()
import controller.package1.package2
Refreshing Local Script Repositories
The Persistent Query Controller defines a set of script repositories that can be used from Persistent Queries or Code Studios. The repositories may be configured to use a remote Git repository or just a path on the local file system. The controller scans the repository on startup for the list of scripts that are available. Previously, only Git repositories could have updates enabled (once per minute); and local repositories would never be rescanned.
You can now set the property PersistentQueryController.scriptUpdateEnabled to true to enable script updates. If this property is not set, then the old PersistentQueryController.useLocalGit property is used. The old property has an inverse sense: PersistentQueryController.useLocalGit=true stops updates, and PersistentQueryController.useLocalGit=false permits updates.
To mark a repository as local, the "uri" parameter must be set to empty. For example, if the repository is referred to as "irisrepo" in the iris.scripts.repos property, then to mark it as local you would include a property like the following in your iris-environment.prop file:
iris.scripts.repo.irisrepo.uri=
Fixing etcd ACLs that broke after upgrading to URL encodings
Note that the following is only applicable to etcd ACLs.
In 1.20231218.116 and 1.20231218.132, Deephaven began URL encoding ACL keys to prevent special characters like '/' in keys from corrupting the ACL database. Although not all special characters corrupted the database, all of them are encoded, causing the unencoded database to be incompatible with the new version. A common occurrence of this pattern is the "@" character in usernames.
These ACL entries can be fixed using the EtcdAclEncodingTool.
First, back up your etcd database by reading our backup and restore instructions.
To rewrite these ACLs with proper encodings, run the following command as irisadmin:
sudo -u irisadmin /usr/illumon/latest/bin/iris_exec com.illumon.iris.db.v2.permissions.EtcdAclEncodingTool
To see what changes would occur without actually modifying the ACLs, run:
sudo -u irisadmin /usr/illumon/latest/bin/iris_exec com.illumon.iris.db.v2.permissions.EtcdAclEncodingTool -a --dry-run
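As an illustration of why encoding is needed, standard URL encoding turns problem characters like "@" and "/" into percent escapes that cannot collide with the ACL key structure (the exact scheme used by EtcdAclEncodingTool is internal; this sketch just shows the general idea):

```python
from urllib.parse import quote

# safe="" forces "/" to be encoded as well, so neither character can be
# confused with a key separator.
print(quote("user@example.com", safe=""))    # user%40example.com
print(quote("group/with/slashes", safe=""))  # group%2Fwith%2Fslashes
```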
Setting JVM JIT Compiler Options for Workers
The ability to set the maximum number of allowed JVM JIT compiler threads through the -XX:CICompilerCount
JVM option has been added to JVM profiles using properties of the form RemoteProcessingRequestProfile.JitCompilerCount
. See the remote processing profiles documentation for further information.
Upgrade etcd to 3.5.12
In past releases, we recommended upgrading etcd to 3.5.5.
It was later discovered that 3.5.5 has a known bug which can break your etcd cluster if you perform an etcdctl password reset.
As such, when upgrading etcd, you should prefer the Deephaven-tested 3.5.12 point release, which is the new default as of version 1.20231218.190
.
All newly created systems will have 3.5.12 installed, but for existing systems, you must unpack new etcd binaries yourself.
You can find manual etcd installation instructions in the Reducing Root to Zero guide.
Configurable gRPC Retries
The configuration service now supports using a gRPC service configuration file to configure retries, and one is provided by default for the system.
{
"methodConfig": [
{
"name": [
{
"service": "io.deephaven.proto.config.grpc.ConfigApi"
},
{
"service": "io.deephaven.proto.registry.grpc.RegistryApi"
},
{
"service": "io.deephaven.proto.routing.grpc.RoutingApi"
},
{
"service": "io.deephaven.proto.schema.grpc.SchemaApi"
},
{
"service": "io.deephaven.proto.processregistry.grpc.ProcessRegistryApi"
},
{
"service": "io.deephaven.proto.unified.grpc.UnifiedApi"
}
],
"retryPolicy": {
"maxAttempts": 60,
"initialBackoff": "0.5s",
"maxBackoff": "2s",
"backoffMultiplier": 2,
"retryableStatusCodes": [
"UNAVAILABLE"
]
},
"waitForReady": true,
"timeout": "120s"
}
]
}
methodConfig has one or more entries. Each entry has a name section with one or more service/method sections that filter whether the retryPolicy section applies. If the method is empty or not present, then the entry applies to all methods of the service. If service is empty, then method must also be empty, and the entry defines the default policy.
The retryPolicy section defines how a failing gRPC call is retried. In this example, gRPC retries for just over 1 minute while the status code is UNAVAILABLE (e.g., the service is down). Note this applies only if the server is up but the individual RPCs are being failed as UNAVAILABLE by the server itself. If the server is down, the status returned is also UNAVAILABLE, but the retryPolicy defined here for the method does not apply; gRPC manages reconnection retries for a channel separately and independently, as described in https://github.com/grpc/grpc/blob/master/doc/connection-backoff.md.
There is no way to configure the parameters for reconnection; see https://github.com/grpc/grpc-java/issues/9353
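To see how the example policy's numbers bound the retry window, the maximum cumulative backoff can be computed directly (a sketch; gRPC randomizes each actual delay, so the observed total is typically shorter than this maximum):

```python
# Mirror the retryPolicy above: 60 attempts, 0.5s initial backoff,
# 2x multiplier, each retry delay capped at 2s.
max_attempts = 60
backoff, max_backoff, multiplier = 0.5, 2.0, 2.0

total = 0.0
for _ in range(max_attempts - 1):  # 59 delays between 60 attempts
    total += min(backoff, max_backoff)
    backoff *= multiplier

print(total)  # 115.5 seconds maximum, just within the 120s timeout
```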
If the service config file specifies waitForReady, then an RPC executed when the channel is not ready (the server is down) does not fail right away but waits for the channel to be connected. Combined with a timeout definition, this makes the RPC call hold on for as long as the timeout, giving the reconnection policy a chance to get the channel to ready.
For Deephaven processes, the service config can be customized by (a) copying configuration_service_config.json to /etc/sysconfig/illumon.d/resources and modifying it there, or (b) renaming it and setting the configuration.server.service.config.json property.
Note that the property needs to be set as a launching JVM argument because this is used in the gRPC connection to get the initial properties.
Note: The relevant service names are:
io.deephaven.proto.routing.grpc.RoutingApi
io.deephaven.proto.config.grpc.ConfigApi
io.deephaven.proto.registry.grpc.RegistryApi
io.deephaven.proto.schema.grpc.SchemaApi
io.deephaven.proto.unified.grpc.UnifiedApi
Add Core+ Calendar support and allow Java ZoneId strings in Legacy Calendars
Core+ workers can use the Calendars.resourcePath property to load customer-provided business calendars from disk. To use calendars in Core+ workers, any custom calendars on your resource path must be updated to use a standard TimeZone value.
Legacy workers also support using ZoneId strings instead of DBTimeZone values.
Dynamic management of Data Import Server configurations
Creating a new Data Import Server configuration and integrating it into the Deephaven system requires several steps, including required adjustments to the data routing configuration. This final step can now be performed with a few simple commands, and no longer requires editing the data routing configuration file.
dhconfig dis
The dhconfig command has a new action, dis, which supports the import, add, export, list, delete, and validate subcommands. The commands themselves provide help, and more information can be found in the dhconfig documentation.
dhconfig dis import
Import one or more configurations from one or more files. For example:
/usr/illumon/latest/bin/dhconfig dis import /path/to/kafka.yml
kafka.yml
kafka:
name: kafka
endpoint:
serviceRegistry: registry
tailerPortDisabled: 'false'
tableDataPortDisabled: 'false'
claims:
- {namespace: Kafka}
storage: private
dhconfig dis add
Define and import a single configuration on the command line. For example (equivalent to the import example above):
/usr/illumon/latest/bin/dhconfig dis add --name kafka --claim Kafka
dhconfig dis export
Export one or more configurations to one or more files. These files are suitable for the import command. For example, to export all configured Data Import Servers:
/usr/illumon/latest/bin/dhconfig dis export --file /tmp/import_servers.yml
dhconfig dis list
List all configured Data Import Servers. For example:
/usr/illumon/latest/bin/dhconfig dis list
Data import server configurations:
kafka
kafka3
dhconfig dis delete
Delete one or more configurations. For example:
/usr/illumon/latest/bin/dhconfig dis delete kafka --force
dhconfig dis validate
Validate one or more configurations. This can validate proposed changes before committing them with the import command. This process verifies that the configuration as a whole will be valid after applying the new changes.
Caveats
"Data routing configuration" comprises the "main" configuration file (managed with dhconfig routing
) and additional DIS configurations. The main routing configuration may contain DIS configurations in the dataImportServers
section. These two sources of DIS configurations are managed separately and are not permitted to contain duplicates. If you want to manage an existing DIS configuration with the new commands, you must remove it from the main routing configuration.
This functionality will only be useful for querying data if the routing configuration includes "all data import servers" using the dataImportServers
keyword. This is usually a source under the db_tdcp
table data service:
db_tdcp:
host: *localhost
port: *default-tableDataCacheProxyPort
sources:
- name: dataImportServers
A DIS configuration requires storage
. The special value private
indicates that the server will supply its own storage location. Any other value must be present in the storage
section of the routing configuration.
Update jgit SshSessionFactory to a more modern/supported version
For our Git integration, we have been using the org.eclipse.jgit package. GitHub discontinued support for SHA-1 RSA SSH keys, but jgit's SSH implementation (com.jcraft:jsch) does not support rsa-sha2 signatures and will not be updated. To enable stronger SSH keys and provide GitHub compatibility, we have configured jgit to use an external SSH executable by setting the GIT_SSH environment variable. The /usr/bin/ssh executable must be present for Git updates.
Restartable Controller
If the iris_controller
process restarts quickly enough, Core+ workers that were already initialized and running
normally by the time the controller restarted continue running without interruption. Legacy workers still terminate on controller restart.
- The duration that workers can survive without the controller is defined by the property PersistentQueryController.etcdPresenceLeaseTtlSeconds, which defaults to 60 (seconds).
- Only workers that had completed initialization and were in the Running state before the controller died, and that should still be running according to their query configuration at the time of the controller restart, survive.
If the iris_controller is stopped normally (e.g., via monit stop
or a regular UNIX TERM signal), the value of the property PersistentQueryController.stopWorkersOnShutdown
determines the desired behavior for workers.
- When set to
true
, all controller-managed workers are stopped alongside the controller. This is consistent with the traditional behavior. - When set to
false
(the new default), workers do not stop alongside the controller, and have the time defined in the propertyPersistentQueryController.etcdPresenceLeaseTtlSeconds
(defaults to 60 seconds) as a grace period where they wait for the controller to come back.
If the controller crashes (i.e., the iris_controller process is stopped unexpectedly by an exception, a machine reboot, or a UNIX KILL signal), then workers are not proactively stopped even if PersistentQueryController.stopWorkersOnShutdown is true. In this case, the dispatcher terminates those workers after the PersistentQueryController.etcdPresenceLeaseTtlSeconds timeout.
Note: irrespective of the value of the PersistentQueryController.stopWorkersOnShutdown property, if the dispatcher associated with a worker is shut down, the worker stops.
Renamed Swing Launcher Archives
The downloadable swing launcher has been renamed as follows:
DeephavenLauncherSetup_123.exe
is now deephaven-launcher-123.exe
DeephavenLauncher_123.tar
is now deephaven-launcher-123.tgz
Reliable Barrage table connections
We have added a new library to provide reliable Barrage subscriptions within a Deephaven Core+ cluster. The new tables monitor the state of the source query and gracefully handle disconnection and reconnections without user intervention. This can be used to create reliable meshes of Core+ workers that are fault tolerant to the loss of other queries.
When using ResolveTools, PQ URLs (pq://MyQuery/scope/MyTable?columns=MyFirstColumn,SomeOtherColumn
) use these new reliable tables.
To use this library, see the following examples:
import io.deephaven.enterprise.remote.RemoteTableBuilder
import io.deephaven.enterprise.remote.SubscriptionOptions
// Subscribe to the columns `MyFirstColumn` and `SomeOtherColumn` of the table `MyTable` from the query `MyQuery`
table = RemoteTableBuilder.forLocalCluster()
.queryName("MyQuery")
.tableName("MyTable")
.subscribe(SubscriptionOptions.builder()
.addIncludedColumns("MyFirstColumn", "SomeOtherColumn").build())
from deephaven_enterprise import remote_table as rt
# Subscribe to the columns `MyFirstColumn` and `SomeOtherColumn` of the table `MyTable` from the query `MyQuery`
table = rt.in_local_cluster(query_name="MyQuery", table_name="MyTable") \
.subscribe(included_columns=["MyFirstColumn", "SomeOtherColumn"])
Connecting to remote clusters
It is also possible to connect to queries on a different Deephaven cluster.
import io.deephaven.enterprise.remote.RemoteTableBuilder
table = RemoteTableBuilder.forRemoteCluster("https://other-server.mycompany.com:8000/iris/connection.json")
.password("user", "password")
.queryName("MyQuery")
.tableName("MyTable")
.subscribe(SubscriptionOptions.builder()
.addIncludedColumns("MyFirstColumn", "SomeOtherColumn").build())
from deephaven_enterprise import remote_table as rt
# Subscribe to the columns `MyFirstColumn` and `SomeOtherColumn` of the table `MyTable` from the query `MyQuery`
table = rt.for_remote_cluster("https://other-server.mycompany.com:8000/iris/connection.json") \
.password("username", "password") \
.query_name("MyQuery") \
.table_name("MyTable") \
.subscribe(included_columns=["MyFirstColumn", "SomeOtherColumn"])
ACLs for Update Core+ Performance Tables
Preexisting installs must manually add new ACLs for the new DbInternal tables.
First, create a text file (e.g. /tmp/new-acls.txt) with the following contents:
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("EffectiveUser"), new UsernameFilterGenerator("AuthenticatedUser"))' -group allusers -namespace DbInternal -table ProcessMetricsLogCoreV2 -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("EffectiveUser"), new UsernameFilterGenerator("AuthenticatedUser"))' -group allusers -namespace DbInternal -table ServerStateLogCoreV2 -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))' -group allusers -namespace DbInternal -table UpdatePerformanceLogCoreV2 -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))' -group allusers -namespace DbInternal -table QueryOperationPerformanceLogCoreV2 -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))' -group allusers -namespace DbInternal -table QueryPerformanceLogCoreV2 -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))' -group allusers -namespace DbInternal -table UpdatePerformanceLogCoreV2Index -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))' -group allusers -namespace DbInternal -table QueryOperationPerformanceLogCoreV2Index -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))' -group allusers -namespace DbInternal -table QueryPerformanceLogCoreV2Index -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("EffectiveUser"), new UsernameFilterGenerator("AuthenticatedUser"))' -group allusers -namespace DbInternal -table ServerStateLogCoreV2Index -overwrite_existing
exit
Then, run the following to add the new ACLs into the system:
sudo -u irisadmin /usr/illumon/latest/bin/iris iris_db_user_mod --file /tmp/new-acls.txt
Alternatively, the ACLs can be added manually one by one in the ACL Editor:
allusers | DbInternal | ServerStateLogCoreV2 | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("EffectiveUser"), new UsernameFilterGenerator("AuthenticatedUser"))
allusers | DbInternal | ProcessMetricsLogCoreV2 | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("EffectiveUser"), new UsernameFilterGenerator("AuthenticatedUser"))
allusers | DbInternal | UpdatePerformanceLogCoreV2 | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))
allusers | DbInternal | QueryOperationPerformanceLogCoreV2 | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))
allusers | DbInternal | QueryPerformanceLogCoreV2 | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))
allusers | DbInternal | ServerStateLogCoreV2Index | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("EffectiveUser"), new UsernameFilterGenerator("AuthenticatedUser"))
allusers | DbInternal | UpdatePerformanceLogCoreV2Index | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))
allusers | DbInternal | QueryOperationPerformanceLogCoreV2Index | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))
allusers | DbInternal | QueryPerformanceLogCoreV2Index | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))
Worker name format change
Worker names are no longer assigned in an ascending manner beginning from "worker_1". Instead, worker names begin with "worker_" followed by a prefix of the process info ID. Note that the worker name is not guaranteed to be unique; the process info ID is the only way to reliably find a specific worker within logs.
The "request ID" field has been removed from the RemoteProcessingRequest. The client now assigns the process info ID, so you can use it to search logs on both the client and the server.
Custom Setter Support
For CSV imports using the new Deephaven Community CSV parser, CustomSetters are now supported.
The changes are backward compatible, so existing CustomSetter implementations continue to work as-is. However, it is recommended to use the new custom setter interface for new imports and to consider transitioning existing imports to the new interface.
The new interface provides two key benefits:
- It avoids creating `CSVRecord` objects.
- Column data types are retained (column values extracted from a `CSVRecord` were always strings).

Additionally, passed-in `constant` values may be accessed directly via `getConstantColumnValue()`.
CustomSetter example (Legacy)
Below is a simple example Custom Setter implementation using the legacy approach. The next section details how to convert this to the new interface.
The example builds a `FullName` column from the `FirstName` and `LastName` columns, optionally including the `NamePrefix` when the `NamePrefix` constant column is present.
Legacy Schema for Full Name Column
<Table name="ConstCustomSetter" namespace="Test" storageType="NestedPartitionedOnDisk">
<Partitions keyFormula="__PARTITION_AUTOBALANCE_SINGLE__" />
<Column name="Partition" dataType="String" columnType="Partitioning" />
<Column name="FullName" dataType="String" />
<Column name="FirstName" dataType="String" />
<Column name="LastName" dataType="String" />
<ImportSource name="IrisCSV" type="CSV" arrayDelimiter="," >
<ImportColumn name="NamePrefix" sourceType="CONSTANT" />
<ImportColumn name="FullName" sourceName="FirstName" class="com.illumon.iris.importers.CsvConstColumnSetterExample" />
<ImportColumn name="FirstName" />
<ImportColumn name="LastName" />
</ImportSource>
</Table>
Legacy Implementation CsvConstColumnSetterExample
package com.illumon.iris.importers;
import com.fishlib.io.logger.Logger;
import com.illumon.iris.binarystore.RowSetter;
import org.apache.commons.csv.CSVRecord;
import java.io.IOException;
/**
* Example Custom Setter implementation for Full Name Column using Legacy approach
*/
public class CsvConstColumnSetterExample extends CsvFieldWriter {
private final RowSetter setter;
private final ImporterColumnDefinition column;
/**
* Constructor using the format that is required for custom CsvFieldWriters
*
* @param log The passed in logger
* @param strict The strict parameter as chosen for the Import
* @param column The import column definition for the CustomSetter column
* @param setter The RowSetter to be used to populate the Column value for the Row
* @param delimiter The array delimiter used in the import
*/
public CsvConstColumnSetterExample(final Logger log, final boolean strict, final ImporterColumnDefinition column, final RowSetter setter,
final String delimiter) {
super(log, column.getName(), delimiter);
this.setter = setter;
this.column = column;
}
@SuppressWarnings("unchecked")
@Override
public void processField(final CSVRecord record) throws IOException {
setter.set(getConstantColumnValue("NamePrefix") + " " + record.get("FirstName") + " " + record.get("LastName"));
}
}
New Interface implementation for `FullName`
Below are the schema and implementation class of the CustomSetter for the same `FullName` column using the new interface.
Schema
<Table name="NewFormatConstCustomSetter" namespace="Test" storageType="NestedPartitionedOnDisk">
<Partitions keyFormula="__PARTITION_AUTOBALANCE_SINGLE__" />
<Column name="Partition" dataType="String" columnType="Partitioning" />
<Column name="FullName" dataType="String" />
<Column name="FirstName" dataType="String" />
<Column name="LastName" dataType="String" />
<ImportSource name="IrisCSV" type="CSV" arrayDelimiter="," >
<ImportColumn name="FullName" class="com.illumon.iris.importers.CsvDhcConstColumnSetterExample" />
</ImportSource>
</Table>
Implementation Class
As shown below, the key differences are:
- The base class is `BaseCsvFieldWriter`.
- The method to implement is `void processRow(@NotNull final Map<String, CustomSetterValue<?>> columnNameToValueMap)`.
- In the `columnNameToValueMap`, as the name suggests, each key is the `ColumnName`, and it points to a `CustomSetterValue` object, which holds the appropriate column value by `type`.
- `CustomSetterValue` implements the `RowSetter<?>` interface but also supports getters, allowing values to be saved and retrieved by their type.
- In addition, a passed-in `constant` can be retrieved using `getConstantColumnValue()`, though the legacy form `getConstantColumnValue("NamePrefix")`, where `NamePrefix` is the `ImportColumn` name, is also supported.
- XmlImports supports passing in an `importProperties` map, which allows for multiple `Constant` columns; in that case, it is preferred to use `getConstantColumnValue(ColumnName)`.
package com.illumon.iris.importers;
import com.fishlib.io.logger.Logger;
import com.illumon.iris.binarystore.RowSetter;
import org.jetbrains.annotations.NotNull;
import java.util.Map;
/**
* Example Custom Setter implementation for Full Name Column using New Format
*/
public class CsvDhcConstColumnSetterExample extends BaseCsvFieldWriter {
private final RowSetter<String> setter;
/**
* Constructor required for custom BaseFieldWriter
*
* @param log The passed in log
* @param strict The value of strict flag chosen for import
* @param column The import column definition for the CustomSetter column
* @param setter The RowSetter that will be used to set the property
* @param delimiter The array delimiter used
*/
public CsvDhcConstColumnSetterExample(final Logger log,
final boolean strict,
final ImporterColumnDefinition column,
final RowSetter<?> setter,
final String delimiter) {
super(log, column.getName(), delimiter);
//noinspection unchecked
this.setter = (RowSetter<String>) setter;
}
@Override
public void processRow(@NotNull final Map<String, Object> columnNameToValueMap) {
final String firstName = (String) columnNameToValueMap.get("FirstName");
final String lastName = (String) columnNameToValueMap.get("LastName");
final String fullName = getConstantColumnValue() + " " + firstName + " " + lastName;
setter.set(fullName);
}
}
IrisLogCreator constructor changes
The constructors in the IrisLogCreator
class have been changed. Any uses of these constructors should add a new boolean parameter to the call, which is used to determine whether or not to create an audit event logger. The old constructors have been deprecated but are still available and do not create audit event loggers.
Automatically Provisioned Python venv Will Only Use Binary Dependencies
All pip installs performed as part of the automatic upgrade of Python virtual environments will now pass the --only-binary=:all:
flag, which will prevent pip from ever attempting to build dependencies on a customer machine.
As part of this change, we automatically upgrade pip and setuptools in all virtual environments, and have upgraded a number of dependencies for which pip refused to use prebuilt binary dependencies:
For all virtual environments:
- `dill==0.3.1.1` is now `dill==0.3.3`
- `wrapt==1.11.2` is now `wrapt==1.13.2`

For jupyter virtual environments:
- `backcall==0.1.0` is now `backcall==0.2.0`
- `tornado==6.0.3` is now `tornado==6.1`
Product Installation File Rename
The Deephaven tar / RPM installation files have been renamed to include the Java version they are built for, and to better replace legacy names with modern product names.
The Enterprise installer tar now has a `-jdkN` classifier. For example, `illumon-db-1.20231212.123.tar.gz` is now `deephaven-enterprise-1.20231212.123-jdk17.tar.gz`.
The Enterprise RPM now has the JDK major version as the deephaven-enterprise minor version. For example, `illumon-db-1.20231212.123-1-1.rpm` is now `deephaven-enterprise-1.20231212.123-17-1.rpm`.
The Core+ tar file has been gzipped and renamed with a `-jdkN` classifier and a `.tgz` file extension. For example, `io.deephaven.enterprise.dnd-0.32.0-1.20231212.123.tar` is now `deephaven-coreplus-0.32.0-1.20231212.123-jdk17.tgz`.
Note that ONLY the filenames and RPM package name have changed. All paths on the filesystem still reflect legacy locations, except for a single renamed file: `/usr/illumon/dnd/latest/bin/io.deephaven.enterprise.dnd` has been renamed to `/usr/illumon/dnd/latest/bin/deephaven-coreplus`.
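The renaming convention above can be expressed mechanically. The sketch below is illustrative only, not a Deephaven tool; the `jdk17` classifier and version strings come from the examples above:

```python
import re

def new_installer_name(old_name: str, jdk_major: int = 17) -> str:
    """Map a legacy installer filename to its new name, per the convention above."""
    # Enterprise tar: illumon-db-<ver>.tar.gz -> deephaven-enterprise-<ver>-jdkN.tar.gz
    m = re.fullmatch(r"illumon-db-([\d.]+)\.tar\.gz", old_name)
    if m:
        return f"deephaven-enterprise-{m.group(1)}-jdk{jdk_major}.tar.gz"
    # Enterprise rpm: the JDK major version replaces the "-1-" minor version field
    m = re.fullmatch(r"illumon-db-([\d.]+)-1-(\d+)\.rpm", old_name)
    if m:
        return f"deephaven-enterprise-{m.group(1)}-{jdk_major}-{m.group(2)}.rpm"
    # Core+ tar: gzipped, renamed, and given a -jdkN classifier and .tgz extension
    m = re.fullmatch(r"io\.deephaven\.enterprise\.dnd-([\d.]+(?:-[\d.]+)?)\.tar", old_name)
    if m:
        return f"deephaven-coreplus-{m.group(1)}-jdk{jdk_major}.tgz"
    raise ValueError(f"unrecognized legacy installer name: {old_name}")
```

Running it against the examples above reproduces the new names exactly.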
MergeDataBuilder refactored
The hierarchy of Java classes that manage merge operations
had become unwieldy. In this release, we refactored the internals
to consistently use the same MergeDataBuilder
interface that has
been the preferred mechanism for user scripts.
We expanded that Builder class to include settings needed to support
operations previously accessed via special object classes and overloaded
methods. One noteworthy example is the deleted MergeFromTable
class, the functionality
of which is now accessed directly using the sourceTable
method of the builder.
The merge
methods of the Data Merging Classes taking large numbers of parameters are
gone. Scripts using them can be converted to use the builder pattern straightforwardly.
The details of the builder API have changed in several ways. All scripts and programs that directly initiate a merge operation will likely require attention. Merge Persistent Queries and the tools for interacting with them are not affected, but any Persistent Query that initiates a merge using script syntax will require attention.
Please refer to the Merge API Reference section of the Deephaven Enterprise user guide for full details and examples that illustrate the necessary changes.
Below are the "before" and "after" versions of the changed portion of the example script.
Before
new MergeFromTable().merge(
log, // automatically available in a console worker
com.fishlib.util.process.ProcessEnvironment.getGlobalFatalErrorReporter(),
namespace,
tableName,
date,
threadPoolSize,
maxConcurrentColumns,
lowHeapUsage,
force,
allowEmptyInput,
sortColumnFormula,
db, // automatically available in a console worker
progress,
null, // storageFormat not needed
null, // parquetCodecName not needed
null, // syncMode not needed
lateCleanup,
sourceTable)
After
MergeParameters params = MergeParameters.builder(db, namespace, tableName)
.partitionColumnValue(date)
.threadPoolSize(threadPoolSize)
.maxConcurrentColumns(maxConcurrentColumns)
.lowHeapUsage(lowHeapUsage)
.force(force)
.allowEmptyInput(allowEmptyInput)
.sortColumnFormula(sortColumnFormula)
.lateCleanup(lateCleanup)
    .sourceTable(sourceTable)
.build()
MergeData.of(params).run(com.fishlib.util.process.ProcessEnvironment.getGlobalFatalErrorReporter(), progress)
Switch to NFS v4 for Kubernetes RWX Persistent Volumes
NFS v3 Persistent Volume connections do not support locking. This manifests most obviously when attempting to work with user tables in Deephaven on Kubernetes. By default, user table activities will wait indefinitely to obtain a lock to read or write data. This can be bypassed by setting -DOnDiskDatabase.useTableLockFile=false
; this work-around was provided by DH-15640.
This change (DH-15830) switches Deephaven Kubernetes RWX Persistent Volume definitions to use NFS v4 instead, which includes lock management as part of the NFS protocol itself. In order for this change to be made, the NFS server must be reconfigured to export the RWX paths relative to a shared root path (fsid=0), but the existing PVs must use the same path to connect, since PV paths are immutable.
There are two options to reconfigure the NFS server:
1. The Deephaven Kubernetes install wrapper script (`dh_helm`) can be used for the upgrade; it automatically checks for an NFS Pod that was deployed as part of Deephaven Kubernetes setup, and runs an upgrade script to reconfigure it if it is not already exporting an NFS v4 path.
2. In cases where the NFS server is not a Deephaven-deployed Pod, or where you want to make other changes to the NFS configuration, you can manually run the `upgrade-nfs-minimal.sh` script against the NFS server. It is important to set the environment variable `SETUP_NFS_EXPORTS` to `y` before running the script.

To manually run the script against an NFS Pod:
1. Run `kubectl get pods` to get the name of your NFS server Pod and confirm that it is running.
2. Copy the setup script to the NFS pod by running this command, using your specific NFS pod name: `kubectl cp setupTools/upgrade-nfs-minimal.sh <nfs-server-name>:/upgrade-nfs-minimal.sh`
3. Run this command to execute that script, once again substituting the name of your NFS Pod: `kubectl exec <nfs-server-name> -- bash -c "export SETUP_NFS_EXPORTS=y && chmod 755 /upgrade-nfs-minimal.sh && /upgrade-nfs-minimal.sh"`
The upgrade script:
- replaces `/etc/exports` and backs up the original file to `/etc/exports_<epoch_timestamp>`. The new file will have only one entry, which exports the `/exports` directory with `fsid=0`.
- adds an `exports` sub-directory under `/exports` and moves the `dhsystem` directory there, so clients will still find their NFS paths under `/exports/dhsystem` when connecting to the `fsid=0` "root".
The existing PVs spec sections are updated with:
mountOptions:
- hard
- nfsvers=4.1
After upgrading to a version of Deephaven that includes this change (DH-15830), you should remove the -DOnDiskDatabase.useTableLockFile=false
work-around, so normal file locking behavior can be used when working with user tables.
Requiring ACLs on all exported objects
When exporting objects from a Persistent Query, there are now two modes of operation, controlled by the property `PersistentQuery.openSharingDefault`.
In either mode, when an ACL is applied to any object (e.g., tables or plots) within the query, objects without an ACL are only visible to the query owner and admins (owners and admins never have ACLs applied).
When a viewer connects:
- If `PersistentQuery.openSharingDefault` is set to `true`, persistent queries that are shared without specifying table ACLs allow all objects to be exported to viewers of the query without any additional filters supplied. This is the existing Deephaven behavior that makes it simple to share PQ work product with others.
- If `PersistentQuery.openSharingDefault` is set to `false`, persistent queries that are shared without specifying table ACLs do not permit objects without an ACL applied to be exported to viewers. The owner of the persistent query must supply ACLs for each object that is to be exported.
Setting this property to false
makes it less convenient to share queries, but
reduces the risk of accidentally sharing data that the query writer did not
intend. To enable this new behavior, you should update your
iris-environment.prop
property file.
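For example, to require explicit ACLs on every exported object, the property named above could be set in `iris-environment.prop` (a sketch of the single line to add):

```
PersistentQuery.openSharingDefault=false
```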
Tailer configuration changes to isolate user actions
The tailer allocates resources for each connection to a Data Import Server for each destination (namespace, table name, internal partition, and column partition). System table characteristics are predictable and fairly consistent, and can be used to configure the tailer with appropriate memory.
User tables are controlled by system users, so their characteristics are subject to unpredictable variations. It is possible for a user to cause the tailer to consume large amounts of tailer resources, which can impact System data processing or crash the process.
This change adds more properties for configuration, and adds constraints on User table processing separate from System tables.
User table isolation
Resources for User table locations are taken from a new resource pool. The buffers are smaller by default, and the pool
has a constrained size. This puts an upper limit on memory consumption when users flood the system with changed
locations, which can happen with closeAndDeleteCentral
or when back filling data.
The resources for this pool are pre-allocated at startup.
The pool size should be large enough to handle expected concurrent user table writes.
Property | Default | Description |
---|---|---|
DataContent.userPoolCapacity | 128 | The maximum number of user table locations that will be processed concurrently. If more locations are created at the same time, the processing will be serialized. |
DataContent.producerBufferSize.user | 256 * 1024 | The size in bytes of the buffers used to read data for User table locations. |
DataContent.disableUserPool | false | If true, user table locations are processed using the same resources as system tables. |
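A deployment that expects heavier concurrent user-table writes could raise the pool size via the properties above. The values below are illustrative only, not recommendations; note the buffer size is given in plain bytes (512 KiB):

```
DataContent.userPoolCapacity=256
DataContent.producerBufferSize.user=524288
```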
Tailer/DIS configuration options
The following properties configure the memory consumption of the Tailer and Data Import Server processes.
Property | Default | Description |
---|---|---|
DataContent.producersUseDirectBuffers | true | If true, the Tailer will use direct memory for its data buffers. |
DataContent.consumersUseDirectBuffers | true | Existing property. If true, the Data Import Server will use direct memory for its data buffers. |
BinaryStoreMaxEntrySize | 1024 * 1024 | Existing property. Sets the maximum size in bytes for a single data row in a binary log file. |
DataContent.producerBufferSize | 2 * BinaryStoreMaxEntrySize + 2 * Integer.BYTES | The size in bytes of buffers the tailer will allocate. |
DataContent.consumerBufferSize | 2 * producerBufferSize | The size in bytes of buffers the Data Import Server will allocate. This must be large enough for a producer buffer plus a full binary row. |
Revert to previous behavior
To disable the new behavior in the tailer, set the following property:
DataContent.disableUserPool = true
Added block flag to more dh_monit actions
This flag blocks scripting for the start, stop, and restart actions until the actions are completed. If any action other than start, stop, restart, up, or down is passed with the blocking flag, an error is generated. No other behaviors of the script have been changed.
The following options have been added:
/usr/illumon/latest/bin/dh_monit [ start | stop | restart ] [ process name | all ] [ -b | --block ]
These work as before:
/usr/illumon/latest/bin/dh_monit [ up | down ] [ -b | --block ]
Core Worker Notebook and Controller Groovy Script Imports
Users can now import Groovy scripts from their notebooks and from the controller Git integration in Community Core workers.
To qualify for such importing, Groovy scripts must:
- Belong to a package.
- Match their package name to their file location. For example, scripts belonging to package `com.example.compute` must be found in `com/example/compute`.
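The package-to-path rule above can be checked mechanically. The following sketch is illustrative only and not part of Deephaven; it derives the expected directory for a script's `package` declaration:

```python
import re

def expected_directory(groovy_source: str) -> str:
    """Return the relative directory where a Groovy script with this
    package declaration must live, per the rule above."""
    m = re.search(r"^\s*package\s+([\w.]+)", groovy_source, re.MULTILINE)
    if not m:
        raise ValueError("script does not belong to a package")
    # com.example.compute -> com/example/compute
    return m.group(1).replace(".", "/")
```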
If a script exists with the same name as a notebook and in the controller Git integration, the notebook is prioritized as it is easier for users to modify if needed.
Importing Notebook Groovy Scripts
Below is a Groovy script notebook at test/notebook/NotebookImport.groovy
:
package test.notebook
return "Notebook"
String notebookMethod() {
return "Notebook method"
}
static String notebookStaticMethod() {
return "Notebook static method"
}
class NotebookClass {
final String value = "Notebook class method"
String getValue() {
return value
}
}
static String notebookStaticMethodUsingClass() {
new NotebookClass().getValue()
}
Below is an example of importing and using the Groovy script from a user's notebooks. Note that, per standard Groovy rules, you can run the script's top-level statements via `main()` or `run()`, or use its defined methods like a typical Java class:
import test.notebook.NotebookImport
NotebookImport.main()
println new NotebookImport().run()
println new NotebookImport().notebookMethod()
println NotebookImport.notebookStaticMethod()
println NotebookImport.notebookStaticMethodUsingClass()
You can also use these classes and methods within Deephaven formulas:
import test.notebook.NotebookImport
import io.deephaven.engine.context.ExecutionContext
import io.deephaven.engine.util.TableTools
ExecutionContext.getContext().getQueryLibrary().importClass(NotebookImport.class)
testTable = TableTools.emptyTable(1).updateView(
"Test1 = new NotebookImport().run()",
"Test2 = new NotebookImport().notebookMethod()",
"Test3 = NotebookImport.notebookStaticMethod()",
"Test4 = NotebookImport.notebookStaticMethodUsingClass()"
)
Importing Controller Git Integration Groovy Scripts
Importing scripts from the controller Git integration works the same way, except that script package names don't necessarily need to match every directory. For example, if the following property is set:
iris.scripts.repo.<repo>.paths=module/groovy
Then the package name for the Groovy script at `module/groovy/com/example/compute` must be `com.example.compute`, not `module.groovy.com.example.compute`.
Logging System Tables from Core+
Core+ workers can now log Table objects to a System table.
Many options are available using the Builder class returned by:
import io.deephaven.enterprise.database.SystemTableLogger
opts = SystemTableLogger.newOptionsBuilder().currentDateColumnPartition(true).build()
The only required option is the column partition to write to. You may specify a fixed column partition or use the current date (the date at the time each row is written; the data is not introspected for a Timestamp). The default behavior is to write via the Log Aggregator Service, but you can also write via binary logs. No code generation or listener versioning is performed; you must write columns in the format that the listener expects. Complete options are available in the Javadoc.
After creating an Options structure, you can then log the current table:
SystemTableLogger.logTable(db, "Namespace", "Tablename", tableToLog, opts)
When logging incrementally, a Closeable is returned. You must retain this object
to ensure liveness. Call close()
to stop logging and release resources.
lh=SystemTableLogger.logTableIncremental(db, "Namespace", "Tablename", tableToLog, opts)
The Python version does not take an options object; instead, it uses named arguments. If you specify `None` for the column partition, the current date is used.
system_table_logger.log_table("Namespace", "Tablename", table_to_log, columnPartition=None)
Similarly, if you call `log_table_incremental` from Python, you must `close` the returned object (or use it as a context manager in a `with` statement).
Row-by-row logging is not yet supported in Core+ workers. Existing binary loggers cannot be executed in the context of a Core+ worker because they reference classes that are shadowed (renamed). If row-level logging is required, then you must use `io.deephaven.shadow.enterprise.com.illumon.iris.binarystore.BinaryStoreWriterV2` directly.
Only primitive types, Strings and Instants are supported. Complex data types cannot yet be logged.
Restrict available WorkerKinds with ACL groups
Use the new configuration parameter WorkerKind.<worker kind>.allowedGroups
to set the ACLs
for individual WorkerKinds. Groups are separated by commas. For example,
WorkerKind.DeephavenEnterprise.allowedGroups=iris-superusers,a_group
restricts the users that can create Enterprise (Legacy) workers to members of iris-superusers
and a_group
.
If not configured, the default is allusers
.
Core+ support for multiple partitioning columns
Deephaven Core+ workers now support reading tables stored in the Apache Hive layout. Hive is a multi-level partitioned format where each directory is a Key=Value pair.
For example:
| Market -- A Directory for the Namespace
| -- EquityTrade -- A directory for the Table
| | -- Region=US -- A Partition directory for the Region `US`
| | | -- Class=Equities -- A Partition directory for the Class `Equities`
| | | | -- Symbol=UVXY -- A Partition directory for the Symbol `UVXY`
| | | | | -- table.parquet -- A Parquet file containing data,
| | | | -- Symbol=VXX -- A Partition directory for the Symbol `VXX`
| | | | | -- table.size -- A set of files for a Deephaven format table
| | | | | -- TradeSize.dat
| | | | | -- ...
| | -- Region=Asia
| | | -- Class=Special
| | | | -- Symbol=ABCD
| | | | | -- table.parquet
| | | | -- Symbol=EFGH
| | | | | -- table.parquet
See the extended layouts documentation for more details on how to use this feature.
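Each directory level in the layout above encodes one partitioning column as a `Key=Value` pair. A minimal, generic sketch (not Deephaven code) of extracting those partition values from a path:

```python
def hive_partitions(path: str) -> dict:
    """Extract Key=Value partition directories from a Hive-style path."""
    parts = {}
    for segment in path.split("/"):
        if "=" in segment:
            key, _, value = segment.partition("=")
            parts[key] = value  # e.g. "Region=US" -> {"Region": "US"}
    return parts
```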
Core+ support for writing tables in Deephaven format
Deephaven Core+ workers now support writing tables in Deephaven format using the `io.deephaven.enterprise.table.EnterpriseTableTools` class in Groovy workers and the `deephaven_enterprise.table_tools` Python module.
For example, to read a table from disk:
import io.deephaven.enterprise.table.EnterpriseTableTools
t = EnterpriseTableTools.readTable("/path/to/the/table")
from deephaven_enterprise import table_tools
t = table_tools.read_table("/path/to/the/table")
And to write a table:
import io.deephaven.enterprise.table.EnterpriseTableTools
EnterpriseTableTools.writeTable(qq, new File("/path/to/the/table"))
from deephaven_enterprise import table_tools
table_tools.write_table(table=myTable, path="/path/to/the/table")
See the Core+ documentation for more details on how to use this feature.
Core+ C++ client and derived clients support additional CURL options
When configuring a Session Manager with a URL for downloading a connection.json
file, the C++ client and derived clients (like Python ticking or R) use
libcurl to download the file from the supplied URL.
SSL connections in this context can fail for multiple reasons, and it is customary to support options that adjust SSL behavior and/or enable verbose output to aid debugging. We now support the following environment variables in the clients:
- `CURL_CA_BUNDLE`: like the variable of the same name for the `curl(1)` command line utility. Points to a file containing a CA certificate chain to use instead of the system default.
- `CURL_INSECURE`: if set to any non-empty value, disables validation of the server certificate.
- `CURL_VERBOSE`: if set to any non-empty value, enables debug output.
New Worker Labels
The Deephaven Enterprise system supports two kinds of workers.
The first uses the legacy Enterprise engine that predates the release of Deephaven Community Core. These workers are now labeled "Legacy" in the Code Studio and Persistent Query "Engine" field. Previously, these workers were labeled "Enterprise".
The second kind uses the Deephaven Community Core engine with Enterprise extensions. These workers are now labeled "Core+" in the Code Studio and Persistent Query "Engine" field. Previously, these workers were labeled "Community".
Although these changes may create short-term confusion for current users, Deephaven believes they better represent the function of these workers and will quickly become familiar. Both Legacy and Core+ workers exist within the Deephaven Enterprise system. The Core+ workers additionally include significant Enterprise functionality that is not found within the Deephaven Community Core product.
To avoid breaking user code, we have not yet changed any package or class names that include either "Community" or "DnD" (an older abbreviation which stood for "Deephaven Community in Deephaven Enterprise").
Logger overhead
The default Logger creates a fixed pool of buffers. Certain processes are fine with a smaller size.
The following properties can be used to override the default configuration of the standard process Logger. Every log message uses an entry from the entry pool, and at least one buffer from the buffer pool. Additional buffers are taken from the buffer pool as needed. Both pools will expand as needed, so the values below dictate the minimum memory that will be consumed.
Property | Default | Description |
---|---|---|
IrisLogCreator.initialBufferSize | 1024 | The initial size of each data buffer. Buffers may be reallocated to larger sizes as required. |
IrisLogCreator.bufferPoolCapacity | 1024 | The starting (and minimum) number of buffers in the buffer pool. |
IrisLogCreator.entryPoolCapacity | 32768 | The initial (and minimum) size of the LogEntry pool. |
IrisLogCreator.timeZone | America/New_York | The timezone used in binary log file names. |
The default value for IrisLogCreator.entryPoolCapacity has been reduced to 16384 for Tailer processes.
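A lightweight process could shrink both pools via its property file. The values below are illustrative only, not recommendations:

```
IrisLogCreator.bufferPoolCapacity=256
IrisLogCreator.entryPoolCapacity=4096
```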
`generate-iris-keys` and `generate-iris-rsa` no longer overwrite output
The `generate-iris-keys` and `generate-iris-rsa` scripts use OpenSSL to generate public and private keys. If you have an existing key file, the scripts now exit with a failure, and you must remove the existing file before regenerating the key.
Additional Kubernetes Worker Creation Parameters
The Query Dispatcher now supports changing more Kubernetes parameters when creating a worker, which include:
- Persistent Volume Claim will mount an existing claim in your worker pod if it exists and is not already mounted elsewhere. If no claim exists, a new `PersistentVolumeClaim` will be created. If using a storage class that allows for dynamic volume creation, then the `PersistentVolume` will also be created. Note that creating a new claim is subject to a validation check that requires a configured validator which will allow it. See the link below for more.
- Storage Class is the storage class to be used for a new persistent volume claim. Acceptable values vary depending on your Kubernetes provider; the default is inserted into `iris-endpoints.prop` by the Helm chart's `global.storageClass` value. If using an existing claim, this has no effect.
- Storage Size is the size of the volume to be requested if creating a new `PersistentVolumeClaim`, in bytes as documented here. If using an existing claim, this has no effect.
- Mount Path denotes where in the pod the `PersistentVolumeClaim` will be mounted.
These are in addition to the existing parameters described in a previous release note introducing Kubernetes worker creation parameters and validators.
Kubernetes Helm Chart Changes
Some settings have changed or have been explicitly provided in place of the default value for your Kubernetes platform provider. For example, `terminationGracePeriodSeconds` is set to a default of 10 in the management-shell. To avoid possible errors, delete the management-shell pod prior to running `helm upgrade` if you have an older version already running. The pod can be deleted with this command: `kubectl -n <your-namespace> delete pod management-shell --grace-period 1`.
Note that any files you may have copied or created locally on that pod will be removed. However, in the course of normal operations such files would not be present.
Controller client for Community Workers
This change reorganizes dependencies so that Community workers do not require the shadowed Controller and Console modules.
It also provides an `io.deephaven.enterprise.dnd.controller.PersistentQueryControllerClient` interface and implementation for DnD workers to use. Both Enterprise and DnD implementations now use the same shared underlying gRPC implementation.
Relocated Classes
The `com.illumon.iris.controller.HeaderPopupProvider` class was moved to the Gui module as `com.illumon.iris.gui.table.HeaderPopupProvider`.
Dependency updates
Deephaven has updated several dependencies to more recent versions. If you are using these dependencies in your scripts or other code running in the worker, then your code may need updates.
Dependency | Old | New |
---|---|---|
commons-codec | 1.15 | 1.16.0 |
commons-compress | 1.21 | 1.24.0 |
commons-io | 2.11.0 | 2.14.0 |
Groovy | 3.0.17 | 3.0.19 |
Jetty | 9.4.51.v20230217 | 9.4.53.v20231009 |
jgit | 5.8.1.202007141445-r | 5.13.2.202306221912-r |
org.apache.sling.commons.json | 2.0.20 | Removed |
org.xerial.snappy:snappy-java | 1.1.8.4 | 1.1.10.5 |
snakeyaml | 2.0 | 2.2 |
Shadowed dependencies, which generally should not be used directly, have also been updated. Of note, Jackson, which must sometimes be referenced as `io.deephaven.shadow.jackson.com.fasterxml` to interface with Deephaven ingestion classes, has been updated from 2.14.2 to 2.15.2.
Of particular note is that Groovy 3.0.19 includes at least one bug fix that changes the behavior of scripts. The Java Language Specification does not permit inheriting static members from parent classes, but Groovy versions prior to 3.0.18 did. GROOVY-8164 makes the Groovy language consistent with the JLS, but existing scripts that depend on inheriting static members fail at runtime.
Status Dashboard
A status dashboard process has been added to the Deephaven installation, providing data in a format that can be read by Prometheus. Full documentation is available in the System Administration section of the Deephaven documentation.
Updated Python Version
The default Python version is now 3.10, which is updated from 3.8. Python 3.8 and 3.9 are still supported, but Python 3.7 support has been dropped.
The NumPy version used with Python 3.10 drops support for the deprecated numpy.object, numpy.bool, and numpy.int aliases. If you use these in your scripts, then you must use the corresponding built-in Python object, bool, and int types.
Of particular note, Python 3.10 does not support OpenSSL 1.0; therefore, you must install OpenSSL 1.1 to build Python 3.10. SSL support is required for the wheel package. CentOS 7 has OpenSSL 1.1 packages, which may be installed, but their default directory layout is not suitable for Python 3.10. If the installer root prepare script must build Python 3.10 on CentOS 7, then /usr/illumon/openssl11 is created with symlinks to the openssl11-devel yum package.
Kafka Offset Column Name
The default Community column name for storing offsets is KafkaOffset. The Core+ Kafka ingester assumed this name rather than using the name from the deephaven.offset.column.name consumer property.
If the default column names of KafkaOffset, KafkaPartition, and KafkaTimestamp are not in your Enterprise schema, then the ingester ignores those columns. If you change the column names for timestamp, offset, or partition, then you must also ensure that your schema contains a column of the correct type for each renamed column.
Bypassing user table lock files
When a worker tries to write or read a User table, it will first try to lock a file in /db/Users/Metadata
to avoid potential concurrency issues. If filesystem permissions are set up incorrectly, or if the underlying filesystem does not support file locking, this can cause issues.
The following property can be set to disable the use of these lock files:
OnDiskDatabase.useTableLockFile=false
Worker-to-worker table resolution configuration
Worker-to-worker table resolution now uses the Deephaven cluster's trust store by default.
In some environments, there may be an SSL-related exception when trying to resolve a table defined in one persistent query from another (see sharing tables for more). The property uri.resolver.trustall may be set to true globally in a Deephaven configuration file, or as a JVM argument in a Code Studio session (e.g., -Duri.resolver.trustall=true). This lets the query worker sourcing the table trust a certificate that would otherwise be untrusted.
Added Envoy properties to allow proper operation in IPv6 or very dynamic routing environments
The new properties envoy.DnsType and envoy.DnsFamily allow configuration of Envoy DNS behaviors for xds routes added by the Configuration server.
- envoy.DnsType configures the value to be set in dynamically added xds routes for type. The default if this property is not set is LOGICAL_DNS. If there is a scenario where DNS should be checked on each connection to an endpoint, this can be changed to STRICT_DNS. Refer to the Envoy documentation for more details about possible settings.
- envoy.DnsFamily configures the value to be set in dynamically added xds routes for dns_lookup_family. The default if this property is not set is AUTO. In environments where IPv6 is enabled, the AUTO setting may cause Envoy to resolve IPv6 addresses for Deephaven service endpoints; since these service endpoints only listen on IPv4 stacks, Envoy will return a 404 or 111 when getting "Connection refused" from the IPv6 stack. Refer to the Envoy documentation for more details about possible settings.
Since Deephaven endpoint services listen only on IPv4 addresses, and Envoy, by default, prefers IPv6 addresses, it may be necessary to modify the configuration in environments where IPv6 is enabled. To do this:
- add an entry to the iris-environment.prop properties file of envoy.DnsFamily=V4_ONLY
- edit envoy3.yaml (or whichever configuration file Envoy is using) and add dns_lookup_family: V4_ONLY to the xds_service section:

static_resources:
  clusters:
    - name: xds_service
      connect_timeout: 0.25s
      type: STRICT_DNS
      dns_lookup_family: V4_ONLY
-
import the new configuration and restart the configuration server and the Envoy process for the changes to take effect.
Modified Bessel correction formula for weighted variance
The weighted variance computation formula has been changed to match that used in the Deephaven Community engine. We now use the standard formula for "reliability weights" instead of the previous "frequency weights" interpretation. This will affect statistics based on variance such as standard deviation.
Managing Community Worker Python Packages
When starting a Deephaven Python worker, it executes in the context of a Python virtual environment (venv). This environment determines what packages are available to Python scripts. Packages that are important systemically or for multiple users should be added to the permanent virtual environment. With Community workers, the administrator may configure multiple worker kinds, each with distinct virtual environments, to enable more than one environment with a simple drop-down menu. For legacy Enterprise workers, users must manually set properties to select different virtual environments.
For experimentation, it can be convenient to install a Python package only in the context of the current worker. Community Python workers now have a deephaven_enterprise.venv module, which can be used to query the current path to the virtual environment and to install packages into the virtual environment via pip with the install method. On Kubernetes, the container images now permit dbquery and dbmerge to write to the default virtual environment of /usr/illumon/dnd/venv/latest, which has no persistent effect on the system.
On a bare-Linux installation, /usr/illumon/dnd/venv/latest must not be writable by users, to ensure isolation between query workers. To allow users to install packages into the virtual environment, the administrator may configure a worker kind to create ephemeral environments on worker startup by setting the property WorkerKind.<name>.ephemeralVenv=true. This process increases worker startup time because it requires executing pip freeze and then pip install to create a clone of the original virtual environment. With an ephemeral virtual environment, the user can use deephaven_enterprise.venv.install to add additional packages to their worker. There is currently no interface to choose ephemeral environments at runtime.
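The ephemeral-environment startup described above is essentially a venv clone. The following sketch builds the equivalent command sequence; the helper function, the frozen-requirements file name, and the paths are illustrative, not Deephaven's actual startup code.

```python
import sys

def ephemeral_venv_commands(source_venv: str, target_dir: str) -> list:
    """Build the command sequence equivalent to ephemeral venv creation:
    create a fresh venv, freeze the permanent environment's package set,
    and install that set into the clone. Sketch only; Deephaven performs
    this internally when WorkerKind.<name>.ephemeralVenv=true is set."""
    return [
        [sys.executable, "-m", "venv", target_dir],
        [source_venv + "/bin/pip", "freeze"],  # capture installed packages
        [target_dir + "/bin/pip", "install", "-r", "frozen-requirements.txt"],
    ]
```

The pip freeze/install round trip is what makes startup slower for ephemeral environments: every package in the permanent venv is re-resolved and re-installed into the clone.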
Kubernetes Image Customization
When building container images for Kubernetes, Deephaven uses a default set of requirements that provide a working
environment. However, many installations require additional packages. To facilitate adding new packages to the
default virtual environment, a customer_requirements.txt file can be added to the deephaven_python and db_query_worker_dnd subdirectories of the docker build. After installing the default packages into the worker's virtual environment, pip is called to install the packages listed in customer_requirements.txt.
If these files do not exist, the Deephaven build script creates empty placeholder customer_requirements.txt files.
With JDK Launcher Discontinued
The Windows launcher package that included a JDK has been discontinued. You must install a JDK on the client machine before running the Swing launcher. Please note that a JRE is not sufficient to run the Swing console; you must use a JDK.
The launcher is now compiled with JDK8, so even if you download it from a Deephaven instance running a later Java version, you may use it with Deephaven instances running older versions of Java. The JDK of the Swing client must still match that of the Deephaven server.
Make /db/Users mount writeable in Kubernetes
This changes both the yaml for worker templates and the permissions on the underlying volume
that is mounted as /db/Users in pods. If you are installing a new cluster, there is no action
necessary. However, if you have an existing cluster installed then run this command to change
the permissions: kubectl exec management-shell -- /usr/bin/chmod -vR 775 /db/Users
Helm improvements
A number of items have been added to the Deephaven helm chart, which allow for the following features:
- Configuration options to use an existing persistent volume claim in Deephaven, to allow for use of historical data stored elsewhere.
- Configuration options to mount existing secrets into worker pods.
- Configurable storageClass options to allow for easier deployment in various Kubernetes providers.
Required action when upgrading from an earlier release
-
Define global.storageClass: If you have installed an earlier version of Deephaven on Kubernetes then your my-values.yaml file used for the upgrade (not the Deephaven chart's values.yaml) should be updated to include a global.storageClass value, e.g.:
global:
  storageClass: "standard-rwo" # Use a value suitable for your Kubernetes provider

The value should be one that is suitable for your Kubernetes provider; standard-rwo is a GKE-specific storage class used here as an example. To see storageClass values suitable for your cluster, consult your provider's documentation. You can view your cluster's configured storage classes by running kubectl get storageclass.
-
Delete management-shell pod prior to running helm upgrade: Run kubectl delete pod management-shell to delete the pod. Note that if you happen to have any information stored on that pod, it would be removed, though in the normal course of operations that would not be the case. This pod mounts the shared volumes used elsewhere in the cluster, so changes to the storageClass values might result in an error similar to the following if the pod is not deleted when the upgrade is performed:

$ helm upgrade my-deephaven-release-name ./deephaven/ -f ./my-values.yaml --set image.tag=1.20230511.248 --debug
Error: UPGRADE FAILED: cannot patch "aclwriter-binlogs" with kind PersistentVolumeClaim: PersistentVolumeClaim "aclwriter-binlogs" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
    ... // 2 identical fields
    Resources: {Requests: {s"storage": {i: {...}, s: "2Gi", Format: "BinarySI"}}},
    VolumeName: "pvc-80a518f6-1a24-4c27-93b5-c7e9bd25d824",
-   StorageClassName: &"standard-rwo",
+   StorageClassName: &"default",
    VolumeMode: &"Filesystem",
    DataSource: nil,
    DataSourceRef: nil,
}
Ingesting Kafka Data from DnD
The Deephaven Community Kafka ingestion framework provides several advantages over the existing Enterprise framework. Notably:
- The Community Kafka ingester can read Kafka streams into memory and store them to disk.
- Key and Value specifications are disjoint, which is an improvement over the io.deephaven.kafka.ingest.ConsumerRecordToTableWriterAdapter pattern found in Enterprise.
- The Community KafkaIngester uses chunks for improved efficiency compared to row-oriented Enterprise adapters.
You can now use the Community Kafka ingester together with an in-worker ingestion server in a DnD worker. As with the existing Enterprise Kafka ingestion, you must create a schema and create a data import server within your data routing configuration. After creating the schema and DIS configuration, create an ingestion script using a Community worker.
You must create a KafkaConsumer Properties object. Persistent ingestion requires that auto commit is disabled in order to ensure exactly once delivery. The next step is creating an Options builder object for the ingestion and passing it to the KafkaTableWriter.consumeToDis function. You can retrieve the table in the same query, or from any other query according to your data routing configuration.
import io.deephaven.kafka.KafkaTools
import io.deephaven.enterprise.kafkawriter.KafkaTableWriter
final Properties props = new Properties()
props.put('bootstrap.servers', 'http://kafka-broker:9092')
props.put('schema.registry.url', 'http://kafka-broker:8081')
props.put("fetch.min.bytes", "65000")
props.put("fetch.max.wait.ms", "200")
props.put("deephaven.key.column.name", "Key")
props.put("deephaven.key.column.type", "long")
props.put("enable.auto.commit", "false")
props.put("group.id", "dis1")
final KafkaTableWriter.Options opts = new io.deephaven.enterprise.kafkawriter.KafkaTableWriter.Options()
opts.disName("KafkaCommunity")
opts.tableName("Table").namespace("Namespace").partitionValue(today())
opts.topic("demo-topic")
opts.kafkaProperties(props)
opts.keySpec(io.deephaven.kafka.KafkaTools.FROM_PROPERTIES)
opts.valueSpec(io.deephaven.kafka.KafkaTools.Consume.avroSpec("demo-value"))
KafkaTableWriter.consumeToDis(opts)
ingestedTable=db.liveTable("Namespace", "Table").where("Date=today()")
Customers can now provide their own JARs to Community in Enterprise (i.e. DnD) workers
Customers can now provide their own JARs into three locations that DnD workers can load from:
- Arbitrary locations specified by the "Extra Classpaths" field from e.g. a console or Persistent Query configuration
- A user-created location specific to a DnD Worker Kind configuration, specified by the WorkerKind.<Name>.customLib property
- A default directory found in every DnD installation, e.g. /usr/illumon/dnd/latest/custom_lib/
Data routing file checks for duplicate keys
The data routing file is a YAML file. The YAML syntax includes name:value maps, and like most maps, cannot contain duplicate keys. Data routing file validation now raises an error when duplicate map keys are detected. The prior behavior was for the duplicate keys to silently replace the value in the map.
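The new behavior can be illustrated with a naive duplicate-key scan over a flat YAML mapping. This is a sketch only: Deephaven's validation operates on the full parsed routing file, and real YAML duplicate detection must be nesting-aware.

```python
def find_duplicate_keys(yaml_text):
    """Naive duplicate-key check for a FLAT YAML mapping (sketch only;
    it ignores nesting, list items, and quoted keys)."""
    seen = set()
    dups = []
    for line in yaml_text.splitlines():
        body = line.split('#', 1)[0].rstrip()   # strip comments
        if ':' not in body or body.lstrip().startswith('-'):
            continue                            # skip non-mapping lines
        key = body.split(':', 1)[0].strip()
        if key in seen:
            dups.append(key)
        seen.add(key)
    return dups
```

Under the old behavior, a duplicate key silently won; validation like the above turns it into an explicit error instead.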
Reading Hierarchical Parquet Data
Deephaven Community workers can now read more complex Parquet formats through the db.historical_table method (or db.historicalTable from Groovy). Three new types of Parquet layouts are supported:
- metadata: A hierarchical structure where a root table_metadata.parquet file contains the metadata and paths for each partition of the table.
- kv: A hierarchical directory with key=value pairs for partitioning columns.
- flat: A directory containing one or more Parquet files that are combined into a single table.
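As an illustration, a hypothetical key=value (kv) layout with a single partitioning column might look like this on disk (the namespace, table name, and file names are invented for the example):

```
/db/Systems/ExampleNS/Extended/example_table/
  Date=2023-06-15/
    data.parquet
  Date=2023-06-16/
    data.parquet
```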
To read a Parquet table with historical_table, you must first create a schema that matches the underlying Parquet data. The Table element must have storageType="Extended" and a child ExtendedStorage element that specifies a type. The valid type values are parquet:metadata, parquet:kv, and parquet:flat, corresponding to the supported layouts.
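A schema fragment following this description might look like the following. This is a sketch: the storageType attribute and the ExtendedStorage element with its type are as documented above, while the remaining attributes and the column definitions are illustrative placeholders.

```xml
<Table name="Commodities" namespace="PQTest" storageType="Extended">
  <!-- Selects the key=value directory Parquet layout -->
  <ExtendedStorage type="parquet:kv" />
  <!-- Column definitions matching the underlying Parquet data go here -->
</Table>
```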
Legacy workers cannot read advanced Parquet layouts. If you call db.t with a table that defines Extended storage, an exception is raised.
com.illumon.iris.db.exceptions.ScriptEvaluationException: Error encountered at line 1: t=db.t("NAMESPACE", "TABLENAME")
...
caused by:
java.lang.UnsupportedOperationException: Tables with storage type Extended are only supported by Community workers.
Extended storage tables may have more than one partitioning column. The data import server can only ingest tables with a single partitioning column of type String. Attempts to tail binary files for tables that don't meet these criteria will raise an exception.
java.lang.RuntimeException: Could not create table listener
...
Caused by: com.illumon.iris.db.schema.SchemaValidationException: Tailing of schemas with multiple partitioning columns is not supported.
java.lang.RuntimeException: Could not create table listener
...
Caused by: com.illumon.iris.db.schema.SchemaValidationException: Tailing of schemas with a non-String partitioning column is not supported.
Discovering a Schema from an Existing Parquet Layout
You can read the Parquet directory using the standard community readTable function and create an Enterprise schema and table definition as follows:
import static io.deephaven.parquet.table.ParquetTools.readTable
import io.deephaven.enterprise.compatibility.TableDefinitionCompatibility
import static io.deephaven.shadow.enterprise.com.illumon.iris.db.tables.TableDefinition.STORAGETYPE_EXTENDED
result = readTable("/db/Systems/PQTest/Extended/commodities")
edef = TableDefinitionCompatibility.convertToEnterprise(result.getDefinition())
edef.setName("Commodities")
edef.setNamespace("PQTest")
edef.setStorageType(STORAGETYPE_EXTENDED)
ss=io.deephaven.shadow.enterprise.com.illumon.iris.db.schema.SchemaServiceFactory.getDefault()
ss.authenticate()
schema=io.deephaven.shadow.enterprise.com.illumon.iris.db.schema.xml.SchemaXmlFactory.getXmlSchema(edef, io.deephaven.shadow.enterprise.com.illumon.iris.db.schema.NamespaceSet.SYSTEM)
// If this is a new namespace
ss.createNamespace(io.deephaven.shadow.enterprise.com.illumon.iris.db.schema.NamespaceSet.SYSTEM, "PQTest")
// insert the ExtendedStorage type
schema.setExtendedStorageType("parquet:kv")
ss.addSchema(schema)
Read the table with:
db.historicalTable("PQTest", "Commodities")
Java Exception Logging
Deephaven logs now use the Java standard format for Exception stack traces, which includes suppressed exceptions and collapses repetitive stack trace elements, among other improvements.
ACLs for DbInternal CommunityIndex tables
Preexisting installs must manually add new ACLs for the new DbInternal tables.
First, create a text file (e.g. /tmp/new-acls.txt) with the following contents:
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("EffectiveUser"), new UsernameFilterGenerator("AuthenticatedUser"))' -group allusers -namespace DbInternal -table ServerStateLogCommunityIndex -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))' -group allusers -namespace DbInternal -table UpdatePerformanceLogCommunityIndex -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))' -group allusers -namespace DbInternal -table QueryOperationPerformanceLogCommunityIndex -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))' -group allusers -namespace DbInternal -table QueryPerformanceLogCommunityIndex -overwrite_existing
exit
Then, run the following to add the new ACLs into the system:
sudo -u irisadmin /usr/illumon/latest/bin/iris iris_db_user_mod --file /tmp/new-acls.txt
Alternatively, the ACLs can be added manually one by one in the Swing ACL Editor:
allusers | DbInternal | ServerStateLogCommunityIndex | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("EffectiveUser"), new UsernameFilterGenerator("AuthenticatedUser"))
allusers | DbInternal | UpdatePerformanceLogCommunityIndex | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))
allusers | DbInternal | QueryOperationPerformanceLogCommunityIndex | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))
allusers | DbInternal | QueryPerformanceLogCommunityIndex | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))
Seamless integration of Community panels in Deephaven Enterprise
Deephaven Enterprise now supports opening plots and tables from Community queries via the Panels menu. Community panels can be linked and filtered the same way as Enterprise panels.
Allow removal of "Help / Contact Support ..." via property
A new property, IrisConsole.contactSupportEnabled, has been added, which may be used to remove the "Help / Contact Support ..." button from the Swing front-end. By default, this property is set to true in order to preserve current behavior. Setting this to false in properties will remove the menu option.
db available via import in Community Python workers
In Community Python workers, the Database object db can now be imported into user scripts and modules directly using import statements, for example:
from deephaven_enterprise.database import db
my_table = db.live_table(namespace="MyNamespace", table_name="MyTable").where("Date=today()")
The db object is still available as a global variable for Consoles and Persistent Query scripts.
OperationUser columns added to DnD DbInternal tables
The internal performance tables for Community workers now have columns for OperationAuthenticatedUser and OperationEffectiveUser. This updates the schema for QueryPerformanceLogCommunity, QueryOperationPerformanceLogCommunity, and UpdatePerformanceLogCommunity. The operation user reflects the user that initiated an operation over the network, which is especially important for analyzing the performance of shared persistent queries. For example, filtering, sorting, or rolling up a table can require significant server resources.
No manual changes are needed. The Deephaven installer will deploy the new DbInternal schemas and the new data is ingested into separate internal partitions.
ProcessMetrics logging is now disabled by default
ProcessMetrics logging is now disabled by default in both Enterprise (DHE) and Community in Enterprise (DnD). To enable ProcessMetrics logging, set IrisLogDefaults.writeDatabaseProcessMetrics to true. If desired, you can control DnD ProcessMetrics logging separately from DHE via statsLoggingEnabled.
Kafka Version Upgrade
We have upgraded our Kafka code from version 2.4 to version 3.4.
Confluent Breaking Changes
Confluent code must be upgraded to version 7.4 to be compatible with version 3.4. https://docs.confluent.io/platform/current/installation/versions-interoperability.html
Clients using Avro or POJO for in-worker DISes must switch to the 7.4 versions of the required jars, as specified here: https://deephaven.io/enterprise/docs/importing-data/advanced/streaming/kafka/#generic-record-adapter
The following dependencies are now included in the Deephaven installation:
jackson-core-2.10.0.jar
jackson-databind-2.10.0.jar
jackson-annotations-2.10.0.jar
Users should remove these from their classpath (probably /etc/sysconfig/illumon.d/java_lib) to avoid conflicts with the included jars.
Controller Tool "Status" Option
The new --status subcommand for the persistent query controller tool generates a report to standard output with details of selected persistent queries. With --verbose, more details are included. If a query has a failure recorded and only one query is selected, the stack trace is printed after the regular report. Use the --serial option to directly select a specific query.
With --jsonOutput, a JSON block detailing the selected query states is emitted instead of the formatted report. Use --jsonFile to specify an output location other than standard output.
Possible breaking changes were introduced with this feature:
- Previously (before Silverheels), the flag options --continueAfterError, --includeTemporary, and --includeNonDisplayable required but ignored a parameter. For example, --includeTemporary=false and --continueAfterError=never were both accepted as "true" conditions. In Silverheels, the argument is still required, but only true and 1 will be accepted as true, false and 0 will be accepted as false, and anything else will be treated as a command line error.
- Details of information log entries generated by command_tool have changed. Important functionality had previously been deferred until after the starting/finished log entries for the corresponding items had been emitted. Those actions are now bracketed by the log marker entries to better inform troubleshooting.
- A warning message is emitted to the console when no queries are processed due to selection (filtering) criteria. An informational console message summarizing the filter actions has also been added.
Flight can now resolve Live, Historical and Catalog tables from the database
DnD workers now support retrieving live, historical, and catalog tables through Arrow Flight. DnD's Python client has been updated with DndSession.live_table(), DndSession.historical_table(), and DndSession.catalog_table() to support this.
For example, to fetch the static FeedOS.EquityQuoteL1 table:
from deephaven_enterprise.client.session_manager import SessionManager
connection_info = "https://my-deephaven-host.com:8000/iris/connection.json"
session_mgr: SessionManager = SessionManager(connection_info)
session_mgr.password("iris","iris")
session = session_mgr.connect_to_persistent_query("CommunityQuery")
Quotes = session.historical_table("FeedOS", "EquityQuoteL1").where("Date=`2023-06-15`")
Flight ticket structure
Database flight tickets start with the prefix d, followed by a path consisting of three parts. The first part selects the type, the second is the namespace, and the third is the table name. The available types are catalog for the catalog table, live for live tables, and hist for historical tables.
For example, d/live/Market/EquityQuote fetches the live Market.EquityQuote table. Note that the catalog version does not use a namespace or table name: d/catalog fetches the catalog table.
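A small helper illustrating the ticket layout described above. The function name is hypothetical and for illustration only; the Deephaven Python client builds these tickets for you.

```python
def database_flight_ticket(kind, namespace=None, table_name=None):
    """Build a database flight ticket path: d/<type>[/<namespace>/<table>].
    Valid kinds are "catalog", "live", and "hist"; catalog takes no
    namespace or table name. Hypothetical helper, for illustration only."""
    if kind == "catalog":
        return "d/catalog"
    if kind not in ("live", "hist"):
        raise ValueError("unknown ticket type: " + kind)
    return "d/{}/{}/{}".format(kind, namespace, table_name)
```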
Reduce default max table display size
The maximum number of rows that may be displayed in the swing front-end before the red "warning bar" is displayed is now configurable. A new default maximum has been defined as 67,108,864 (64 x 1024 x 1024). Technical limitations cause rows beyond this limit to not properly update. When necessary, the Web UI is capable of displaying much larger tables than Swing.
The previous default max may be configured with the following property:
DBTableModel.defaultMaxRows=100000000
Note that the property-defined maximum may be programmatically reduced based on technical limits.
Improved Metadata Indexer tool
The Metadata Indexer tool has been improved so that it can now validate and list table metadata indexes on disk.
The tool can be invoked using the dhctl script with the metadata command.
Deephaven now supports subplotting in the Web UI
Users can now view multiple charts subplotted in one figure using the Web UI. Create subplots using the newChart, colSpan, and rowSpan functions available on a Figure. Details are available in the Plotting Cheat Sheet.
Example Groovy code of subplots
tt = timeTable("00:00:00.01").update("X=0.01*ii", "Y=ii*ii", "S=sin(X)", "C=cos(X)", "T=tan(X)").tail(1000)
// Figure with single plot
f1 = figure().plot("Y", tt, "X", "Y").show()
// Figure with two plots, one on top of the other
f2 = figure(2, 1)
.newChart(0,0).plot("S", tt, "X", "S")
.newChart(1,0).plot("C", tt, "X", "C")
.show()
// Figure with 3 plots, one that takes up the full width and then two smaller ones
f3_c = figure(2, 2)
.newChart(0,0).plot("T", tt, "X", "T").colSpan(2)
.newChart(1,0).plot("S", tt, "X", "S")
.newChart(1,1).plot("C", tt, "X", "C")
.show()
// Figure with 3 plots, one that takes up the full height and then two smaller ones
f3_r = figure(2, 2)
.newChart(0,0).plot("T", tt, "X", "T")
.newChart(1,0).plot("S", tt, "X", "S")
.newChart(0,1).plot("C", tt, "X", "C").rowSpan(2)
.show()
// Figure with 4 plots arranged in a grid
f4 = figure(2, 2)
.newChart(0,0).plot("Y", tt, "X", "Y")
.newChart(1,0).plot("S", tt, "X", "S")
.newChart(0,1).plot("C", tt, "X", "C")
.newChart(1,1).plot("T", tt, "X", "T")
.show()
// Re-ordered operations from f4, should appear the same though
f5 = figure(2, 2)
.newChart(1,1).plot("T", tt, "X", "T")
.newChart(0,1).plot("C", tt, "X", "C")
.newChart(1,0).plot("S", tt, "X", "S")
.newChart(0,0).plot("Y", tt, "X", "Y")
.show()
Improved validation of data routing configuration can cause errors in existing configurations
This Deephaven release includes new data routing features, and additional validation checks to detect possible configuration errors. Because of the additional validation, it is possible that an existing data routing configuration that was previously valid is now illegal and will cause parsing errors when the configuration server reads it.
If this occurs, the data routing configuration must be corrected with the dhconfig tool in --etcd mode to bypass the configuration server (which fails to start when the routing configuration is invalid).
Export the configuration:
sudo -u irisadmin /usr/illumon/latest/bin/dhconfig routing export --file /tmp/routing.yml --etcd
Edit the exported file to correct errors, and import it:
sudo -u irisadmin /usr/illumon/latest/bin/dhconfig routing import --file /tmp/routing.yml --etcd
Additional details
When the data import configuration is incorrect, the configuration_server process will fail with an error like this:
Initiating shutdown due to: Uncaught exception in thread ConfigurationServer.main io.deephaven.UncheckedDeephavenException: java.util.concurrent.ExecutionException: com.illumon.iris.db.v2.routing.DataRoutingConfigurationException:
In the rare case when this happens in a previous version of Deephaven, or if the solution above doesn't work, the following direct commands can be used to correct the situation:
Export:
sudo DH_ETCD_DIR=/etc/sysconfig/illumon.d/etcd/client/datarouting-rw /usr/illumon/latest/bin/etcdctl.sh get /main/config/routing-file/file > /tmp/r.yml
Import:
sudo DH_ETCD_DIR=/etc/sysconfig/illumon.d/etcd/client/datarouting-rw /usr/illumon/latest/bin/etcdctl.sh put /main/config/routing-file/file </tmp/r.yml
Python Integral Widening
In the 1.20211129 release, the jpy module that Deephaven's Python integration depends on converted all Python integral results into a Java integer. This resulted in truncated results when values exceeded Integer.MAX_VALUE. In 1.20221001, Deephaven uses an updated jpy integration that returns values in the narrowest possible type, so results that previously were an integer could be returned as a byte or a short. Moreover, a formula may have different types for each row. This prevented casting the result into a primitive type, as boxed objects may not be cast to another primitive.
In 1.20221001.196, Python calls in a formula now widen Byte and Short results to an Integer. If the value returned exceeds Integer.MAX_VALUE, then the result is a Long. Existing formulas that would not have been truncated by conversion to an int in 1.20211129 behave as they would have in that release.
As casting from an arbitrary integral type to a primitive may be required, we have introduced a utility class com.illumon.iris.db.util.NumericCast that provides objectToByte, objectToShort, objectToInt, and objectToLong methods that will convert any Byte, Short, Integer, Long, or BigInteger into the specified type. If an overflow would occur, an exception is thrown.
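The overflow-checked narrowing behavior can be sketched in Python. This is illustrative only; the actual class is com.illumon.iris.db.util.NumericCast in Java, and its method names are as listed above.

```python
# Java primitive integral ranges
INT_BOUNDS = {
    "byte": (-2**7, 2**7 - 1),
    "short": (-2**15, 2**15 - 1),
    "int": (-2**31, 2**31 - 1),
    "long": (-2**63, 2**63 - 1),
}

def numeric_cast(value, target):
    """Convert an arbitrary integral value to the named Java primitive type,
    raising on overflow (mirrors the described NumericCast behavior)."""
    lo, hi = INT_BOUNDS[target]
    v = int(value)
    if not lo <= v <= hi:
        raise OverflowError("%d does not fit in a Java %s" % (v, target))
    return v
```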
Numba formulas (those that are surrounded by the nb function) have the same narrowing behavior as in prior versions of 1.20221001.
Changed to use DHC Fast CSV parser for readCsv
TableTools.readCsv calls now use the new DHC high-performance CSV parser, which uses a column-oriented approach to parse CSV files.
The change to the DHC parser includes the following visible enhancements:
- Any column that is only populated with integers surrounded by whitespace will be identified as an integer column. The previous parser would identify the column as a double.
- Only 7-bit ASCII characters are supported as valid delimiters. This means characters such as € (the euro symbol) are not valid. In these cases, the following error will be thrown: delimiter is set to '€' but is required to be 7-bit ASCII.
- Columns populated wholly with only single characters will be identified as Character columns instead of String columns.
- Additional date time formats are automatically converted to DBDateTime columns. Previously, these formats were imported as String columns. All other date time behavior remains unchanged.
Format | Displayed Value in 1.20211129 | Data Type in 1.20211129 | Displayed Value in 1.20221001 | Data Type in 1.20221001 |
---|---|---|---|---|
DateTimeISO_UTC_1 | 2017-08-30 11:59:59.000Z | java.lang.String | 2017-08-30T07:59:59.000000000 NY | com.illumon.iris.db.tables.utils.DBDateTime |
DateTimeISO_UTC_2 | 2017-08-30T11:59:59.000Z | java.lang.String | 2017-08-30T07:59:59.000000000 NY | com.illumon.iris.db.tables.utils.DBDateTime |
DateTimeISO_MillisOffset_2 | 2017-08-30T11:59:59.000-04 | java.lang.String | 2017-08-30T11:59:59.000000000 NY | com.illumon.iris.db.tables.utils.DBDateTime |
DateTimeISO_MicrosOffset_2 | 2017-08-30T11:59:59.000000-04 | java.lang.String | 2017-08-30T11:59:59.000000000 NY | com.illumon.iris.db.tables.utils.DBDateTime |
To use the legacy CSV parser, set the configuration property com.illumon.iris.db.tables.utils.CsvHelpers.useLegacyCsv to true.
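The type-inference rules listed above can be sketched in Python. This is a deliberately simplified illustration of the documented rules, not the actual DHC parser, which is column-oriented and handles many more cases.

```python
def _parses_as(value, typ):
    """Return True if the value parses cleanly as the given numeric type."""
    try:
        typ(value)
        return True
    except ValueError:
        return False

def infer_column_type(values):
    """Simplified sketch of the CSV column-type inference rules above."""
    stripped = [v.strip() for v in values]
    # Integer-only columns (possibly whitespace-padded) become int, not double.
    if all(_parses_as(v, int) for v in stripped):
        return "int"
    # Columns made up entirely of single characters become char, not String.
    if all(len(v) == 1 for v in stripped):
        return "char"
    if all(_parses_as(v, float) for v in stripped):
        return "double"
    return "string"
```

For example, a column of `" 1 "`, `"2"` now infers as int, where the previous parser would have produced double.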
Support Barrage subscriptions between DnD workers
DnD workers can now subscribe to tables in other DnD workers using Barrage.
This can be done using ResolveTools
and a new URI scheme pq://&lt;Query Identifier&gt;/scope/&lt;Table name&gt;[?snapshot=true].
The Query Identifier
can be either the query name or the query serial. The Table Name
is the name of the table in the server query's scope. The optional snapshot=true
parameter indicates that a snapshot should be fetched instead of a live subscription.
```groovy
import io.deephaven.uri.ResolveTools

TickingTable = ResolveTools.resolve("pq://CommunityQuery/scope/TickingTable?snapshot=true")
```

```python
from deephaven_enterprise.uri import resolve

TickingTable = resolve("pq://CommunityQuery/scope/TickingTable?snapshot=true")
```
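The shape of the pq:// URI can be illustrated with standard URL parsing. This is a sketch using Python's urllib, not Deephaven's actual resolver:

```python
from urllib.parse import urlparse, parse_qs

def parse_pq_uri(uri):
    """Pick apart pq://<Query Identifier>/scope/<Table name>[?snapshot=true]."""
    parsed = urlparse(uri)
    assert parsed.scheme == "pq"
    query_id = parsed.netloc                      # query name or query serial
    _, _scope, table = parsed.path.split("/")     # path is "/scope/<Table name>"
    snapshot = parse_qs(parsed.query).get("snapshot", ["false"])[0] == "true"
    return query_id, table, snapshot
```

With the example above, `parse_pq_uri("pq://CommunityQuery/scope/TickingTable?snapshot=true")` yields the query identifier `CommunityQuery`, the table `TickingTable`, and a snapshot (rather than live subscription) request.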
Improvements to command line scripts
Deephaven provides many maintenance and utility scripts in /usr/illumon/latest/bin. This release changes many of these tools to handle configuration files, Java path and classpath settings, error handling, and logging more consistently.
Classpaths now include customer plugins and custom jars. This is important for features that can include custom data types, including table definitions and schemas.
For the tools included in this update, there is now a consistent way to handle invalid configuration and other unforeseen errors.
Override the configuration (properties) file
If the default properties file is invalid for some reason, override it by setting DHCONFIG_ROOTFILE
. For example:
```shell
DHCONFIG_ROOTFILE=iris-defaults.prop /usr/illumon/latest/bin/dhconfig properties list
```
Add custom JVM arguments
Pass additional arguments to the Java program invoked by these scripts by setting EXTRA_JAVA_ARGS. For example:
```shell
EXTRA_JAVA_ARGS="-DConfiguration.rootFile=foo.prop" /usr/illumon/latest/bin/dhconfig properties list
```
Scripts included in this update
The following scripts have been updated:
- crcat
- data_routing
- defcat
- delete_schema
- dhconfig
- dhctl
- export_schema
- iriscat
- iristail
- migrate_acls
- migrate_controller_cache
- validate_routing_yml
Code Studio Engine Display Order
When selecting the engine (Enterprise or Community) in a Code Studio, existing Deephaven installations show the Enterprise engine first for backwards compatibility. New installations show the Community engine first. This is controlled by a display order property defined for each worker kind. Lower values are displayed first by the Code Studio drop down.
By default, the Enterprise engine has a display order of 100 and the Community engine has a display order of 200. For a new installation, the iris-environment.prop file sets the priority of the Community engine to 50 as follows:

```
WorkerKind.DeephavenCommunity.displayOrder=50
```

You may adjust these display order properties for any worker kind as desired.
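The ordering rule amounts to an ascending sort on displayOrder, which can be illustrated as follows (worker-kind names and values here are for illustration only):

```python
# Hypothetical worker kinds mapped to display order values; lower values
# appear first in the Code Studio engine drop-down.
display_order = {
    "DeephavenEnterprise": 100,  # default Enterprise order
    "DeephavenCommunity": 50,    # overridden for new installations
}

# The drop-down sorts worker kinds ascending by display order.
engines = sorted(display_order, key=display_order.get)
```

With the override above, the Community engine sorts ahead of Enterprise.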
etcd ownership
In previous releases, if the Deephaven installer installed etcd, the etcd
and etcdctl
executables in /usr/bin
were created with the ownership of the user who ran the installation; they should instead be owned by root. Check the current ownership:

```shell
ls -l /usr/bin/etcd*
```

If the ownership isn't root, correct it:

```shell
sudo chown root:root /usr/bin/etcd*
```
ACLs for DbInternal Index and Community tables
Preexisting installs must manually add new ACLs for the new DbInternal tables.
First, create a text file (e.g. /tmp/new-acls.txt) with the following contents:
```
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("EffectiveUser"), new UsernameFilterGenerator("AuthenticatedUser"))' -group allusers -namespace DbInternal -table ProcessEventLogIndex -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("EffectiveUser"), new UsernameFilterGenerator("AuthenticatedUser"))' -group allusers -namespace DbInternal -table ProcessTelemetryIndex -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))' -group allusers -namespace DbInternal -table UpdatePerformanceLogIndex -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))' -group allusers -namespace DbInternal -table QueryOperationPerformanceLogIndex -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))' -group allusers -namespace DbInternal -table QueryPerformanceLogIndex -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("EffectiveUser"), new UsernameFilterGenerator("AuthenticatedUser"))' -group allusers -namespace DbInternal -table ProcessInfoLogCommunity -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("EffectiveUser"), new UsernameFilterGenerator("AuthenticatedUser"))' -group allusers -namespace DbInternal -table ProcessMetricsLogCommunity -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("EffectiveUser"), new UsernameFilterGenerator("AuthenticatedUser"))' -group allusers -namespace DbInternal -table ServerStateLogCommunity -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))' -group allusers -namespace DbInternal -table UpdatePerformanceLogCommunity -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))' -group allusers -namespace DbInternal -table QueryOperationPerformanceLogCommunity -overwrite_existing
-add_acl 'new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser"))' -group allusers -namespace DbInternal -table QueryPerformanceLogCommunity -overwrite_existing
exit
```
Then, run the following to add the new ACLs into the system:
```shell
sudo -u irisadmin /usr/illumon/latest/bin/iris iris_db_user_mod --file /tmp/new-acls.txt
```
Alternatively, the ACLs can be added manually one by one in the Swing ACL Editor:
| Group | Namespace | Table | ACL |
| --- | --- | --- | --- |
| allusers | DbInternal | ProcessEventLogIndex | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("EffectiveUser"), new UsernameFilterGenerator("AuthenticatedUser")) |
| allusers | DbInternal | ProcessTelemetryIndex | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("EffectiveUser"), new UsernameFilterGenerator("AuthenticatedUser")) |
| allusers | DbInternal | UpdatePerformanceLogIndex | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser")) |
| allusers | DbInternal | QueryOperationPerformanceLogIndex | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser")) |
| allusers | DbInternal | QueryPerformanceLogIndex | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser")) |
| allusers | DbInternal | ProcessInfoLogCommunity | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("EffectiveUser"), new UsernameFilterGenerator("AuthenticatedUser")) |
| allusers | DbInternal | ProcessMetricsLogCommunity | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("EffectiveUser"), new UsernameFilterGenerator("AuthenticatedUser")) |
| allusers | DbInternal | ServerStateLogCommunity | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("EffectiveUser"), new UsernameFilterGenerator("AuthenticatedUser")) |
| allusers | DbInternal | UpdatePerformanceLogCommunity | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser")) |
| allusers | DbInternal | QueryOperationPerformanceLogCommunity | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser")) |
| allusers | DbInternal | QueryPerformanceLogCommunity | new DisjunctiveFilterGenerator(new UsernameFilterGenerator("PrimaryEffectiveUser"), new UsernameFilterGenerator("PrimaryAuthenticatedUser")) |
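The -add_acl lines differ only in table name and in whether the "Primary" user columns are used. For illustration, a small helper (hypothetical, not part of Deephaven's tooling) could generate them:

```python
def acl_line(table, primary=False):
    """Build one -add_acl input line for iris_db_user_mod --file.

    Tables whose user columns are PrimaryEffectiveUser/PrimaryAuthenticatedUser
    (the performance-log index and Community tables) take primary=True.
    """
    prefix = "Primary" if primary else ""
    generator = (
        f'new DisjunctiveFilterGenerator('
        f'new UsernameFilterGenerator("{prefix}EffectiveUser"), '
        f'new UsernameFilterGenerator("{prefix}AuthenticatedUser"))'
    )
    return (
        f"-add_acl '{generator}' -group allusers -namespace DbInternal "
        f"-table {table} -overwrite_existing"
    )
```

For example, `acl_line("ProcessEventLogIndex")` reproduces the first line of the ACL file above.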
DnD Now supports Edge ACLs
Query writers can now specify ACLs on derived tables. These ACLs will be applied when tables or plots are fetched by a client based upon the client's groups.
Edge ACLs are created using the EdgeAclProvider
class in the io.deephaven.enterprise.acl
package. Additionally, the
io.deephaven.enterprise.acl.AclFilterGenerator
interface contains some helpful factory methods for commonly used ACL
types.
The following example assumes that a table TickingTable has already been created. Edge ACLs are created using a builder that provides a few simple methods for building up ACL sets. Once build() is called, you have an ACL object that can be applied to one or more tables using the applyTo() method. Note that you must reassign the result of the application to the variable in the query scope, since Tables are immutable.
```groovy
import io.deephaven.enterprise.acl.EdgeAclProvider
import io.deephaven.enterprise.acl.AclFilterGenerator

def ACL = EdgeAclProvider.builder()
    .rowAcl("NYSE", AclFilterGenerator.where("Exchange in `NYSE`"))
    .columnAcl("LimitPrice", "*", AclFilterGenerator.fullAccess())
    .columnAcl("LimitPrice", ["Price", "TradeVal"], AclFilterGenerator.group("USym"))
    .build()

TickingTable = ACL.applyTo(TickingTable)
```
```python
from deephaven_enterprise.edge_acl import EdgeAclProvider
import deephaven_enterprise.acl_generator as acl_generator

ACL = EdgeAclProvider.builder() \
    .row_acl("NYSE", acl_generator.where("Exchange in `NYSE`")) \
    .column_acl("LimitPrice", "*", acl_generator.full_access()) \
    .column_acl("LimitPrice", ["Price", "TradeVal"], acl_generator.group("USym")) \
    .build()

TickingTable = ACL.apply_to(TickingTable)
```
See the DnD documentation for details on the AclFilterGenerator
and EdgeAclProvider
interfaces.
Remote R Groovy Sessions
The idb.init
method now has an optional remote
parameter. When set to TRUE, Groovy script code is executed not locally but in a remote Groovy session, as in the Swing console or Web Code Studio. This eliminates a class of serialization problems that could otherwise occur when a local Groovy session serializes classes to the remote server. To use the old local Groovy session, pass the remote parameter as follows:
```r
idb.init(devroot=devroot, workspace, propfile, keyfile=keyfile, jvmArgs=jvmLocalArgs, remote=FALSE)
```
Additionally, you may now call idb.close()
to terminate the remote worker and
release the associated server resources.