
Dependency issues 1.0.9 -> 1.0.10 or 1.0.11 #224

Open

CyloNox opened this issue Sep 14, 2021 · 0 comments

CyloNox commented Sep 14, 2021

Describe the bug
The build completes with a few warnings that appear minor, but when I process some files it throws the following error:

org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 3.0 failed 4 times, most recent failure: Lost task 0.3 in stage 3.0 (TID 38, 10.158.153.11, executor 0): java.lang.NoSuchMethodError: com.github.benmanes.caffeine.cache.Caffeine.expireAfterWrite(Ljava/time/Duration;)Lcom/github/benmanes/caffeine/cache/Caffeine;
	at com.ibm.fhir.cache.util.CacheSupport.createCache(CacheSupport.java:149)
	at com.ibm.fhir.cache.util.CacheSupport.createCache(CacheSupport.java:120)
	at com.ibm.fhir.cache.util.CacheSupport.createCacheAsMap(CacheSupport.java:202)
	at com.ibm.fhir.cache.CachingProxy$CachingInvocationHandler.createCacheAsMap(CachingProxy.java:170)
	at com.ibm.fhir.cache.CachingProxy$CachingInvocationHandler.lambda$invoke$1(CachingProxy.java:126)
	at java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1660)
	at com.ibm.fhir.cache.CachingProxy$CachingInvocationHandler.invoke(CachingProxy.java:126)
	at com.sun.proxy.$Proxy31.getConcept(Unknown Source)
	at com.ibm.fhir.term.service.FHIRTermService.lookup(FHIRTermService.java:423)
	at com.ibm.fhir.term.service.FHIRTermService.lookup(FHIRTermService.java:488)
	at com.ibm.fhir.term.service.FHIRTermService.lookup(FHIRTermService.java:462)
	at io.github.linuxforhealth.core.terminology.TerminologyLookup.lookup(TerminologyLookup.java:37)
	at io.github.linuxforhealth.hl7.data.SimpleDataValueResolver.lambda$static$26(SimpleDataValueResolver.java:346)
	at io.github.linuxforhealth.hl7.expression.variable.DataTypeVariable.extractVariableValue(DataTypeVariable.java:60)
	at io.github.linuxforhealth.hl7.expression.AbstractExpression.resolveVariables(AbstractExpression.java:285)
	at io.github.linuxforhealth.hl7.expression.AbstractExpression.generateValue(AbstractExpression.java:265)
	at io.github.linuxforhealth.hl7.expression.AbstractExpression.evaluateValueOfExpression(AbstractExpression.java:180)
	at io.github.linuxforhealth.hl7.expression.AbstractExpression.evaluate(AbstractExpression.java:114)
	at io.github.linuxforhealth.hl7.util.ExpressionUtility.processExpression(ExpressionUtility.java:93)
	at io.github.linuxforhealth.hl7.util.ExpressionUtility.evaluate(ExpressionUtility.java:67)
	at io.github.linuxforhealth.hl7.resource.HL7DataBasedResourceModel.evaluate(HL7DataBasedResourceModel.java:78)
	at io.github.linuxforhealth.hl7.expression.ResourceExpression.evaluateExpression(ResourceExpression.java:61)
	at io.github.linuxforhealth.hl7.expression.AbstractExpression.generateValue(AbstractExpression.java:269)
	at io.github.linuxforhealth.hl7.expression.AbstractExpression.evaluateValueOfExpression(AbstractExpression.java:180)
	at io.github.linuxforhealth.hl7.expression.AbstractExpression.evaluate(AbstractExpression.java:114)
	at io.github.linuxforhealth.hl7.util.ExpressionUtility.processExpression(ExpressionUtility.java:93)
	at io.github.linuxforhealth.hl7.util.ExpressionUtility.evaluate(ExpressionUtility.java:67)
	at io.github.linuxforhealth.hl7.resource.HL7DataBasedResourceModel.evaluate(HL7DataBasedResourceModel.java:78)
	at io.github.linuxforhealth.hl7.message.HL7MessageEngine.generateMultipleResources(HL7MessageEngine.java:294)
	at io.github.linuxforhealth.hl7.message.HL7MessageEngine.generateResources(HL7MessageEngine.java:173)
	at io.github.linuxforhealth.hl7.message.HL7MessageEngine.transform(HL7MessageEngine.java:106)
	at io.github.linuxforhealth.hl7.message.HL7MessageModel.convert(HL7MessageModel.java:75)
	at io.github.linuxforhealth.hl7.HL7ToFHIRConverter.convert(HL7ToFHIRConverter.java:127)
	at io.github.linuxforhealth.hl7.HL7ToFHIRConverter.convert(HL7ToFHIRConverter.java:102)
	at line33378a206fcd438a83deae9994cf541125.$read$$iw$$iw$$iw$$iw$$iw$$iw.$anonfun$hl7ToFhir$1(command-4148118936536137:46)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:733)
	at org.apache.spark.sql.execution.collect.UnsafeRowBatchUtils$.encodeUnsafeRows(UnsafeRowBatchUtils.scala:80)
	at org.apache.spark.sql.execution.collect.Collector.$anonfun$processFunc$1(Collector.scala:187)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.doRunTask(Task.scala:144)
	at org.apache.spark.scheduler.Task.run(Task.scala:117)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$9(Executor.scala:655)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1581)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:658)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

Driver stacktrace:
	at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2519)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2466)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2460)
	at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
	at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2460)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1152)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1152)
	at scala.Option.foreach(Option.scala:407)
	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1152)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2721)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2668)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2656)
	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:938)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2339)
	at org.apache.spark.sql.execution.collect.Collector.runSparkJobs(Collector.scala:298)
	at org.apache.spark.sql.execution.collect.Collector.collect(Collector.scala:308)
	at org.apache.spark.sql.execution.collect.Collector$.collect(Collector.scala:82)
	at org.apache.spark.sql.execution.collect.Collector$.collect(Collector.scala:88)
	at org.apache.spark.sql.execution.ResultCacheManager.getOrComputeResult(ResultCacheManager.scala:508)
	at org.apache.spark.sql.execution.CollectLimitExec.executeCollectResult(limit.scala:58)
	at org.apache.spark.sql.Dataset.collectResult(Dataset.scala:2994)
	at org.apache.spark.sql.Dataset.$anonfun$collectResult$1(Dataset.scala:2985)
	at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3709)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withCustomExecutionEnv$5(SQLExecution.scala:116)
	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:249)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withCustomExecutionEnv$1(SQLExecution.scala:101)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:845)
	at org.apache.spark.sql.execution.SQLExecution$.withCustomExecutionEnv(SQLExecution.scala:77)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:199)
	at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3707)
	at org.apache.spark.sql.Dataset.collectResult(Dataset.scala:2984)
	at com.databricks.backend.daemon.driver.OutputAggregator$.withOutputAggregation0(OutputAggregator.scala:194)
	at com.databricks.backend.daemon.driver.OutputAggregator$.withOutputAggregation(OutputAggregator.scala:57)
	at com.databricks.backend.daemon.driver.ScalaDriverLocal.$anonfun$getResultBufferInternal$3(ScalaDriverLocal.scala:325)
	at scala.Option.map(Option.scala:230)
	at com.databricks.backend.daemon.driver.ScalaDriverLocal.$anonfun$getResultBufferInternal$1(ScalaDriverLocal.scala:305)
	at scala.Option.map(Option.scala:230)
	at com.databricks.backend.daemon.driver.ScalaDriverLocal.getResultBufferInternal(ScalaDriverLocal.scala:269)
	at com.databricks.backend.daemon.driver.DriverLocal.getResultBuffer(DriverLocal.scala:538)
	at com.databricks.backend.daemon.driver.ScalaDriverLocal.repl(ScalaDriverLocal.scala:246)
	at com.databricks.backend.daemon.driver.DriverLocal.$anonfun$execute$10(DriverLocal.scala:431)
	at com.databricks.logging.UsageLogging.$anonfun$withAttributionContext$1(UsageLogging.scala:239)
	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
	at com.databricks.logging.UsageLogging.withAttributionContext(UsageLogging.scala:234)
	at com.databricks.logging.UsageLogging.withAttributionContext$(UsageLogging.scala:231)
	at com.databricks.backend.daemon.driver.DriverLocal.withAttributionContext(DriverLocal.scala:48)
	at com.databricks.logging.UsageLogging.withAttributionTags(UsageLogging.scala:276)
	at com.databricks.logging.UsageLogging.withAttributionTags$(UsageLogging.scala:269)
	at com.databricks.backend.daemon.driver.DriverLocal.withAttributionTags(DriverLocal.scala:48)
	at com.databricks.backend.daemon.driver.DriverLocal.execute(DriverLocal.scala:408)
	at com.databricks.backend.daemon.driver.DriverWrapper.$anonfun$tryExecutingCommand$1(DriverWrapper.scala:653)
	at scala.util.Try$.apply(Try.scala:213)
	at com.databricks.backend.daemon.driver.DriverWrapper.tryExecutingCommand(DriverWrapper.scala:645)
	at com.databricks.backend.daemon.driver.DriverWrapper.getCommandOutputAndError(DriverWrapper.scala:486)
	at com.databricks.backend.daemon.driver.DriverWrapper.executeCommand(DriverWrapper.scala:598)
	at com.databricks.backend.daemon.driver.DriverWrapper.runInnerLoop(DriverWrapper.scala:391)
	at com.databricks.backend.daemon.driver.DriverWrapper.runInner(DriverWrapper.scala:337)
	at com.databricks.backend.daemon.driver.DriverWrapper.run(DriverWrapper.scala:219)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NoSuchMethodError: com.github.benmanes.caffeine.cache.Caffeine.expireAfterWrite(Ljava/time/Duration;)Lcom/github/benmanes/caffeine/cache/Caffeine;
	at com.ibm.fhir.cache.util.CacheSupport.createCache(CacheSupport.java:149)
	at com.ibm.fhir.cache.util.CacheSupport.createCache(CacheSupport.java:120)
	at com.ibm.fhir.cache.util.CacheSupport.createCacheAsMap(CacheSupport.java:202)
	at com.ibm.fhir.cache.CachingProxy$CachingInvocationHandler.createCacheAsMap(CachingProxy.java:170)
	at com.ibm.fhir.cache.CachingProxy$CachingInvocationHandler.lambda$invoke$1(CachingProxy.java:126)
	at java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1660)
	at com.ibm.fhir.cache.CachingProxy$CachingInvocationHandler.invoke(CachingProxy.java:126)
	at com.sun.proxy.$Proxy31.getConcept(Unknown Source)
	at com.ibm.fhir.term.service.FHIRTermService.lookup(FHIRTermService.java:423)
	at com.ibm.fhir.term.service.FHIRTermService.lookup(FHIRTermService.java:488)
	at com.ibm.fhir.term.service.FHIRTermService.lookup(FHIRTermService.java:462)
	at io.github.linuxforhealth.core.terminology.TerminologyLookup.lookup(TerminologyLookup.java:37)
	at io.github.linuxforhealth.hl7.data.SimpleDataValueResolver.lambda$static$26(SimpleDataValueResolver.java:346)
	at io.github.linuxforhealth.hl7.expression.variable.DataTypeVariable.extractVariableValue(DataTypeVariable.java:60)
	at io.github.linuxforhealth.hl7.expression.AbstractExpression.resolveVariables(AbstractExpression.java:285)
	at io.github.linuxforhealth.hl7.expression.AbstractExpression.generateValue(AbstractExpression.java:265)
	at io.github.linuxforhealth.hl7.expression.AbstractExpression.evaluateValueOfExpression(AbstractExpression.java:180)
	at io.github.linuxforhealth.hl7.expression.AbstractExpression.evaluate(AbstractExpression.java:114)
	at io.github.linuxforhealth.hl7.util.ExpressionUtility.processExpression(ExpressionUtility.java:93)
	at io.github.linuxforhealth.hl7.util.ExpressionUtility.evaluate(ExpressionUtility.java:67)
	at io.github.linuxforhealth.hl7.resource.HL7DataBasedResourceModel.evaluate(HL7DataBasedResourceModel.java:78)
	at io.github.linuxforhealth.hl7.expression.ResourceExpression.evaluateExpression(ResourceExpression.java:61)
	at io.github.linuxforhealth.hl7.expression.AbstractExpression.generateValue(AbstractExpression.java:269)
	at io.github.linuxforhealth.hl7.expression.AbstractExpression.evaluateValueOfExpression(AbstractExpression.java:180)
	at io.github.linuxforhealth.hl7.expression.AbstractExpression.evaluate(AbstractExpression.java:114)
	at io.github.linuxforhealth.hl7.util.ExpressionUtility.processExpression(ExpressionUtility.java:93)
	at io.github.linuxforhealth.hl7.util.ExpressionUtility.evaluate(ExpressionUtility.java:67)
	at io.github.linuxforhealth.hl7.resource.HL7DataBasedResourceModel.evaluate(HL7DataBasedResourceModel.java:78)
	at io.github.linuxforhealth.hl7.message.HL7MessageEngine.generateMultipleResources(HL7MessageEngine.java:294)
	at io.github.linuxforhealth.hl7.message.HL7MessageEngine.generateResources(HL7MessageEngine.java:173)
	at io.github.linuxforhealth.hl7.message.HL7MessageEngine.transform(HL7MessageEngine.java:106)
	at io.github.linuxforhealth.hl7.message.HL7MessageModel.convert(HL7MessageModel.java:75)
	at io.github.linuxforhealth.hl7.HL7ToFHIRConverter.convert(HL7ToFHIRConverter.java:127)
	at io.github.linuxforhealth.hl7.HL7ToFHIRConverter.convert(HL7ToFHIRConverter.java:102)
	at line33378a206fcd438a83deae9994cf541125.$read$$iw$$iw$$iw$$iw$$iw$$iw.$anonfun$hl7ToFhir$1(command-4148118936536137:46)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:733)
	at org.apache.spark.sql.execution.collect.UnsafeRowBatchUtils$.encodeUnsafeRows(UnsafeRowBatchUtils.scala:80)
	at org.apache.spark.sql.execution.collect.Collector.$anonfun$processFunc$1(Collector.scala:187)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.doRunTask(Task.scala:144)
	at org.apache.spark.scheduler.Task.run(Task.scala:117)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$9(Executor.scala:655)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1581)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:658)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)

To Reproduce
Attach the jar to a notebook in Databricks; the error is thrown as soon as messages are processed.

Expected behavior
The conversion should run without errors.


Desktop (please complete the following information):

  • OS: Windows
  • Versions: Java 1.8, Scala 2.12.10, sbt 1.4.7

Additional context
It took me a while, but I found the change that triggered this and a way to fix it by overriding the dependencies. The trigger was a change to the build.gradle file, in particular these two dependencies: "fhir-registry" and "fhir-term" going from 4.7.1 to 4.9.0. I ran the evicted command in sbt to trace how many conflicting versions were being pulled in.
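Before pinning versions, it can help to confirm at runtime which class or method is actually missing on the cluster's classpath. The following is a minimal sketch using reflection; it probes a JDK class as a stand-in, and the class/method names for the real check (com.github.benmanes.caffeine.cache.Caffeine, expireAfterWrite, java.time.Duration) are taken from the stack trace above:

```java
import java.lang.reflect.Method;

public class MethodProbe {
    // Returns true if className has a public method with the given name and
    // parameter types; false if the class or the method cannot be resolved,
    // which is the condition that surfaces as NoSuchMethodError at runtime.
    static boolean hasMethod(String className, String methodName, Class<?>... params) {
        try {
            Class<?> cls = Class.forName(className);
            cls.getMethod(methodName, params);
            return true;
        } catch (ClassNotFoundException | NoSuchMethodException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Stand-in probes against a JDK class. In a Databricks notebook you
        // would instead probe "com.github.benmanes.caffeine.cache.Caffeine"
        // with "expireAfterWrite" and java.time.Duration.class to see which
        // Caffeine version actually won on the classpath.
        System.out.println(hasMethod("java.lang.String", "trim"));         // true
        System.out.println(hasMethod("java.lang.String", "noSuchMethod")); // false
    }
}
```

If the probe returns false for the Duration overload, an older Caffeine is shadowing the one fhir-term expects, which matches the dependencyOverrides fix below.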

The fix
In my build.sbt file I include the following to force the older library versions:

dependencyOverrides += "com.github.ben-manes.caffeine" % "caffeine" % "2.7.0"
dependencyOverrides += "com.ibm.fhir" % "fhir-registry" % "4.7.1"
dependencyOverrides += "com.ibm.fhir" % "fhir-term" % "4.7.1"