Spark: error during file transfer

shustersh:
netdiag

Computer Name: SERVER-KIS
DNS Host Name: SERVER-KIS.kis.kz
System info : Windows 2000 Server (Build 3790)
Processor : x86 Family 15 Model 4 Stepping 3, GenuineIntel
List of installed hotfixes :
Q147222

Netcard queries test . . . . . . . : Passed

Per interface results:

Adapter : Подключение по локальной сети

Netcard queries test . . . : Passed

Host Name. . . . . . . . . : SERVER-KIS
IP Address . . . . . . . . : 192.168.79.3
Subnet Mask. . . . . . . . : 255.255.255.0
Default Gateway. . . . . . : 192.168.79.8
Primary WINS Server. . . . : 192.168.79.3
Dns Servers. . . . . . . . : 192.168.79.3

AutoConfiguration results. . . . . . : Passed

Default gateway test . . . : Failed
No gateway reachable for this adapter.

NetBT name test. . . . . . : Passed
No remote names have been found.

WINS service test. . . . . : Passed

Domain membership test . . . . . . : Passed

NetBT transports test. . . . . . . : Passed
List of NetBt transports currently configured:
NetBT_Tcpip_ <8E45C675-BE4A-454D-92E2-E3FF45108475>
1 NetBt transport currently configured.

Autonet address test . . . . . . . : Passed

IP loopback ping test. . . . . . . : Passed

Default gateway test . . . . . . . : Failed

[FATAL] NO GATEWAYS ARE REACHABLE.
You have no connectivity to other network segments.
If you configured the IP protocol manually then
you need to add at least one valid gateway.

NetBT name test. . . . . . . . . . : Passed

Winsock test . . . . . . . . . . . : Passed

DNS test . . . . . . . . . . . . . : Failed
[FATAL] File config\netlogon.dns contains invalid DNS entries.
[FATAL] No DNS servers have the DNS records for this DC registered.

Redir and Browser test . . . . . . : Passed
List of NetBt transports currently bound to the Redir
NetBT_Tcpip_ <8E45C675-BE4A-454D-92E2-E3FF45108475>
The redir is bound to 1 NetBt transport.

List of NetBt transports currently bound to the browser
NetBT_Tcpip_ <8E45C675-BE4A-454D-92E2-E3FF45108475>
The browser is bound to 1 NetBt transport.

DC discovery test. . . . . . . . . : Passed

DC list test . . . . . . . . . . . : Passed

Trust relationship test. . . . . . : Skipped

Kerberos test. . . . . . . . . . . : Passed

LDAP test. . . . . . . . . . . . . : Passed

Bindings test. . . . . . . . . . . : Passed

WAN configuration test . . . . . . : Skipped
No active remote access connections.

Modem diagnostics test . . . . . . : Passed

IP Security test . . . . . . . . . : Skipped

Note: run «netsh ipsec dynamic show /?» for more detailed information

The command completed successfully

Domain Controller Diagnosis

Performing initial setup:
Done gathering initial info.

Doing initial required tests

Testing server: Default-First-Site-Name\SERVER-KIS
Starting test: Connectivity
. SERVER-KIS passed test Connectivity

Doing primary tests

Testing server: Default-First-Site-Name\SERVER-KIS
Starting test: Replications
. SERVER-KIS passed test Replications
Starting test: NCSecDesc
. SERVER-KIS passed test NCSecDesc
Starting test: NetLogons
. SERVER-KIS passed test NetLogons
Starting test: Advertising
. SERVER-KIS passed test Advertising
Starting test: KnowsOfRoleHolders
. SERVER-KIS passed test KnowsOfRoleHolders
Starting test: RidManager
. SERVER-KIS passed test RidManager
Starting test: MachineAccount
. SERVER-KIS passed test MachineAccount
Starting test: Services
. SERVER-KIS passed test Services
Starting test: ObjectsReplicated
. SERVER-KIS passed test ObjectsReplicated
Starting test: frssysvol
. SERVER-KIS passed test frssysvol
Starting test: frsevent
. SERVER-KIS passed test frsevent
Starting test: kccevent
. SERVER-KIS passed test kccevent
Starting test: systemlog
An Error Event occured. EventID: 0x00000457
Time Generated: 06/01/2007 14:34:58
(Event String could not be retrieved)
An Error Event occured. EventID: 0x00000457
Time Generated: 06/01/2007 14:34:58
(Event String could not be retrieved)
An Error Event occured. EventID: 0x00000457
Time Generated: 06/01/2007 14:34:58
(Event String could not be retrieved)
An Error Event occured. EventID: 0x00000457
Time Generated: 06/01/2007 14:39:10
(Event String could not be retrieved)
An Error Event occured. EventID: 0x00000457
Time Generated: 06/01/2007 14:39:11
(Event String could not be retrieved)
An Error Event occured. EventID: 0x00000457
Time Generated: 06/01/2007 14:39:12
(Event String could not be retrieved)
. SERVER-KIS failed test systemlog
Starting test: VerifyReferences
. SERVER-KIS passed test VerifyReferences

Running partition tests on : DomainDnsZones
Starting test: CrossRefValidation
. DomainDnsZones passed test CrossRefValidation

Starting test: CheckSDRefDom
. DomainDnsZones passed test CheckSDRefDom

Running partition tests on : ForestDnsZones
Starting test: CrossRefValidation
. ForestDnsZones passed test CrossRefValidation

Starting test: CheckSDRefDom
. ForestDnsZones passed test CheckSDRefDom

Running partition tests on : Schema
Starting test: CrossRefValidation
. Schema passed test CrossRefValidation
Starting test: CheckSDRefDom
. Schema passed test CheckSDRefDom

Running partition tests on : Configuration
Starting test: CrossRefValidation
. Configuration passed test CrossRefValidation
Starting test: CheckSDRefDom
. Configuration passed test CheckSDRefDom

Running partition tests on : kis
Starting test: CrossRefValidation
. kis passed test CrossRefValidation
Starting test: CheckSDRefDom
. kis passed test CheckSDRefDom

Running enterprise tests on : kis.kz
Starting test: Intersite
. kis.kz passed test Intersite
Starting test: FsmoCheck
. kis.kz passed test FsmoCheck

Added:
It has always worked this way, and there have never been any DNS problems. nslookup works fine, and there are no errors in the DNS log either. Everything registers in the domain just fine as well; no issues at all.

I had a similar problem to this. What version of Openfire are you on? Also, when the file is being transferred, is the file also open?

If you have access to Openfire, I would check the logs and see what error is being returned. On ours, I started to think the problem was with the file: I renamed it, stored it somewhere else, and away it went. The problem could have been a corrupted temp file created while the file was open.

Also, on Windows 7 and above you need to make sure that the downloads folder location in the Settings of the receiving user points to an area they have write access to. When I first installed Spark on a Windows 7 machine, I found that the downloads folder was sitting under the admin profile instead of the user profile.

Hope this is of some help.



#apache-spark #apache-spark-sql #jupyter-notebook

Question:

I am working in EMR, pulling data from the Glue Catalog.

EMR Studio

When I try to pass that data on and read it through Spark SQL, it throws the following error:

Error

 Caused by: org.apache.spark.SparkUpgradeException: 
You may get a different result due to the upgrading of Spark 3.0: reading dates before 1582-10-15 or timestamps 
before 1900-01-01T00:00:00Z from Parquet files can be ambiguous, as the files may be written by Spark 2.x or legacy versions of Hive, which uses a legacy hybrid calendar that is different from Spark 3.0 's Proleptic Gregorian calendar. 
See more details in SPARK-31404. You can set spark.sql.legacy.parquet.datetimeRebaseModeInRead to 'LEGACY' to rebase the datetime values w.r.t. the calendar difference during reading. Or set spark.sql.legacy.parquet.datetimeRebaseModeInRead to 'CORRECTED' to read the datetime values as it is.
    at org.apache.spark.sql.execution.datasources.DataSourceUtils$.newRebaseExceptionInRead(DataSourceUtils.scala:159)
    at org.apache.spark.sql.execution.datasources.DataSourceUtils$.$anonfun$creteTimestampRebaseFuncInRead$1(DataSourceUtils.scala:209)
    at org.apache.spark.sql.execution.datasources.parquet.ParquetRowConverter$anon$4.addLong(ParquetRowConverter.scala:330)
    at org.apache.parquet.column.impl.ColumnReaderImpl$2$4.writeValue(ColumnReaderImpl.java:268)
    at org.apache.parquet.column.impl.ColumnReaderImpl.writeCurrentValueToConverter(ColumnReaderImpl.java:367)
    at org.apache.parquet.io.RecordReaderImplementation.read(RecordReaderImplementation.java:406)
    at org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:226)
    ... 21 more
 

I tried changing the following Spark settings, but with no success:

spark.conf.set("spark.sql.legacy.parquet.datetimeRebaseModeInRead", "CORRECTED") and spark.conf.set("spark.sql.legacy.parquet.datetimeRebaseModeInRead", "LEGACY")
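For reference, here is a minimal sketch of setting the rebase mode on the active session before the read, written in Java to match the other snippets on this page; the database and table names are made up. The config key and its CORRECTED/LEGACY values come straight from the error message above. When the query is issued from a %sql cell, the same setting can also be applied as a SQL SET statement so it takes effect in that session:

import org.apache.spark.sql.SparkSession;

public class RebaseModeSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("rebase-mode-sketch")
                .getOrCreate();

        // "CORRECTED" reads the stored datetime values as-is (proleptic Gregorian
        // calendar); "LEGACY" rebases values written by Spark 2.x / legacy Hive.
        spark.conf().set("spark.sql.legacy.parquet.datetimeRebaseModeInRead", "CORRECTED");

        // Equivalent setting expressed as SQL, which is what a %sql cell would need
        // to run first if it does not share the session configured above.
        spark.sql("SET spark.sql.legacy.parquet.datetimeRebaseModeInRead=CORRECTED");

        // Hypothetical Glue Catalog table.
        spark.sql("SELECT * FROM my_glue_db.my_table").show();
    }
}

If the setting still has no effect from %sql, the notebook's SQL magic may be running against a different Spark session than the one where spark.conf.set was called.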

I also ran a select against a view created with the following code, and it worked without problems:

(screenshot of the view-creation code; the image was not included in the original post)

So this makes me think the problem is that I am using %sql.

Why is this happening? Am I doing something wrong?

Comments:

1. Was a solution to this problem ever found?

2. On our side the code is correct, and we ran tests together with AWS support over video calls. I have opened a ticket with Amazon support; it is still under review.

Spark Streaming in cluster mode throws a FileNotFoundException with a Linux file system (GFS, a shared file system mounted on all nodes), but works fine with HDFS as input.

Data is actually available and accessible on this path from all the worker nodes.

JavaPairInputDStream<Text, Text> myDStream =
    jssc.fileStream(path, Text.class, Text.class, customInputFormat.class, new Function<Path, Boolean>() {
      @Override
      public Boolean call(Path v1) throws Exception {
        return Boolean.TRUE;
      }
    }, false);

error message:

14/06/03 21:33:40 WARN FileInputDStream: Error finding new files
java.io.FileNotFoundException: File /data/spark/input does not exist.
        at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:697)
        at org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:105)
        at org.apache.hadoop.hdfs.DistributedFileSystem$15.doCall(DistributedFileSystem.java:755)
        at org.apache.hadoop.hdfs.DistributedFileSystem$15.doCall(DistributedFileSystem.java:751)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:751)
        at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1485)
        at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1525)
        at org.apache.spark.streaming.dstream.FileInputDStream.findNewFiles(FileInputDStream.scala:176)
        at org.apache.spark.streaming.dstream.FileInputDStream.compute(FileInputDStream.scala:134)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:300)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:300)
        at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:299)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:287)
        at scala.Option.orElse(Option.scala:257)
        at org.apache.spark.streaming.dstream.DStream.getOrCompute(DStream.scala:284)
        at org.apache.spark.streaming.dstream.MappedDStream.compute(MappedDStream.scala:35)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:300)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:300)
        at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:299)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:287)
        at scala.Option.orElse(Option.scala:257)
        at org.apache.spark.streaming.dstream.DStream.getOrCompute(DStream.scala:284)
        at org.apache.spark.streaming.dstream.FlatMappedDStream.compute(FlatMappedDStream.scala:35)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:300)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:300)
        at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:299)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:287)
        at scala.Option.orElse(Option.scala:257)
        at org.apache.spark.streaming.dstream.DStream.getOrCompute(DStream.scala:284)
        at org.apache.spark.streaming.dstream.FilteredDStream.compute(FilteredDStream.scala:35)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:300)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:300)
        at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:299)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:287)
        at scala.Option.orElse(Option.scala:257)
        at org.apache.spark.streaming.dstream.DStream.getOrCompute(DStream.scala:284)
        at org.apache.spark.streaming.dstream.MappedDStream.compute(MappedDStream.scala:35)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:300)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:300)
        at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:299)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:287)
        at scala.Option.orElse(Option.scala:257)
        at org.apache.spark.streaming.dstream.DStream.getOrCompute(DStream.scala:284)
        at org.apache.spark.streaming.dstream.FlatMappedDStream.compute(FlatMappedDStream.scala:35)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:300)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:300)
        at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:299)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:287)
        at scala.Option.orElse(Option.scala:257)
        at org.apache.spark.streaming.dstream.DStream.getOrCompute(DStream.scala:284)
        at org.apache.spark.streaming.dstream.ForEachDStream.generateJob(ForEachDStream.scala:38)
        at org.apache.spark.streaming.DStreamGraph$$anonfun$1.apply(DStreamGraph.scala:116)
        at org.apache.spark.streaming.DStreamGraph$$anonfun$1.apply(DStreamGraph.scala:116)
        at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)
        at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:251)
        at scala.collection.AbstractTraversable.flatMap(Traversable.scala:105)
        at org.apache.spark.streaming.DStreamGraph.generateJobs(DStreamGraph.scala:116)
        at org.apache.spark.streaming.scheduler.JobGenerator$$anonfun$2.apply(JobGenerator.scala:243)
        at org.apache.spark.streaming.scheduler.JobGenerator$$anonfun$2.apply(JobGenerator.scala:241)
        at scala.util.Try$.apply(Try.scala:161)
        at org.apache.spark.streaming.scheduler.JobGenerator.generateJobs(JobGenerator.scala:241)
        at org.apache.spark.streaming.scheduler.JobGenerator.org$apache$spark$streaming$scheduler$JobGenerator$$processEvent(JobGenerator.scala:177)
        at org.apache.spark.streaming.scheduler.JobGenerator$$anonfun$start$1$$anon$1$$anonfun$receive$1.applyOrElse(JobGenerator.scala:86)
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
        at akka.actor.ActorCell.invoke(ActorCell.scala:456)
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
        at akka.dispatch.Mailbox.run(Mailbox.scala:219)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
14/06/03 21:33:40 INFO FileInputDStream: New files at time 1433347420000 ms:

Note:
Spark shell works with this shared file system.

How to resolve this issue?

asked Jun 3, 2015 at 16:37 by Vijay Innamuri

JavaPairInputDStream<Text, Text> myDStream =
    jssc.fileStream(path, Text.class, Text.class, customInputFormat.class, new Function<Path, Boolean>() {
      @Override
      public Boolean call(Path v1) throws Exception {
        return Boolean.TRUE;
      }
    }, false);

Resolved after the directory path was prefixed with file:///.
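For concreteness, a minimal sketch of the fix, assuming the input directory is the /data/spark/input path from the error message; jssc and customInputFormat are the same objects as in the question, and the rest of the call is unchanged:

// "file:///" makes Hadoop's FileSystem API resolve the path on the local/shared
// POSIX file system instead of the default file system (typically HDFS, taken
// from core-site.xml), which is where the original directory listing was failing.
String path = "file:///data/spark/input";

JavaPairInputDStream<Text, Text> myDStream =
    jssc.fileStream(path, Text.class, Text.class, customInputFormat.class,
        new Function<Path, Boolean>() {
          @Override
          public Boolean call(Path v1) throws Exception {
            return Boolean.TRUE;   // accept every file found in the directory
          }
        }, false);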

answered Jun 4, 2015 at 10:38 by Vijay Innamuri

My guess is that it is probably a permissions problem.

Make sure that when you run the job, you are running as a user with sufficient privileges (on the master node, or on the machine you submit the job from) to SSH to the worker nodes and to read/write/execute on the worker file systems.
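As a quick way to verify that, here is a small stand-alone check one could run on each worker node as the job user; the path is assumed from the error message above, and the class name is made up:

import java.io.File;

public class InputPathCheck {
    public static void main(String[] args) {
        // Path taken from the FileNotFoundException in the question.
        File dir = new File("/data/spark/input");
        System.out.println("exists=" + dir.exists()
                + " readable=" + dir.canRead()
                + " executable=" + dir.canExecute());
    }
}

If any worker prints false here, the GFS mount or its permissions are the problem rather than Spark itself.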

answered Jun 4, 2015 at 3:11 by keypoint


I have Java Spark code that reads certain properties files. These properties are passed in via spark-submit, like this:

spark-submit \
--master yarn \
--deploy-mode cluster \
--files /home/aiman/SalesforceConn.properties,/home/aiman/columnMapping.prop,/home/aiman/sourceTableColumns.prop \
--class com.sfdc.SaleforceReader \
--verbose \
--jars /home/ebdpbuss/aiman/Salesforce/ojdbc-7.jar /home/aiman/spark-salesforce-0.0.1-SNAPSHOT-jar-with-dependencies.jar SalesforceConn.properties columnMapping.prop sourceTableColumns.prop

The code I wrote:

SparkSession spark = SparkSession.builder().master("yarn").config("spark.submit.deployMode","cluster").getOrCreate();
JavaSparkContext jsc = new JavaSparkContext(spark.sparkContext());
Configuration config = jsc.hadoopConfiguration();
FileSystem fs = FileSystem.get(config);

// args[] holds the file names passed as command-line arguments.
String connDetailsFile = args[0];
String mapFile = args[1];
String sourceColumnsFile = args[2];

String connFile = SparkFiles.get(connDetailsFile);
String mappingFile = SparkFiles.get(mapFile);
String srcColsFile = SparkFiles.get(sourceColumnsFile);

Properties prop = loadProperties(fs,connFile);
Properties mappings = loadProperties(fs,mappingFile);
Properties srcColProp = loadProperties(fs,srcColsFile);

The loadProperties() method used above:

private static Properties loadProperties(FileSystem fs, String path)
{
    Properties prop = new Properties();
    FSDataInputStream is = null;
    try{
        is = fs.open(new Path(path));
        prop.load(is);
    } catch(Exception e){
        e.printStackTrace();
        System.exit(1);
    }

    return prop;        
}

And it gives me this exception:

Exception in thread "main" org.apache.spark.SparkException: Application application_1550650871366_125913 finished with failed status
        at org.apache.spark.deploy.yarn.Client.run(Client.scala:1187)
        at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1233)
        at org.apache.spark.deploy.yarn.Client.main(Client.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:782)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
19/03/01 14:34:00 INFO ShutdownHookManager: Shutdown hook called

1 Answer

When you pass file paths with --files, they are stored in a local (temporary) directory on each executor. So if the file names do not change, you can simply use them as shown below, instead of using the full paths passed in the arguments.

String connDetailsFile = "SalesforceConn.properties";
String mapFile = "columnMapping.prop";
String sourceColumnsFile = "sourceTableColumns.prop";

If the file names change each time, you need to drop the path and use just the file name. This is because Spark does not recognize the full string as a path; it treats the whole string as a file name. For example, /home/aiman/SalesforceConn.properties would be treated as a file name, and Spark will throw an exception saying it cannot find a file named /home/aiman/SalesforceConn.properties.

So your code should look something like this:

// Keep only the base file name (the part after the last '/').
String connDetailsFile = args[0].substring(args[0].lastIndexOf('/') + 1);
String mapFile = args[1].substring(args[1].lastIndexOf('/') + 1);
String sourceColumnsFile = args[2].substring(args[2].lastIndexOf('/') + 1);
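Tying that back to the question's code, a minimal sketch of how the stripped names are then used; this simply reuses the SparkFiles.get and loadProperties pieces from the question, and nothing else changes:

// SparkFiles.get() resolves a bare file name to the local copy that --files
// distributed to this container, so it must be given the base name only.
String connFile = SparkFiles.get(connDetailsFile);      // e.g. SalesforceConn.properties
String mappingFile = SparkFiles.get(mapFile);           // e.g. columnMapping.prop
String srcColsFile = SparkFiles.get(sourceColumnsFile); // e.g. sourceTableColumns.prop

Properties prop = loadProperties(fs, connFile);
Properties mappings = loadProperties(fs, mappingFile);
Properties srcColProp = loadProperties(fs, srcColsFile);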


answered by vikram, Mar 2, 2019 at 00:36
