By default, YARN aggregates logs only after an application completes. However, I am
trying to aggregate logs for a Spark Streaming job, which in theory will run
forever. I have set the following properties for log aggregation and restarted
YARN by restarting hadoop-yarn-nodemanager on the core and task nodes and
hadoop-yarn-resourcemanager on the master node of my EMR cluster. I can see my
changes at http://node-ip:8088/conf.
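For reference, the rolling log aggregation settings usually cited for long-running applications look like this in yarn-site.xml. The property names are from the Hadoop YARN documentation; the values below are illustrative examples, not necessarily the exact values I used:

```xml
<!-- yarn-site.xml: example values; property names per Hadoop YARN docs -->
<property>
  <name>yarn.log-aggregation-enable</name>
  <value>true</value>
</property>
<property>
  <!-- With rolling aggregation, the NodeManager uploads logs for RUNNING apps
       on this interval instead of waiting for the app to finish. -->
  <name>yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds</name>
  <value>3600</value>
</property>
```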
All the articles and resources I have found say that once these properties are
set, YARN will start aggregating logs for running jobs. But it is not
working in my case.