Filtering CloudWatch logs for a long time window

Recently I needed to filter CloudWatch logs for a keyword. The time window was long, and since it was a container-based app there were far too many log streams to search through manually in the console.

The AWS CLI came in handy, and the command below did the trick:

aws logs filter-log-events --log-group-name production/ecs-service --log-stream-name-prefix my-service-prod --filter-pattern user111  --start-time 1561997947000 --end-time 1574698747000 --output text >> user1.txt

The time filters are given in milliseconds since the Unix epoch (January 1, 1970). This web site was handy for coming up with those values.
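For the record, the epoch-millisecond values can also be computed on the command line with GNU date (the dates below are examples, not the exact window I used):

```shell
# Convert UTC timestamps to epoch milliseconds with GNU date
# (Linux coreutils; BSD/macOS date uses different flags).
start_ms=$(( $(date -u -d "2019-07-01T00:00:00Z" +%s) * 1000 ))
end_ms=$(( $(date -u -d "2019-11-25T00:00:00Z" +%s) * 1000 ))
echo "$start_ms $end_ms"
```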

After a long time

I just realized that I haven't posted anything since 2014. Seeing that my last post was about BPM/ADF, many things have changed since then: I am now mostly working with AWS services.

I recently worked on a migration project that included containerizing web services that used to be deployed on WebLogic and running them with GlassFish. Goodbye, WebLogic! I am still struggling with an ADF app, though; it is not very suitable to run with ADF Essentials, and I am not sure how to migrate it to AWS yet.

Feeling motivated to start writing about my journey from ADF/BPM to AWS.

Retrieve BPM composite version from ADF

It is a common scenario to use the same ADF app across different versions of a BPM flow. In those cases you may want to control the visibility of the ADF features that support changes introduced in a newer version of the flow.

Here is how to read this information from ADF.

Add the snippet below to your page definition:

<accessorIterator id="scaIterator" MasterBinding="taskIterator" Binds="sca"
                  RangeSize="25" DataControl="Details" BeanClass="Details.scaType"/>
<attributeValues IterBinding="scaIterator" id="compositeVersion">
    <AttrNames>
        <Item Value="compositeVersion"/>
    </AttrNames>
</attributeValues>

And then gate your features as below, assuming the new button becomes visible at version 1.10:
rendered="#{bindings.compositeVersion.inputValue >= 1.10}"
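For instance, the binding could control a button on the page like this (the button id and label here are made up for illustration):

```xml
<!-- Hypothetical button, rendered only when the composite version is 1.10 or higher -->
<af:button id="approveV2Btn" text="Approve (v2)"
           rendered="#{bindings.compositeVersion.inputValue >= 1.10}"/>
```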

How to avoid timeouts with Oracle BPM loops

Say you have a loop in your flow that sends a notification to each of your customers. This might mean thousands of executions, and if things go wrong you may hit the timeout limit and end up with a suspended/halted BPM instance. To avoid this we can use a Timer in the loop, even though we don't functionally need one.

Here is the idea. In my environment the timeout is set to 30 seconds to simulate the issue and the solution. Each service call takes 10 seconds and the loop cardinality is 5, so it should time out after the 3rd call. Let's see. Yep, it faulted: as can be seen in EM, it made the 4th call but never came back.

Now let's add timers to the flow. I am adding a dummy timer that holds the execution for a second. This time we didn't have any issues, and the process executed properly.

Update: I found out that this technique is called "Forced Dehydration" in Oracle terminology; see the Oracle documentation for more info.

How Parallel Is Oracle BPM's Loop?

I found out that when you create a loop in your BPM flow and mark its mode as Parallel, it does not actually run in true parallel execution. Instead it creates all the loop instances first and runs the 1st activity of each of them, then the 2nd activity, and so forth. In my test case I pass the service name and the loop counter to the WS, and the WS just prints the string input. Here is the output:

Service : First Call :1
Service : First Call :2
Service : First Call :3
Service : First Call :4
Service : First Call :5
Service : Second Call :1
Service : Second Call :2
Service : Second Call :3
Service : Second Call :4
Service : Second Call :5

As can be seen clearly, it doesn't execute in parallel. This means that if something goes wrong in one loop instance, the others won't be executed until it is fixed. By the way, if you are wondering, the same applies to parallel branches forked at a gateway as well.

An error occurred while trying to retrieve audit trail for this instance. Please review log files for detailed reasons.

If you face this error while trying to access BPM process instances in EM, there is a setting on the AdminServer that you need to increase. You should see the error below in the admin server logs.

IOException occurred on socket: Socket[,port=8001,localport=43747]
 weblogic.socket.MaxMessageSizeExceededException: Incoming message of size: '10000080' bytes exceeds the configured maximum of: '10000000' bytes for protocol: 't3'.
weblogic.socket.MaxMessageSizeExceededException: Incoming message of size: '10000080' bytes exceeds the configured maximum of: '10000000' bytes for protocol: 't3'
at weblogic.socket.BaseAbstractMuxableSocket.incrementBufferOffset(
at weblogic.rjvm.t3.MuxableSocketT3.incrementBufferOffset(

To resolve this, set the setting below to a higher value:

Maximum Message Size, in the WebLogic admin console, by going to
Servers &…
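As an alternative to clicking through the console, the same limit can be raised with a JVM system property in the server start arguments. This is a sketch; the start-script location varies by domain layout, so verify it for your installation:

```shell
# Sketch: raise the t3 message size limit to 20 MB via server start arguments.
# Typically added to setUserOverrides.sh in the domain's bin directory.
JAVA_OPTIONS="${JAVA_OPTIONS} -Dweblogic.MaxMessageSize=20000000"
export JAVA_OPTIONS
```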

Get list of Data Sources on WebLogic

In case you need to give the user the option to select a data source before executing a query, the snippet below might be helpful. It returns a list of select items containing the data sources under jdbc/:

List<SelectItem> list = new ArrayList<>();
try {
    InitialContext initialContext = new InitialContext();
    NamingEnumeration<NameClassPair> ne = initialContext.list("jdbc");
    while (ne.hasMore()) {
        NameClassPair nc = ne.next();
        list.add(new SelectItem("jdbc/" + nc.getName()));
    }
} catch (NamingException e) {
    // handle or log the lookup failure
}
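The resulting list could then back a selection component on the page; for example (the bean and property names here are made up):

```xml
<!-- Hypothetical usage: let the user pick one of the discovered data sources -->
<af:selectOneChoice label="Data Source" value="#{backingBean.selectedDataSource}">
    <f:selectItems value="#{backingBean.dataSourceItems}" id="si1"/>
</af:selectOneChoice>
```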