
Showing posts from 2013

How to avoid timeouts with Oracle BPM loops

Say you have a loop in your flow that sends a notification to each of your customers. That might mean thousands of executions, and if things go wrong you may hit the transaction timeout limit and end up with a suspended/halted BPM instance. To avoid this, we can use a Timer in the loop even though we don't functionally need one.

Here is the idea. In my environment the timeout is set to 30 seconds to simulate the issue and the solution. Each service call takes 10 seconds and the loop cardinality is 5, so the instance will time out after the third call. Let's see. Yep, it faulted: as can be seen in EM, it made the fourth call but never came back.

Now let's add timers to the flow. I am adding a dummy timer that holds the execution for one second. This time we didn't have any issues and the process executed properly.

Update: I found out that this technique is called "Forced Dehydration" in Oracle terminology.
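In BPMN 2.0 terms, the dummy timer is just an intermediate timer catch event inside the loop with a short ISO-8601 duration. The sketch below uses standard BPMN 2.0 element names to illustrate the idea; it is not the exact markup the Oracle BPM editor generates for you:

```xml
<!-- Sketch of the dummy timer: an intermediate timer catch event
     inside the loop body. PT1S is an ISO-8601 duration of 1 second,
     long enough to force the engine to dehydrate the instance and
     commit the transaction before the timeout is reached. -->
<intermediateCatchEvent id="forcedDehydrationTimer" name="Wait1s">
  <timerEventDefinition>
    <timeDuration>PT1S</timeDuration>
  </timerEventDefinition>
</intermediateCatchEvent>
```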

How Parallel Is Oracle BPM's Loop?

I found out that when you create a loop in your BPM flow and mark its mode as Parallel, it does not actually run with real parallel execution. Instead it creates all the loop instances first and runs the first activity of each of them, then starts running the second activity of each, and so forth. In my test case I pass the service name and the loop counter to the web service, and the web service just prints the string input. Here is the output:

Service : First Call :1
Service : First Call :2
Service : First Call :3
Service : First Call :4
Service : First Call :5
Service : Second Call :1
Service : Second Call :2
Service : Second Call :3
Service : Second Call :4
Service : Second Call :5

As can be seen clearly, it doesn't execute in parallel. This means that if something goes wrong in one loop instance, the others won't be executed until it is fixed. By the way, in case you are wondering, the same applies to parallel paths forked at a gateway as well.
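The observed interleaving is equivalent to the lock-step sketch below (illustrative code only, not Oracle's engine): the outer loop is the activity/step, the inner loop is the instance, so every instance finishes step one before any instance starts step two.

```java
import java.util.ArrayList;
import java.util.List;

public class LoopOrdering {

    // Reproduces the ordering observed in the logs above: a lock-step
    // round-robin over instances, not independent threads.
    static List<String> simulateBpmLoop(int instances) {
        List<String> log = new ArrayList<>();
        String[] steps = {"First Call", "Second Call"};
        for (String step : steps) {                 // outer: activity/step
            for (int i = 1; i <= instances; i++) {  // inner: loop instance
                log.add("Service : " + step + " :" + i);
            }
        }
        return log;
    }

    public static void main(String[] args) {
        // Prints the same ten lines as the web service did in the test.
        simulateBpmLoop(5).forEach(System.out::println);
    }
}
```

True thread-level parallelism would let the instances interleave arbitrarily; the fact that the output is always grouped by step is what gives the lock-step behavior away.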

An error occurred while trying to retrieve audit trail for this instance. Please review log files for detailed reasons.

If you face this error while trying to access BPM process instances in EM, there is a setting on the AdminServer that you need to increase. You should see an error like the one below in the admin server logs:

IOException occurred on socket: Socket[addr=egw-bpm1-mnad-nsc.wfs.com/10.10.224.65,port=8001,localport=43747]
weblogic.socket.MaxMessageSizeExceededException: Incoming message of size: '10000080' bytes exceeds the configured maximum of: '10000000' bytes for protocol: 't3'.
weblogic.socket.MaxMessageSizeExceededException: Incoming message of size: '10000080' bytes exceeds the configured maximum of: '10000000' bytes for protocol: 't3'
at weblogic.socket.BaseAbstractMuxableSocket.incrementBufferOffset(BaseAbstractMuxableSocket.java:230)
at weblogic.rjvm.t3.MuxableSocketT3.incrementBufferOffset(Mux...
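One way to raise the limit (a sketch, assuming the default 10 MB t3 message size; size the value to your largest audit trail) is to pass the `-Dweblogic.MaxMessageSize` JVM argument to the AdminServer:

```shell
# Raise the maximum t3 message size to 20 MB (default is 10 MB).
# Add this to the AdminServer start arguments (console: Server Start
# -> Arguments, or your start script), then restart the server.
JAVA_OPTIONS="${JAVA_OPTIONS} -Dweblogic.MaxMessageSize=20000000"
export JAVA_OPTIONS
```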

Get list of Data Sources on WebLogic

In case you need to give the user the option to select a data source before executing a query, the snippet below might be helpful. It returns a list of select items containing the data sources bound under jdbc/ in JNDI:

import java.util.ArrayList;
import java.util.List;
import javax.faces.model.SelectItem;
import javax.naming.InitialContext;
import javax.naming.NameClassPair;
import javax.naming.NamingEnumeration;
import javax.naming.NamingException;

List<SelectItem> list = new ArrayList<SelectItem>();
try {
    InitialContext initialContext = new InitialContext();
    // Enumerate every JNDI binding under the jdbc/ context.
    NamingEnumeration<NameClassPair> ne = initialContext.list("jdbc");
    while (ne.hasMore()) {
        NameClassPair nc = ne.next();
        list.add(new SelectItem("jdbc/" + nc.getName()));
    }
} catch (NamingException e) {
    // Lookup failed; fall through with an empty list.
    e.printStackTrace();
}

Logging web services request/response messages in WLS

This comes in very handy while debugging an issue with Oracle BPM Suite. Add the following parameters to the server start script, or change the startup parameters through the console:

-Dcom.sun.xml.ws.transport.http.client.HttpTransportPipe.dump=true
-Dcom.sun.xml.internal.ws.transport.http.HttpAdapter.dump=true
-Dcom.sun.xml.ws.transport.http.HttpAdapter.dump=true
-Dcom.sun.xml.internal.ws.transport.http.client.HttpTransportPipe.dump=true
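One way to apply these (a sketch assuming a standard WLS domain layout; adjust the path for your domain) is to append them to JAVA_OPTIONS in setDomainEnv.sh and restart the server; the SOAP request/response messages then show up in the server's standard output log:

```shell
# In $DOMAIN_HOME/bin/setDomainEnv.sh (assumed standard domain layout):
JAVA_OPTIONS="${JAVA_OPTIONS} \
 -Dcom.sun.xml.ws.transport.http.client.HttpTransportPipe.dump=true \
 -Dcom.sun.xml.internal.ws.transport.http.HttpAdapter.dump=true \
 -Dcom.sun.xml.ws.transport.http.HttpAdapter.dump=true \
 -Dcom.sun.xml.internal.ws.transport.http.client.HttpTransportPipe.dump=true"
export JAVA_OPTIONS
```

Remember to remove the flags after debugging; dumping every message is verbose and can leak payload data into the logs.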

Refreshing Data Controls When the BPM Payload is changed

As you all know by now, simple things can get complicated with JDeveloper, and this is one of them. While working with BPM you sometimes need to alter the payload type and add new attributes to it. To make those new attributes visible on the ADF task flow side, you need to refresh the data control definitions. This is a very simple, straightforward task.

The issue comes up when you work in a team where each member uses a different folder structure on their dev box. The project containing the ADF pages and task flows keeps a reference to the payload's schema definition files, and the problem is that this reference uses an absolute path instead of a relative one. So once you move your workspace to a different folder, you can no longer do a "Refresh Data Control". To work around this, open the DataControls.dcx file and update the references to the payload XSD files.
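As an illustration only (the element and attribute names below are hypothetical, not the exact DataControls.dcx schema), the fix amounts to replacing an absolute schema path with one relative to the project:

```xml
<!-- Hypothetical excerpt; element/attribute names are illustrative,
     not the exact DataControls.dcx schema. -->
<!-- Before: absolute path that breaks on other machines -->
<schemaReference location="C:/work/jdev/MyBpmApp/xsd/Payload.xsd"/>
<!-- After: relative path, portable across dev boxes -->
<schemaReference location="../xsd/Payload.xsd"/>
```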