
Posts

Showing posts from August, 2017

Local Testing setting in Debug mode in Siebel Local

To test your changes locally, you need to set up the Debug settings in Siebel Tools so that you can launch the thick client. In Tools, go to View -> Options; a dialog box opens. Click the Debug tab, which looks like the screenshot below. The first time, you have to enter a few paths in this dialog box:

Executable – the path of the client Siebel.exe file, usually under the Client folder of your installation.
CFG file – the path (including file name) of the client CFG; make sure the client local path is specified inside the file.
Browser – the path of the Internet Explorer exe file.
Working Directory – usually the BIN folder within Client; you can also try ENU under BIN if that does not work.

Provide the USERNAME and PASSWORD and select Local as the data source.

PS: Make sure you have compiled your changes into your .srf (local SRF), which is specified in the client CFG mentioned in the debug settings.
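
The two pieces this post refers to (the local SRF and the local data source path) usually live in the client CFG roughly as in the fragment below. The paths here are placeholders and the exact parameter set varies by Siebel version, so treat this only as an illustrative sketch, not a definitive configuration:

[Siebel]
; local SRF compiled from Tools (the file referenced in the PS above)
RepositoryFile = siebel.srf

[Local]
; connect string pointing at the developer client's local database file (placeholder path)
ConnectString = C:\Siebel\Client\local\sse_data.dbf
SqlStyle = Watcom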

EDQ Interview Questions & Answers-1

1. What are the types of external sources from which you can import data into EDQ? EDQ can import from different types of sources such as text files (.txt, .dsv, etc.), Excel files (.xls, .csv) and all major databases such as Oracle, DB2, PostgreSQL, MySQL, Microsoft SQL Server, Sybase, etc.

2. What objects do you create in EDQ to import files or database tables? First create a Data Store pointing to the file or database, then create and run a snapshot to bring the data in as staged data. For a file you can either give the local path or, if it is on a server, give the server credentials and the path of the file.

3. What is Staged Data? Staged data is where you store intermediate or final results within your EDQ space; it is like an EDQ table that holds the processed data from your processes.

4. What is the difference between Staged Data and Reference Data? Staged data is used to store the data being processed or the final data after processing, whereas reference data is the data you look up against (for validation or enrichment) while processing.
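
EDQ itself is configured through the Director UI rather than code, but the import flow (data store -> snapshot -> staged data) is conceptually similar to the short Python sketch below; pandas, SQLAlchemy, the file name and the staging database are assumptions for illustration, not part of EDQ:

import pandas as pd
from sqlalchemy import create_engine

# "Data store": a pointer at the external source (here a pipe-delimited text file)
source_path = "customers.dsv"                         # hypothetical file name
snapshot = pd.read_csv(source_path, sep="|")          # "snapshot": read the source as-is

# "Staged data": persist the intermediate result inside your own workspace
engine = create_engine("sqlite:///edq_workspace.db")  # hypothetical staging database
snapshot.to_sql("stg_customers", engine, if_exists="replace", index=False)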

EDQ Interview Questions & Answers – 2

What is the main purpose of Lookup and Return? Lookup and Return is one of the main processors used in EDQ for data enrichment. This processor takes one or more attributes as input and returns one or more attributes as output, as per the reference data definition.

If you have multiple files/sources to read data from, how are you going to bring all the data together in one stream? First create a snapshot of each file and add a reader processor for each of them, then use the Merge processor to bring all the files together. P.S.: All the files have to be in the same format to be brought together in the Merge processor / you can selectively choose a few columns from each file in the Merge processor.

How will you identify and eliminate duplicates in EDQ? To just identify duplicates, use the Duplicate Check processor, passing one or more attributes on which duplicates need to be identified. To eliminate/merge these duplicates, use the Group and Merge processor.
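
These processors are configured visually in EDQ rather than in code; purely to illustrate the logic, here is a pandas sketch (the file names and the country_code column are hypothetical) of the equivalent of merging several readers into one stream and doing a lookup-and-return against reference data:

import pandas as pd

# One "reader" per source file; all files must share the same layout
files = ["accounts_q1.csv", "accounts_q2.csv", "accounts_q3.csv"]       # hypothetical file names
stream = pd.concat([pd.read_csv(f) for f in files], ignore_index=True)  # bring the readers together

# "Lookup and Return": enrich every record with attributes from reference data
reference = pd.read_csv("country_codes.csv")                        # hypothetical reference data
enriched = stream.merge(reference, on="country_code", how="left")   # returns the mapped attributes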

EDQ Interview Questions & Answers-3

Which processor do you use to exclude duplicate records? First identify the duplicates with the Duplicate Check processor, providing the attributes on which you want to list duplicates. Take only the output records of this processor from the "Non-Duplicated" port, thereby eliminating duplicates from the data stream.

Which processor is used to eliminate duplicates? To eliminate duplicates you can use the Group and Merge processor, which in turn has three sub-processors: Input, Group and Merge. Add the attributes to be carried through the data stream to the Input sub-processor. Add the attribute(s) on which to eliminate duplicates to the Group sub-processor. In the Merge sub-processor, select the relevant merge function; by default it is "Most Common Value". Take the merged output results as the de-duplicated records.

What is the difference between the "Lookup and Return" and "Lookup Check" processors? Lookup and Return does the lookup on the reference data and returns the matching attribute(s), whereas Lookup Check only checks whether a matching entry exists.
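
As a conceptual illustration only (EDQ does this through its drag-and-drop processors), the pandas sketch below shows the difference between flagging duplicates and merging them with a "Most Common Value" style rule; the input file and key attributes are hypothetical:

import pandas as pd

df = pd.read_csv("contacts.csv")                      # hypothetical input
key = ["first_name", "last_name", "email"]            # attributes to match duplicates on

# "Duplicate Check": keep=False flags every member of a duplicate group
dupe_mask = df.duplicated(subset=key, keep=False)
duplicated = df[dupe_mask]          # records on the "Duplicated" port
non_duplicated = df[~dupe_mask]     # records on the "Non-Duplicated" port

# "Group and Merge" with a "Most Common Value" merge function:
# group on the key and keep the modal value of every other attribute
def most_common_value(series):
    modes = series.mode()
    return modes.iloc[0] if not modes.empty else None

merged = df.groupby(key, as_index=False).agg(most_common_value)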

Top Photo Editing Apps

Prisma App
Price: Free
Platform: iOS & Android
Best app for re-creating artistic styles. Transforms an image into graphic artwork using its artificial intelligence. Ideal for creating fancy art out of an image to share on social media. The new version lets users crop and rotate images instead of applying the filter to the complete image.

Autodesk Pixlr App
Price: Free
Platform: iOS & Android
User-friendly, rich interface. It has lots of presets and a huge range of controls. Editing options great for social media posts. Features such as radial or linear blur adjustments, double exposure, a red-eye fix and spot healing. Good frame options, stickers and a text tool make Pixlr worth using.

Adobe Lightroom Mobile App
Price: Free
Platform: iOS & Android
Features: all the editing tools available in the full version. Images can be rated and flagged. Automated features with the tap of a finger. Edit Raw files from an iPhone. A few major enhancements are available.

EDQ export issue from Result window

Often we export the data in the Results window of EDQ, but after exporting we notice that the exported file does not contain all the records we intended to export. The export usually gets limited to 1,000 records, because the local EDQ preferences default to exporting 1,000 records. Change this limit to the number you need, but not more than 30,000: exporting beyond 30,000 records from the Results window may cause a Java error and crash the application. PS: The client computer only stores user preferences for the presentation of the client applications; all other information is stored on the EDQ server.

Upper processor mixing up values in the Result Window

It has been noticed many times that some processors in EDQ mix up values in the results browser, primarily the Upper processor. For example, if you have Email = John.smith@xyz.com and you pass it through the Upper processor, the result may show Email.upper = david.cooper@abc.com, a totally different value taken from another record. You have to be careful with some processors: sometimes the results browser shows wrong values even though the final staged data contains no wrong values at all. It is always better to check the results of each processor and validate them. Oracle says it is a memory management issue: appropriate memory has to be allocated during installation, otherwise Merge processors will cause such mix-ups. Though that is a DBA's job, the developer has to watch out for such issues so that they do not reach the staged data being written at the end. As an immediate workaround, you can re-run the process and see if the wrong values persist.